WO2016176494A1 - Systems and methods for detecting and initiating activities - Google Patents

Systems and methods for detecting and initiating activities

Info

Publication number
WO2016176494A1
Authority
WO
WIPO (PCT)
Prior art keywords
communications device
application
mode
user
determining
Prior art date
Application number
PCT/US2016/029860
Other languages
French (fr)
Inventor
Mathew Hudson
Andrew Stadtlander
Jeffrey DEWITTE
William Kirkpatrick
Anthony RADZINS
Jean Carlos CORDERO
Tyler BREESE
Original Assignee
Stadson Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Stadson Technology filed Critical Stadson Technology
Publication of WO2016176494A1 publication Critical patent/WO2016176494A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44505Configuring for program initiating, e.g. using registry, configuration files
    • G06F9/4451User profiles; Roaming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/90Services for handling of emergency or hazardous situations, e.g. earthquake and tsunami warning systems [ETWS]

Definitions

  • Other input can be used for transitioning the communications device to a monitoring mode, setting of a timer in the communications device, recording user actions, and ascertaining whether criteria are met.
  • Such other input can include:
  • Barometric pressure change detection - Through a pressure sensor on a smart device, the ambient pressure is sampled, and if there is a rapid pressure change that is determined to be something other than noise in the sensor data, a qualified trigger event is fired (e.g., cabin pressure loss in flight).
  • Ambient light detection pattern - If there is a change in ambient light that can be determined to be within the parameters of a predetermined pattern, the qualified trigger event will fire (e.g., the user covers and uncovers the ambient light detector in a pattern).
  • Blood glucose detection - In the event the smart device is capable of receiving blood glucose data from a user interface device, or an external interface device paired with the smart device, the system will detect if the blood glucose readings are within an appropriate range. In the event that the readings go outside of the normal range, a qualified trigger event will be fired.
  • a Qualified Trigger Event could be launched by (but not only by) the following:
  • Unplugging a phone from a charger, disconnecting a Bluetooth device, disconnecting from a mobile network, and disconnecting from Wi-Fi.
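
As an illustration only, here is a minimal Python sketch of how a qualified trigger check like the barometric-pressure example above might be evaluated. The window size, noise floor, and trigger threshold are assumptions for the sketch; the specification names the conditions but gives no numeric values.

    from collections import deque

    # Illustrative constants; the specification does not give numeric values.
    WINDOW_SIZE = 10          # recent pressure samples kept for comparison
    NOISE_FLOOR_HPA = 0.3     # assumed sensor jitter to ignore
    TRIGGER_DELTA_HPA = 8.0   # assumed swing treated as a real event

    class PressureTriggerMonitor:
        """Fires a qualified trigger event on a rapid ambient-pressure change
        exceeding the assumed noise floor (e.g., cabin pressure loss in flight)."""

        def __init__(self, on_trigger):
            self.samples = deque(maxlen=WINDOW_SIZE)
            self.on_trigger = on_trigger

        def add_sample(self, pressure_hpa):
            self.samples.append(pressure_hpa)
            if len(self.samples) < WINDOW_SIZE:
                return
            delta = max(self.samples) - min(self.samples)
            # Changes within the noise floor are ignored; rapid large swings fire.
            if delta > max(NOISE_FLOOR_HPA, TRIGGER_DELTA_HPA):
                self.on_trigger(delta)

    # Usage: feed readings from the device's pressure sensor.
    monitor = PressureTriggerMonitor(lambda d: print("qualified trigger:", d))
    for reading in (1013.2, 1013.1, 1013.3, 1013.2, 1013.1,
                    1012.9, 1013.0, 1010.0, 1006.5, 1002.0):
        monitor.add_sample(reading)
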
  • a continuously-monitoring device that is waiting for a command or activation sequence to launch a set of actions or activities. Once a single command or set of commands is interpreted by the listening device, a subsequent action is completed. “Subsequent action” includes but is not limited to:
  • the provided haptic feedback would be used to inform the user of a successful activation.
  • Touch input can also be used for transitioning the communications device to a touch-monitoring mode, setting of a timer in the communications device, recording user actions, and ascertaining whether the criteria are met.
  • These can include:
  • Drawing shapes on screen - The ability to recognize a transition from one area on a screen (touch interpreter) to another area on a screen.
  • a series of transitions can be combined to create a transition pattern.
  • the pattern can further be recognized as a specific shape, such as a circle, which can be compared to a criteria.
  • the specific shape can vary in number of transition points.
  • the distance between one transition point and another can vary and be different between any two sets of transition points.
  • the time taken between transition points can be used to determine a pattern.
  • the time taken for the total series of transition points used to create a pattern can be used to meet a criteria.
  • Tapping a pattern on the screen - A series of single- and/or multi-touch inputs are collected by the system. The system can recognize a pattern of these inputs based on the number of inputs, the time period between each input, and the total time elapsed for the series of inputs (a minimal matching sketch appears after this list).
  • Skin pattern detection with unknown association through fingerprint reader - Prior to an urgent event, the user qualifies a section of skin, such as one of their fingertips, as a section of skin that will trigger an urgent event. If a section of skin is placed upon the fingerprint reader that is not determined to be in the list of indexed skin sections, a qualified trigger event is fired.
  • Touch data - a method of launching based on touch.
  • a series of single-touch inputs are collected by the system.
  • the system can recognize a pattern of these inputs based on number of inputs, the time period between each input, and the total time elapsed for the series of inputs.
  • a series of transitions can be combined to create a transition pattern.
  • the pattern can further be recognized as a specific shape, such as a circle, which can be compared to a criteria.
  • the specific shape can vary in number of transition points.
  • the distance between one transition point and another can vary and be different between any two sets of transition points.
  • the time taken between transition points can be used to determine a pattern.
  • the time taken for the total series of transition points used to create a pattern can be used to meet a criteria.
  • Circular patterns (specific embodiment of shape pattern claim).
  • a touch screen and vibration-capable device waits until a finger(s), object(s) or anything else touches its surface and detects the pattern of the touch. If it matches a predefined pattern, then the device launches a predefined action while the touching is occurring. A vibration pattern is then executed. Otherwise, the device would produce a sound or vibration pattern to alert the user of the invalid touch pattern.
  • a touch screen device is placed into a waiting mode state and waits until a finger(s), object(s) or anything else touches its surface.
  • the device may detect speed, pressure, size, movement, sound, position, force, atmosphere, temperature, electricity, magnetic wave, radio frequency, heart rate, vibration, or shakes.
  • the device analyzes all the information captured while the surface is being touched, and if a pattern is found that matches a predefined criteria, then a predefined action is executed.
  • a touch screen device keeps track of all the touching patterns (speed, pressure, size, movement, sound, position, force, atmosphere, temperature, electricity, magnetic waves, radio frequency, heart rate, vibration, shakes, etc.), analyzes them, and chooses the most used pattern. With all the information collected, it checks to see if there is something completely erratic in a new touch sequence pattern. If so, an alarm is executed on the device, which is placed into an emergency mode. The emergency mode remains active until a new touch sequence pattern is detected, analyzed, and verified. If a correct pattern of touching is not found, and if the device is in the emergency mode, a predefined emergency action is executed (emergency call, emergency text message, the capturing of audio, images, video or any other media type, etc.).
  • a touchscreen device capable of executing vibration patterns and/or making sounds, detects a touching pattern (speed, pressure, size, movement, sound, position, force, atmosphere, temperature, electricity, magnetic waves, radio frequency, heart rate, vibration, shakes, etc.), and translates the pattern into audible speech and/or sounds.
  • a touchscreen device detects a requested command via a touch pattern (speed, pressure, size, movement, sound, position, force, atmosphere, temperature, electricity, magnetic waves, radio frequency, heart rate, vibration, shakes, etc.), and executes a unique, predefined vibration pattern or sound.
  • a touchscreen device detects text, audio, video, objects, sounds, movements, and translates them into a pre-defined vibration pattern or sound.
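
Tying together the tap-pattern entries above, the following is a minimal Python sketch of the stated criteria: the number of inputs, the time period between each input, and the total elapsed time. The example pattern and tolerances are assumptions for illustration, not values from the specification.

    # Each tap is recorded as a timestamp in seconds, as a platform touch
    # interpreter might report it.
    EXPECTED_TAPS = 5               # assumed pattern: five taps
    MAX_GAP_S = 0.6                 # assumed maximum interval between taps
    MAX_TOTAL_S = 2.5               # assumed maximum total elapsed time

    def matches_tap_pattern(tap_times):
        """Return True if a series of tap timestamps satisfies the criteria:
        correct count, every inter-tap gap short enough, total time bounded."""
        if len(tap_times) != EXPECTED_TAPS:
            return False
        gaps = [b - a for a, b in zip(tap_times, tap_times[1:])]
        if any(gap > MAX_GAP_S for gap in gaps):
            return False
        return (tap_times[-1] - tap_times[0]) <= MAX_TOTAL_S

    # Example: five quick taps over ~1.2 seconds match the assumed criteria.
    print(matches_tap_pattern([0.0, 0.3, 0.6, 0.9, 1.2]))  # True
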
  • Additional hardware can include:
  • a system comprising a device listening for a specific input, with the intent of launching an action upon receiving input that meets a predefined set of criteria, and which has been equipped with hardware that functions as a launching mechanism, with the express purpose of delivering said input.
  • the hardware can be externally attached to the device or internally embedded into the device as an actual system component.
  • a smartphone accessory that attaches to a smartphone (using the charge port, headphone jack, etc.) and comprises a button that, when pressed, will launch a specific action on the phone.
  • This mode allows the communications device to initiate actions within separate applications running on the primary communications device or a separate communications device. These actions include, but are not limited to: launching the application, bringing the application to the foreground (in the scenario of a smartphone or computer), performing calculations in the application, and initiating a specific method, activity, or intent within the application. This mode can initiate actions on one or multiple applications simultaneously or sequentially.
  • This mode starts at 102 where the application is transitioned into the application launching mode. From there the application transitions to 104 and determines which other applications are capable of being launched by application launch mode. These applications will have some identifier that the application launch mode will look for. After this, the application will transition to 106, where it will determine if a default launch application exists. The default application is capable of being launched without further user interaction. If a default launch application does exist, the application launch mode then checks if the default application is already active, launched, and/or within the system, as shown in 108. If so, the system transitions to the already active default application, as shown in 110. If the default application is not already active, the application launch mode activates the default application and the system transitions to 112: the default application.
  • If no default launch application exists, the system then moves to 114, where it begins recording user interaction with the device. If a user interaction is detected that corresponds to selecting an available launch application (116), the selected launch application is activated or launched, and the system transitions to the launched application (118).
  • This mode is a separate embodiment of Figure 1, where a timeout is added to launch a specific application if one is not selected within the timeout.
  • This mode starts at 202, where the application is transitioned into application launch mode. From there the application moves to 204 and determines which other applications are capable of being launched by application launch mode. These applications will have some identifier that the application launch mode will look for. After this, the application will move to 206 and determine if a default launch application exists. The default application is capable of being launched without further user interaction. If a default launch application does exist, the application launch mode then checks in 208 if the default application is already active, launched, and/or within the system. If so, the system enters 210 and transitions to the already active default application. If the default application is not already active, the application launch mode activates the default application and the system transitions to the default application in 212.
  • the system checks in 214 if the timeout for application selection has been reached. If the timeout has been reached, the system will move to 216 and activate the first available launch application. If the timeout has not yet been reached and there is no default launch application, the system then moves to 218 and begins recording user interaction with the device. If a user interaction is detected that corresponds to selecting an available launch application (as in 220), the selected launch application is activated or launched, and the system transitions to the launched application at 222. If there is no user interaction that corresponds to selecting an available application and the timeout has been reached, the system will activate the first available launch application at 216.
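
The branching of FIGs. 1 and 2 can be summarized in one Python sketch. The application objects and the interaction-reading callback below are hypothetical placeholders; only the control flow mirrors the figures, with FIG. 2's timeout falling back to the first available launch application.

    import time

    def run_application_launch_mode(launchable_apps, read_user_selection,
                                    default_app=None, timeout_s=10.0):
        """Sketch of FIGs. 1/2: use the default application if one exists,
        otherwise record user interactions until a selection is made or a
        timeout elapses. The app objects and callback are placeholders."""
        if default_app is not None:                      # 106/206: default exists
            if default_app.is_active():                  # 108/208: already active?
                return default_app.bring_to_foreground() # 110/210
            return default_app.launch()                  # 112/212

        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:               # 214: timeout check
            selection = read_user_selection()            # 114/218: record input
            if selection in launchable_apps:             # 116/220: valid choice
                return selection.launch()                # 118/222
        return launchable_apps[0].launch()               # 216: first available
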
  • FIG. 3 shows a flow chart of steps in an exemplary method 300 for transitioning a user's communications device to an application launching mode in accordance with the various embodiments.
  • the method can begin at step 302 and continue on to step 304.
  • the communication device can be transitioned to a listening mode. That is, as described above, the application can be configured to initially operate in the background.
  • a timer can be reset at step 306. The timer can be configured to initiate a countdown of a specified length or to count up a certain amount of time. In the various embodiments, any length of time can be used. For example, the length of time can be as long as 10 minutes in some embodiments.
  • shorter time periods can be used, such as 60, 45, 30, or 10 seconds.
  • the timer can be implemented as a portion of the application or can be implemented as a separate hardware or software module operating on the communications device in conjunction with the application.
  • the method 300 can also begin recording user interactions with the device at step 308. The process to determine whether the communications device needs to be transitioned to an application launching mode can then begin at step 310. At step 310, the user interactions recorded at step 308 can be compared to one or more pre-defined patterns of user interactions.
  • the method 300 can continue recording the user interactions at step 308 and performing the comparison at step 310 until a match occurs. Thereafter, the method can proceed to step 312. If the pre-defined pattern occurs at step 310, it is then determined whether the pre-defined pattern occurred prior to the expiry of the timer at step 312. That is, whether or not the pre-defined pattern occurred within a specific timeframe. If the pre-defined pattern did occur within the specific time frame, the method can then proceed to step 314 to place the device in an application launching mode.
  • FIG. 4 shows a flow chart of steps in an exemplary method 400 for transitioning a user's communications device to an image capturing mode in accordance with the various embodiments.
  • the method can begin at step 402 and continue on to step 404.
  • the communication device can be transitioned to a listening mode. That is, as described above, the application can be configured to initially operate in the background.
  • a timer can be reset at step 406. The timer can be configured to initiate a countdown of a specified length or to count up a certain amount of time.
  • any lengths of time can be used.
  • the length of time can be as long as 10 minutes in some embodiments.
  • shorter time periods can be used, such as 60, 45, 30, or 10 seconds.
  • the timer can be implemented as a portion of the application or can be implemented as a separate hardware or software module operating on the communications device in conjunction with the application.
  • the method 400 can also begin recording user interactions with the device at step 408. The process to determine whether the communications device needs to be transitioned to an image capturing mode can then begin at step 410.
  • the user interactions recorded at step 408 can be compared to one or more pre-defined patterns of user interactions. If the recorded user actions do not match the pre-defined patterns at step 410, the method 400 can continue recording the user interactions at step 408 and performing the comparison at step 410 until a match occurs.
  • the method can proceed to step 412. If the pre-defined pattern occurs at step 410, it is then determined whether the pre-defined pattern occurred prior to the expiry of the timer at step 412. That is, whether or not the pre-defined pattern occurred within a specific timeframe. If the pre-defined pattern did occur within the specific time frame, the method can then proceed to step 414 to place the device in an image capturing mode.
  • FIG. 5 shows a flow chart of steps in an exemplary method 500 for transitioning a user's communications device to a video capturing mode in accordance with the various embodiments.
  • the method can begin at step 502 and continue on to step 504.
  • the communication device can be transitioned to a listening mode. That is, as described above, the application can be configured to initially operate in the background.
  • a timer can be reset at step 506. The timer can be configured to initiate a countdown of a specified length or to count up a certain amount of time.
  • any lengths of time can be used.
  • the length of time can be as long as 10 minutes in some embodiments.
  • shorter time periods can be used, such as 60, 45, 30, or 10 seconds.
  • the timer can be implemented as a portion of the application or can be implemented as a separate hardware or software module operating on the communications device in conjunction with the application.
  • the method 500 can also begin recording user interactions with the device at step 508. The process to determine whether the communications device needs to be transitioned to a video capturing mode can then begin at step 510.
  • the user interactions recorded at step 508 can be compared to one or more pre-defined patterns of user interactions. If the recorded user actions do not match the pre-defined patterns at step 510, the method 500 can continue recording the user interactions at step 508 and performing the comparison at step 510 until a match occurs.
  • the method can proceed to step 512. If the pre-defined pattern occurs at step 510, it is then determined whether the pre-defined pattern occurred prior to the expiry of the timer at step 512. That is, whether or not the pre-defined pattern occurred within a specific timeframe. If the pre-defined pattern did occur within the specific time frame, the method can then proceed to step 514 to place the device in a video capturing mode.
  • FIG. 6 shows a flow chart of steps in an exemplary method 600 for transitioning a user's communications device to an audio capturing mode in accordance with the various embodiments.
  • the method can begin at step 602 and continue on to step 604.
  • the communication device can be transitioned to a listening mode. That is, as described above, the application can be configured to initially operate in the background.
  • a timer can be reset at step 606. The timer can be configured to initiate a countdown of a specified length or to count up a certain amount of time.
  • any lengths of time can be used.
  • the length of time can be as long as 10 minutes in some embodiments.
  • shorter time periods can be used, such as 60, 45, 30, or 10 seconds.
  • the timer can be implemented as a portion of the application or can be implemented as a separate hardware or software module operating on the communications device in conjunction with the application.
  • the method 600 can also begin recording user interactions with the device at step 608. The process to determine whether the communications device needs to be transitioned to an audio capturing mode can then begin at step 610.
  • the user interactions recorded at step 608 can be compared to one or more pre-defined patterns of user interactions. If the recorded user actions do not match the pre-defined patterns at step 610, the method 600 can continue recording the user interactions at step 608 and performing the comparison at step 610 until a match occurs.
  • the method can proceed to step 612. If the pre-defined pattern occurs at step 610, it is then determined whether the pre-defined pattern occurred prior to the expiry of the timer at step 612. That is, whether or not the pre-defined pattern occurred within a specific timeframe. If the pre-defined pattern did occur within the specific time frame, the method can then proceed to step 614 to place the device in an audio capturing mode.
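
Since FIGs. 3 through 6 differ only in the destination mode, one parameterized Python sketch covers all four flows. The pattern matcher and interaction source are assumed callbacks supplied by the surrounding application; the timer handling mirrors steps 306-312 and their 4xx, 5xx, and 6xx counterparts, restarting when a pattern arrives too late.

    import time

    def monitor_for_mode(target_mode, pattern_matches, next_interaction,
                         timer_length_s=30.0):
        """Sketch of FIGs. 3-6: listen in the background, reset a timer,
        record interactions, and enter `target_mode` (application launching,
        image, video, or audio capture) if a pre-defined pattern occurs
        before the timer expires."""
        recorded = []
        timer_start = time.monotonic()                   # step 306: reset timer
        while True:
            recorded.append(next_interaction())          # step 308: record
            if pattern_matches(recorded):                # step 310: compare
                elapsed = time.monotonic() - timer_start
                if elapsed <= timer_length_s:            # step 312: in time?
                    return target_mode                   # step 314: transition
                recorded.clear()                         # pattern too late:
                timer_start = time.monotonic()           # start over

    # e.g. monitor_for_mode("video_capture", my_matcher, my_input_source)
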
  • FIG. 7 embodies a specific example of the use case of launching the communications device into a video or image recording mode.
  • the user's car (702) is hit by another car (704) and the other car drives away from the accident before exchanging insurance information.
  • the user (706) launches their smart phone (708) into video recording mode, as referenced in FIG. 5, and then proceeds to capture a video of the other driver's car driving away, including the car's license plate (710). This video is saved to the user's device and can then be sent to the authorities, the user's insurance company, or any form of social media.
  • This figure can also embody a similar example where the user launches their smart phone into an image recording mode to capture a single image of the license plate, as opposed to the video capturing mode of the previous embodiment.
  • This figure shows a flow chart of steps in an exemplary method where the communications device is capable of sending information to emergency response services in the form of an SMS message.
  • The figure begins at step 802 and transitions to step 804, where the device is put into a monitoring mode. During the monitoring mode of step 804, the device then enters into a communications mode at step 806. In this specific embodiment, there is an option during the communications mode for the user of the device to select to send an SMS message to emergency personnel. This option is exemplified in step 808 when a user selects the text-to-emergency option. This transitions the device into a state to be capable of sending the SMS message to emergency response services. During this state, the device determines the GPS coordinates of the communications device at step 810. The user then has the option to confirm if the information is to be sent to emergency response services at step 812.
  • the device will launch the default SMS messaging app on the device and load a message pre-filled with information from the text-to-emergency state (including the GPS coordinates) at step 814. This pre-filled message will be directed to emergency response services and to an additional contact at the communications device. If the user does not confirm to send the information at step 812, the device is transitioned back to monitoring mode. After the information is sent at step 814, the device is transitioned back to monitoring mode.
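
A minimal sketch of the text-to-emergency steps 810-814: obtain a GPS fix and pre-fill a message for emergency services plus one additional contact. The message format, recipient list, and helper callback are illustrative assumptions; actually dispatching the SMS would go through the platform's default messaging application, as described above.

    def build_emergency_sms(get_gps_fix, user_profile):
        """Sketch of steps 810-814: pre-fill an SMS with the device's GPS
        coordinates and stored user information. `get_gps_fix` and the
        profile fields are assumed placeholders."""
        lat, lon = get_gps_fix()                  # step 810: GPS coordinates
        body = (f"EMERGENCY: {user_profile['name']} needs assistance. "
                f"Location: {lat:.5f}, {lon:.5f}")
        # Step 814: directed to emergency services plus one additional contact.
        recipients = ["911", user_profile["extra_contact"]]
        return recipients, body

    # Example (coordinates and contact are fabricated for illustration):
    recipients, body = build_emergency_sms(lambda: (28.53834, -81.37924),
                                           {"name": "A. User",
                                            "extra_contact": "+15555550123"})
    print(recipients, body)
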
  • Video recording mode allows the communications device to capture a combination of a series of sequential images and audio into a centralized format. This recording can then be saved on the device, sent to another device, and/or shared on any form of social media.
  • an illustrative smartphone 910 includes a processor 912, a display 914, a touchscreen 916 and other physical user interface (UI) elements 918 (e.g., buttons, etc.). Also included are one or more microphones 920, a variety of other sensors 922 (e.g., motion sensors such as 3D accelerometers, gyroscopes and magnetometers), a network adapter 924, a location-determining module 926 (e.g., GPS), and an RF transceiver 928.
  • the depicted phone 910 also includes one or more cameras, such as two cameras 930, 932.
  • Camera 930 is front-facing, i.e., with a lens mounted on the side of the smartphone that also includes the screen.
  • the second camera 932 has a lens on a different side of the smartphone, commonly on the back side.
  • Associated with the second camera 932 can be an LED "torch" 934 that is mounted so as to illuminate the second camera's field of view. Commonly, this torch is positioned on the same side of the smartphone as the lens of the second camera, although this is not essential.
  • Smartphone 910 also includes a memory 936 that stores software and data.
  • the software includes both operating system software and application software.
  • the software may include other audio, video, and image recognition software, as discussed throughout, or any other software for implementing the various embodiments of the present invention.
  • FIG. 10A, and FIG. 10B illustrate exemplary possible system configurations. The more appropriate configuration will be apparent to those of ordinary skill in the art when practicing the present technology. Persons of ordinary skill in the art will also readily appreciate that other system configurations are possible.
  • FIG. 10A illustrates a conventional system bus computing system architecture 1000 wherein the components of the system are in electrical communication with each other using a bus 1005.
  • Exemplary system 1000 includes a processing unit (CPU or processor) 1010 and a system bus 1005 that couples various system components including the system memory 1015, such as read only memory (ROM) 1020 and random access memory (RAM) 1025, to the processor 1010.
  • the system 1000 can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor 1010.
  • the system 1000 can copy data from the memory 1015 and/or the storage device 1030 to the cache 1012 for quick access by the processor 1010. In this way, the cache can provide a performance boost that avoids processor 1010 delays while waiting for data.
  • the processor 1010 can include any general purpose processor and a hardware module or software module, such as module 1 1032, module 2 1034, and module 3 1036 stored in storage device 1030, configured to control the processor 1010 as well as a special-purpose processor where software instructions are incorporated into the actual processor design.
  • the processor 1010 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc.
  • a multi-core processor may be symmetric or asymmetric.
  • an input device 1045 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth.
  • An output device 1035 can also be one or more of a number of output mechanisms known to those of skill in the art.
  • multimodal systems can enable a user to provide multiple types of input to communicate with the computing device 1000.
  • the communications interface 1040 can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
  • Storage device 1030 is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs) 1025, read only memory (ROM) 1020, and hybrids thereof.
  • the storage device 1030 can include software modules 1032, 1034, 1036 for controlling the processor 1010. Other hardware or software modules are contemplated.
  • the storage device 1030 can be connected to the system bus 1005.
  • a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor 1010, bus 1005, display 1035, and so forth, to carry out the function.
  • FIG. 10B illustrates a computer system 1050 having a chipset architecture that can be used in executing the described method and generating and displaying a graphical user interface (GUI).
  • Computer system 1050 is an example of computer hardware, software, and firmware that can be used to implement the disclosed technology.
  • System 1050 can include a processor 1055, representative of any number of physically and/or logically distinct resources capable of executing software, firmware, and hardware configured to perform identified computations.
  • Processor 1055 can communicate with a chipset 1060 that can control input to and output from processor 1055.
  • chipset 1060 outputs information to output 1065, such as a display, and can read and write information to storage device 1070, which can include magnetic media, and solid state media, for example.
  • Chipset 1060 can also read data from and write data to RAM 1075.
  • a bridge 1080 for interfacing with a variety of user interface components 1085 can be provided for interfacing with chipset 1060.
  • Such user interface components 1085 can include a keyboard, a microphone, touch detection and processing circuitry, a pointing device, such as a mouse, and so on.
  • inputs to system 1050 can come from any of a variety of sources, machine generated and/or human generated.
  • Chipset 1060 can also interface with one or more communication interfaces 1090 that can have different physical interfaces.
  • Such communication interfaces can include interfaces for wired and wireless local area networks, for broadband wireless networks, as well as personal area networks.
  • Some applications of the methods for generating, displaying, and using the GUI disclosed herein can include receiving ordered datasets over the physical interface or be generated by the machine itself by processor 1055 analyzing data stored in storage 1070 or 1075. Further, the machine can receive inputs from a user via user interface components 1085 and execute appropriate functions, such as browsing functions by interpreting these inputs using processor 1055.
  • exemplary systems 1000 and 1050 can have more than one processor 1010 or be part of a group or cluster of computing devices networked together to provide greater processing capability.
  • the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.
  • the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like.
  • non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
  • Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media.
  • Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network.
  • the computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
  • Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.
  • the instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.
  • Tangible computer-readable storage media, computer-readable storage devices, or computer-readable memory devices expressly exclude media such as transitory waves, energy, carrier signals, electromagnetic waves, and signals per se.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Emergency Management (AREA)
  • Environmental & Geological Engineering (AREA)
  • Public Health (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)

Abstract

Apparatus and methods for detecting and initiating activities are provided. The method includes transitioning a communications device associated with a user to an application launching mode, determining that a default application exists on the device, in response to determining that the default application exists on the device, accessing the default application, and in response to determining that the default application does not exist on the device, recording, at the communications device, user actions with the communications device to select the default application based on the user interactions.

Description

SYSTEMS AND METHODS FOR DETECTING AND INITIATING ACTIVITIES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of and priority to U.S. Provisional Patent Application No. 62/153,909, entitled "SYSTEM FOR DETECTING AND INITIATING ACTIVITIES" and filed on April 28, 2015, the contents of which are herein incorporated by reference in their entirety as if fully set forth herein.
FIELD OF THE INVENTION
[0002] The present invention relates to apparatus and methods for detecting and initiating activities.
BACKGROUND
[0003] Methods for devices to launch activities of data gathering in response to data received via input are available. See for example, U.S. Patent No. 9,071,957, the contents of which are hereby incorporated by reference in their entirety, as if fully set forth herein. The input can include: audio input, video input, touch input, user input, environmental input, external hardware input, internal hardware input, and any other form of input. This system of detection and initiation of activities can be directly geared towards situations of compromised safety. However, there have been many documented instances of people being in situations of compromised safety where they were unable to reach out for assistance by currently available methods such as making a phone call, manually sending a text message, or initiating a safety alert application on a smartphone. The main obstacle in the currently available alert methods is the fact that they all require manual, prolonged interaction with a mobile device. This interaction usually also requires several time-consuming steps to send out a request for assistance. To add to this obstacle, obtaining supplemental data such as GPS location and personal information of the person requires additional interactions with the mobile device.
SUMMARY
[0004] Embodiments of the invention concern apparatus and methods for detecting and initiating activities. In a first embodiment, there is a method that includes transitioning a communications device associated with a user to an application launching mode, determining that a default application exists on the device, in response to determining that the default application exists on the device, accessing the default application, and in response to determining that the default application does not exist on the device, recording, at the communications device, user actions with the communications device to select the default application based on the user interactions.
[0005] The method can also include, in response to determining that the default application does not exist on the device, performing the steps of determining that a time out has been reached and, in response to determining that a time out has been reached, accessing a first available application on the communications device.
[0006] The method can also include performing, prior to the transitioning to the application launching mode, the steps of placing a communications device in a monitoring mode, detecting that a user interaction with the communications device in the monitoring mode corresponds to a pre-defined pattern, and, in response to detecting that the user interaction with the communications device in the monitoring mode corresponds to a pre-defined pattern, proceeding to the transitioning, determining, accessing, and recording.
[0007] The method can also include determining that a time out has been reached prior to determining that a user interaction with the communications device in the monitoring mode corresponds to a pre-defined pattern and, in response to determining that a time out has been reached prior to determining that a user interaction with the communications device in the monitoring mode corresponds to a pre-defined pattern, repeating the detecting and proceeding.
[0008] The method can also include performing, prior to the transitioning to the application launching mode, the steps of placing a communications device in a monitoring mode, detecting that a user interaction with the communications device in the monitoring mode corresponds to a pre-defined pattern, and, in response to detecting that the user interaction with the communications device in the monitoring mode corresponds to a pre-defined pattern, proceeding to at least one of an audio capture mode, an image capture mode, or a video capture mode.
[0009] In a second embodiment, there is a computer-readable medium having stored thereon a computer program executable by a communications device, the computer program comprising a plurality of instructions for performing any of the methods of the first embodiment.
[0010] In a third embodiment, there is a communications device that includes at least one input device, a processor coupled to the at least one input device, and a computer-readable medium having stored thereon a computer program executable by the processor, the computer program comprising a plurality of instructions for causing the processor to perform any of the methods of the first embodiment.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] FIG. 1 is a flowchart illustrating an application launching mode according to the present invention.
[0012] FIG. 2 is a flowchart illustrating an application launching mode, with time out, according to the present invention.
[0013] FIG. 3 is a flowchart illustrating a method for transitioning to an application launching mode according to the present invention.
[0014] FIG. 4 is a flowchart illustrating an image capture mode according to the present invention.
[0015] FIG. 5 is a flowchart illustrating a video capture mode according to the present invention.
[0016] FIG. 6 is a flowchart illustrating an audio capture mode according to the present invention.
[0017] FIG. 7 is an example scenario of a hit and run event that is useful for describing the present invention.
[0018] FIG. 8 is a flowchart illustrating a method for sending SMS (text) messages according to the present invention.
[0019] FIG. 9 is an exemplary schematic of a smartphone for implementing the present invention.
[0020] FIG. 10A and FIG. 10B illustrate other exemplary possible system configurations for implementing the present invention.
DETAILED DESCRIPTION
[0021] The present invention is described with reference to the attached figures, wherein like numerals are used throughout the figures to designate similar or equivalent elements. The figures are not drawn to scale and they are provided merely to illustrate the instant invention. Several aspects of the invention are described below with reference to example applications for illustration. It should be understood that numerous specific details, relationships, and methods are set forth to provide a full understanding of the invention. One having ordinary skill in the relevant art, however, will readily recognize that the invention can be practiced without one or more of the specific details or with other methods. In other instances, well-known structures or operations are not shown in detail to avoid obscuring the invention. The present invention is not limited by the illustrated ordering of acts or events, as some acts may occur in different orders and/or concurrently with other acts or events. Furthermore, not all illustrated acts or events are required to implement a methodology in accordance with the present invention.
[0022] Our system provides users with a way to start a monitoring session by specific actions including audio input, video input, touch input, user input, environmental input, external hardware input, internal hardware input, and any other form of input. This monitoring session can collect location information and other supplemental data for assistance during a situation of compromised safety without the obstacles faced by the currently available alert methods. With our system, the user has the ability to interact with their mobile device in a unique way which could be faster, more discreet, more efficient, and more intuitive than the currently available methods on mobile devices. Once the system detects the specified interaction, it is capable of automatically initiating an action on the mobile device that could capture supplemental information without the need for further interaction from the user. This discreet detection system utilizes audio, video, touch, user input, environmental input, external hardware input, internal hardware input, and any other form of input to package the data on a connected device. The data can be used for personal use when stored on the device, or it can be automatically sent to pre-designated individuals.
[0023] Audio input can include:
1. Sounds - transitioning the communications device to an audio-monitoring mode, setting of a timer in the communications device, recording user actions, and ascertaining whether criteria have been met.
a. A continuously-monitoring device that interprets audible content and determines the psychological/emotional tonality being emitted by the source.
b. Ability to interpret animal sounds and initiate an action. Launch an action in a listening device or detection system. Spraying bug spray, emitting a high-frequency sound, initiating an action in a mobile device, or initiating an action in a stationary device.
2. Subtle speech variation detection.
a. The ability to detect minute, subtle variations in the speech patterns of someone in distress.
b. The system would obtain an initial baseline for someone's speech pattern. The system would then have the ability to detect differences from that baseline speech pattern and determine specific states of distress based on the level of difference detected.
i. Emotions
ii. Pain
iii. Lying
iv. Physical abnormalities
An individual's struggle with a physical ailment can be detected via a subtle change in the individual's speech pattern.
a. Higher/lower pitch
b. Increased/decreased frequency
c. Erratic speech
d. Slurred speech
3. Psychologically-driven variations in human sounds.
a. Speech
b. Breathing
c. Release of breath
d. Deep sighs
4. Audible items that are received from bodily reactions to designate or recommend specific actions.
a. Choke detection, including but not limited to the following:
i. An abrupt hacking
ii. Struggling sounds indicating an attempt to clear a blocked air passage
iii. Detection of choking on a specific item (such as food)
b. Garble detection
c. Vomit detection, including but not limited to the following:
i. Sounds of liquid being dumped into a container or on the ground
d. Retching sounds
e. Grunting and straining noises
f. Gastrointestinal noises associated with gastritis and other stomach issues
g. Gasp detection
h. Wheeze detection
i. Struggled breathing
j. Grunt detection
k. Drowning detection
l. Sigh detection
m. Fatigue detection
n. Detection of change in emotion
i. Sadness
ii. Love
o. Sniffle Detection
i. Detect if someone has blockage in nasal passageway
p. Cough Detection
i. Detect mucus levels from a living being's respiratory system
q. Can recommend medication based on detected health issue
r. Can showcase the level of sickness someone is currently experiencing
i. Can detect the severity of a cough and recommend a specific action
5. Scratching - Any action that produces audible or non-audible sound via friction or resistance against or alongside various surfaces. Surfaces include but are not limited to: natural materials, modified organic materials, and man-made materials.
6. Emergency response siren detection.
a. Detecting emergency response sirens, followed by activating a distributed processing method to assist the emergency response
b. Ability to detect and distinguish the type of emergency response siren from the environment
c. System can then activate a distributed process method to assist the emergency responders or to alert other devices outside of the range of the emergency response siren
i. Alert other users out of range
ii. Send information to news stations
iii. Communicate with emergency responders
iv. Allow emergency responders to utilize hardware resources of the user's device
Underwater voice launches - Ability to automatically launch when a certain noise is created (such as a chirp or a yell).
a. Utilizing similar technology to the underwater speaker system
Echolocation
a. Ability to send out a series of sound waves and receive the resulting echo wave from bouncing off of objects in the environment
b. Ability to interpret these echo waves to determine the orientation in space of the objects in the environment in relation to the sound-producing device
i. In cars
ii. On bikes
iii. For the visually impaired
c. Launching based on detected wave type - Launches a qualified trigger event when a specific wave pattern is detected (square wave, sawtooth wave, triangle wave, sinusoidal wave, etc.)
Multiple microphone noise reduction.
a. In the event of a single qualified trigger event, communications devices in the vicinity of the qualified trigger event will collect audio signals to be processed in combination with the other audio data, with the goal of increasing the accuracy of the audio signal.
b. Noise reduction can include but is not limited to: recurrent neural network noise reduction, computational statistical Gaussian noise reduction, single network layer noise removal, recurrent network layer noise removal, single-ended pre-recording, single-ended hiss reduction, single-ended surface noise reduction, codec or dual-ended systems, and additive white Gaussian noise signal extraction.
10. Heart rate detection - If the user is interacting with a heart rate detector and the user's heart rate goes above or below a specific level, the qualified trigger event will fire (a minimal sketch of this trigger follows this list).
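The heart rate trigger of item 10 lends itself to a compact illustration. The following Python sketch is offered purely as a non-authoritative example: the class name, the beats-per-minute bounds, and the callback are assumptions of this illustration, not elements recited in this specification.

```python
# Illustrative sketch of item 10: fire a qualified trigger event once
# when heart rate readings leave a user-configured range.

class HeartRateTrigger:
    def __init__(self, low_bpm, high_bpm, on_trigger):
        self.low_bpm = low_bpm        # lower bound of the allowed range
        self.high_bpm = high_bpm      # upper bound of the allowed range
        self.on_trigger = on_trigger  # callback standing in for the trigger event
        self.fired = False

    def on_reading(self, bpm):
        # Latch so the qualified trigger event fires only once per episode.
        if not self.fired and not (self.low_bpm <= bpm <= self.high_bpm):
            self.fired = True
            self.on_trigger(bpm)

trigger = HeartRateTrigger(40, 140,
                           lambda bpm: print(f"qualified trigger event at {bpm} bpm"))
for reading in (72, 75, 148):  # simulated heart rate samples
    trigger.on_reading(reading)
```

The same latch-once structure generalizes to the other threshold-style triggers in this list.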
[0024] The most important characteristic of sounds that have a definite pitch is that they have a repeating waveform. With audio oscillators we can generate what are considered to be the 4 most basic waveforms, the sine, triangle, square, and sawtooth (or "ramp") waves. These waveforms, in that order, represent a steadily increasing complexity of shape and of timbre as the number and strength of the harmonics for each wave form increases.
[0025] 1: Sine Wave - sounds like the lowest of the samples because it is only playing the fundamental frequency or "pure tone" of 110 Hz.
[0026] 2: Triangle Wave - sounds somewhat higher, richer, and a bit louder because the fundamental frequency is joined by the odd harmonics, which are those frequencies 3x, 5x, 7x, etc. above the fundamental, in this case 330 Hz, 550 Hz, 770 Hz, etc.
[0027] 3: Square Wave - sounds higher, richer, and a bit louder still. It is similar to the triangle wave in that only odd harmonics are present, however the harmonics are louder relative to the fundamental frequency and so have a greater impact on the timbre of the wave.
[0028] 4: Sawtooth Wave - also called a "ramp" wave for obvious reasons is the most complex of the basic wave shapes. You can view it as the 'front end' of a triangle wave and the 'back end' of a square wave. The more complicated shape generates more overtones, in this case every harmonic is present at gradually decreasing levels.
[0029] 5: White Noise - can be described as including all frequencies at equal levels. Imagine 20 Hz, 21 Hz, 22, 23...210, 211, 212...2100, 2101, 2102, etc., all the way up the spectrum at equal volume. It sounds somewhat thin and 'bright' because of all those high harmonics present.
[0030] 6: Pink Noise - can be described as including all frequencies but at steadily decreasing levels. The rate of decrease is generally set at -3dB per octave, with the idea being that since there are twice as many individual frequencies present as we move up the spectrum (think 1,000 steps between 1kHz & 2kHz, but 2,000 steps between 2kHz & 4kHz, etc.), we then need to decrease the volume of each successive frequency. Of course, a difference of only -3dB divides the overall signal strength by 1.4, as opposed to a decrease of -6dB, which would divide the signal strength by 2. This is probably why this sounds louder than the white noise. It also sounds 'darker' because the higher frequencies are not overpowering the lower ones. Pink Noise is used to test and balance live sound systems.
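The harmonic recipes of paragraphs [0025] through [0028] can be reproduced by additive synthesis. The Python sketch below is a minimal illustration, assuming NumPy is available; it builds each basic waveform from the 110 Hz fundamental described above, with odd harmonics at 1/n² for the triangle, odd harmonics at 1/n for the square (hence its louder harmonics relative to the fundamental), and every harmonic at 1/n for the sawtooth.

```python
import numpy as np

SR = 44100               # sample rate, Hz
F0 = 110.0               # the fundamental "pure tone" from [0025]
t = np.arange(SR) / SR   # one second of sample times

def additive(partials):
    """Sum sinusoidal partials (harmonic number, amplitude) below Nyquist."""
    wave = np.zeros_like(t)
    for n, amp in partials:
        if n * F0 < SR / 2:
            wave += amp * np.sin(2 * np.pi * n * F0 * t)
    return wave

sine = additive([(1, 1.0)])
# Triangle: odd harmonics (3x, 5x, 7x, ...) at 1/n^2 with alternating sign.
triangle = additive([(n, (-1) ** (n // 2) / n**2) for n in range(1, 200, 2)])
# Square: odd harmonics at 1/n, so each harmonic is louder than the triangle's.
square = additive([(n, 1.0 / n) for n in range(1, 200, 2)])
# Sawtooth ("ramp"): every harmonic present at gradually decreasing 1/n levels.
sawtooth = additive([(n, 1.0 / n) for n in range(1, 200)])
```

Playing these four arrays in order reproduces the steadily increasing richness of timbre described in [0024].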
[0031] Other Audio input can include:
11. Audible Launch Command (ALC) - Voice commands that are used for requesting assistance (a matching sketch follows this list).
a. Distress commands
i. "Help! "
ii. "Be careful! "
iii. "Look out! " or "Watch out!
iv. "Please help me"
b. Medical emergencies
i. "Call an ambulance! "
ii. "I need a doctor"
iii. "There's been an accident"
iv. "Please hurry! "
v. "I've cut myself
vi. "I've burnt myself
vii. "Are you OK?"
viii. "Is everyone OK?"
c. Crime
i. "Stop, thief! "
ii. "Call the police! "
iii. "My wallet's been stolen"
iv. "My purse has been stolen"
v. "My handbag's been stolen'
vi. "My laptop's been stolen"
vii. "My phone's been stolen"
viii. "I'd like to report a theft" ix. "My car's been broken into"
X. "I've been mugged"
xi. "I've been attacked"
d. Fire
I. "Fire! "
II. "Call the fire brigade! "
iii. "Can you smell burning?"
iv. "There's a fire"
v. "The building's on fire"
vi. "I'm stuck under a something"
e. Other difficult situations
I. "I'm lost"
II. "We're lost"
iii. "I can't find my ... "
1. "keys"
2. "passport"
3. "mobile"
iv. "I've lost my ... "
1. "wallet"
2. "purse"
3. "camera"
v. "I've locked myself out of my ... "
1. "car"
2. "room"
vi. "Please leave me alone"
vii. "Go away ! "
[0032] Other input can be used for transitioning the communications device to a monitoring mode, setting of a timer in the communications device, recording user actions, and ascertaining whether criteria are met. Such other input can include:
1. Rotation - Ability to utilize the gyroscope to detect the pattern of rotation of a device. If a recognized pattern meets a criterion, a specific action is performed (e.g., transition into emergency mode).
2. Scent detection - Ability to recognize odors of specific chemicals and launch an action depending on the specific chemical signature.
a. Ex: Carbon monoxide
b. Ability to analyze odors, biomarkers and thousands of molecules in someone's breath
3. Barometric pressure change detection - Through the pressure sensor on a smart device, the ambient pressure is taken, and if there is a rapid pressure change that is determined to be something other than noise in the sensor data, a qualified trigger event is fired (e.g., cabin pressure loss in flight); a sketch of this trigger appears at the end of this section.
4. Ambient light detection pattern - If there is a change in ambient light that can be determined to be within the parameters of a predetermined pattern, the qualified trigger event will fire (e.g., the user covers and uncovers the ambient light detector in a pattern).
5. Distance detection - If the user changes the distance between the smart device and a detectable medium in a recognizable pattern (using the smart device's distance detector), the qualified trigger event will launch.
6. Blood glucose detection - In the event the smart device is capable of receiving blood glucose data from a user interface device, or external interface device paired with the smart device, the system will detect if the readings of the blood glucose are within appropriate range. In the event that the readings go outside of the normal range, a qualified trigger event will be fired.
7. RFID detection - If the smart device encounters a specific RFID Tag Reading, a qualified trigger event will be launched.
8. Change in connection state of a smart device to another device
a. A Qualified Trigger Event could be launched by (but not only by) the following:
Unplugging a phone from a charger, disconnecting a Bluetooth device, disconnecting from mobile network, and disconnecting from Wi-Fi.
9. A continuously-monitoring device that is waiting for a command or activation sequence to launch a set of actions or activities. Once a single command or set of commands is interpreted by the listening device, a subsequent action is completed. "Subsequent action" includes but is not limited to:
i. Launching apps
ii. Launching modes
1. Emergency mode
2. Media recording mode
3. Visual recording
a. Motion sensor
b. Thermal detection (i.e. infrared)
c. Taking a photo
d. Sequence of photos
e. Hit & runs
i. Cycling
ii. Vehicles
f. In-car cameras
i. Intentional actions
ii. Drowsiness
iii. Discomfort
iv. Medical
v. Unexpected motion in vehicle when locked & off
vi. Biometric scanning
g. Out-of-car cameras
i. Detect hit & run
4. Audio recording
iii. Language interpretation mode
1. Detect a language different from the normal environment, and begin translating it to another language.
iv. Disability assistance mode
v. Visually-impaired mode
1. Items on screen increase in size
2. Items are described or text is read aloud to assist the visually-impaired individual
vi. Hearing-impaired mode - write out sets of instructions, prompts for additional details, and requests for what action the individual would like to initiate next.
vii. Deaf & blind mode
1. Could launch based on a single or multiple noises created
2. Can launch based on a whistle
a. Can launch based on taps of the hand/foot or finger
Touch screen that could interpret taps on a listening screen and provide haptic feedback.
b. The provided haptic feedback would be used to inform the user of a successful activation.
i. Example: A small vibration as an immediate response to a tap on a touch screen.
ii. Example: A vibrating response to a user leaving their foot on a pedal or button for a set period of time.
viii. Medical assistance mode
ix. Biometrics monitoring mode
1. Vocal tonality
x. Visual gesture interpretation (voluntary or involuntary patterns)
1. Body language
2. Facial recognition
3. Hand movements
a. Whole hand
b. Fingers
4. Feet movements
a. Whole foot
b. Ankle
c. Toes
5. Limb movements
6. Head/neck movements
7. Eye-ball movements
8. Eye-lid movements
9. Eye-brow movements
10. Lip movements
11. Jaw movements
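As a closing illustration for this list, the barometric pressure trigger of item 3 can be sketched as a rolling baseline plus a change threshold. The window size, noise floor, and threshold below are assumptions of this sketch; a production implementation would calibrate them against the device's actual pressure sensor.

```python
from collections import deque

class PressureChangeTrigger:
    """Fire a qualified trigger event on a rapid ambient-pressure change
    larger than sensor noise alone could explain (illustrative values)."""

    def __init__(self, on_trigger, window=10, noise_hpa=0.3, change_hpa=5.0):
        self.readings = deque(maxlen=window)  # recent samples, in hPa
        self.on_trigger = on_trigger
        self.noise_hpa = noise_hpa            # assumed sensor noise floor
        self.change_hpa = change_hpa          # assumed significant change

    def on_sample(self, pressure_hpa):
        if self.readings:
            baseline = sum(self.readings) / len(self.readings)
            delta = abs(pressure_hpa - baseline)
            # Fire only when the change dwarfs ordinary sensor noise.
            if delta > max(self.change_hpa, 10 * self.noise_hpa):
                self.on_trigger(delta)
        self.readings.append(pressure_hpa)

trigger = PressureChangeTrigger(lambda d: print(f"trigger: {d:.1f} hPa change"))
for p in (1013.2, 1013.1, 1013.3, 1002.0):  # e.g., cabin pressure loss in flight
    trigger.on_sample(p)
```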
[0033] Touch input can also be used for transitioning the communications device to a touch-monitoring mode, setting a timer in the communications device, recording user actions, and ascertaining whether the criteria are met. These can include:
1. Drawing shapes on screen - The ability to recognize a transition from one area on a screen (touch interpreter) to another area on a screen. A series of transitions can be combined to create a transition pattern. The pattern can further be recognized as a specific shape, such as a circle, which can be compared to a criterion. The specific shape can vary in the number of transition points. The distance between one transition point and another can vary and be different between any two sets of transition points. The time taken between transition points can be used to determine a pattern. The time taken for the total series of transition points used to create a pattern can be used to meet a criterion.
2. Tapping a pattern on the screen - A series of single- and/or multi-touch inputs are collected by the system. The system can recognize a pattern of these inputs based on the number of inputs, the time period between each input, and the total time elapsed for the series of inputs (a minimal matching sketch follows this section).
3. Emergency fingerprint direct association - Prior to an emergency, the user qualifies one of their fingerprints as an emergency finger. If that finger is placed upon the fingerprint detector of the smart device, the qualified trigger event is fired.
4. Emergency fingerprint direct unknown association - If any finger other than the user's approved finger is placed upon the fingerprint detector of the smart device, the qualified trigger event is fired.
5. Skin pattern detection with direct association through fingerprint reader - Prior to an urgent event, the user qualifies a section of skin, such as one of their fingertips, as a section of skin that will trigger an urgent event. If that section of skin is placed upon the fingerprint reader of the smart device, a qualified trigger event is fired.
6. Skin pattern detection with unknown association through fingerprint reader - Prior to an urgent event, the user qualifies a section of skin, such as one of their fingertips, as a section of skin that will trigger an urgent event. If a section of skin is placed upon the fingerprint reader that is not determined to be in the list of indexed skin sections, a qualified trigger event is fired.
7. Touch data - a method of launching based on touch.
a. One finger touch
i. Via multiple taps - A series of single-touch inputs are collected by the system. The system can recognize a pattern of these inputs based on number of inputs, the time period between each input, and the total time elapsed for the series of inputs.
ii. Press and hold - The ability to determine the elapsed time of the state of an uninterrupted single-touch input occurring directly after a state of no touch input. The determined time is compared against a criterion, and if the criterion is met, a specific method is launched.
iii. Other shape patterns
1. The ability to recognize a transition from one area on a screen (touch interpreter) to another area on a screen. A series of transitions can be combined to create a transition pattern. The pattern can further be recognized as a specific shape, such as a circle, which can be compared to a criterion. The specific shape can vary in the number of transition points. The distance between one transition point and another can vary and be different between any two sets of transition points. The time taken between transition points can be used to determine a pattern. The time taken for the total series of transition points used to create a pattern can be used to meet a criterion.
2. Circular patterns (specific embodiment of shape pattern claim).
b. Multi-finger touch
i. Same as above, but with the detection of multiple, simultaneous touch inputs.
ii. Ex: Using your index finger plus thumb to start a timer and initiate a certain set of activities.
c. Touchscreen feeling detection
i. A touch screen and vibration-capable device waits until a finger(s), object(s), or anything else touches its surface and detects the pattern of the touch. If it matches a predefined pattern, then the device launches a predefined action while the touching is occurring. A vibration pattern is then executed. Otherwise, the device would produce a sound or vibration pattern to alert the user of the invalid touch pattern.
ii. A touch screen device is placed into a waiting mode state and waits until a finger(s), object(s), or anything else touches its surface. The device may detect speed, pressure, size, movement, sound, position, force, atmosphere, temperature, electricity, magnetic wave, radio frequency, heart rate, vibration, or shakes. The device analyzes all the information captured while the surface is being touched, and if a pattern is found that matches predefined criteria, then a predefined action is executed.
iii. A touch screen device keeps track of all the touching patterns (speed, pressure, size, movement, sound, position, force, atmosphere, temperature, electricity, magnetic waves, radio frequency, heart rate, vibration, shakes, etc.), analyzes them, and chooses the most used pattern. With all the information collected, it checks to see if there is something completely erratic in a new touch sequence pattern. If so, an alarm is executed on the device, which is placed into an emergency mode. The emergency mode remains active until a new touch sequence pattern is detected, analyzed, and verified. If a correct pattern of touching is not found, and if the device is in the emergency mode, a predefined emergency action is executed (emergency call, emergency text message, the capturing of audio, images, video or any other media type, etc.).
d. Touchscreen brailed system
i. A touchscreen device, capable of executing vibration patterns and/or making sounds, detects a touching pattern (speed, pressure, size, movement, sound, position, force, atmosphere, temperature, electricity, magnetic waves, radio frequency, heart rate, vibration, shakes, etc.), and translates the pattern into audible speech and/or sounds.
ii. A touchscreen device detects a requested command via a touch pattern (speed, pressure, size, movement, sound, position, force, atmosphere, temperature, electricity, magnetic waves, radio frequency, heart rate, vibration, shakes, etc.), and executes a unique, predefined vibration pattern or sound.
iii. A touchscreen device detects text, audio, video, objects, sounds, movements, and translates them into a pre-defined vibration pattern or sound.
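As referenced at item 2 of this section, tap-pattern recognition reduces to comparing three quantities against a criterion: the number of inputs, the gap between consecutive inputs, and the total elapsed time. A minimal Python sketch with illustrative tolerances follows; the specific counts and time limits are assumptions, not values from this specification.

```python
def matches_tap_pattern(timestamps, expected_taps=3, max_gap=0.6, max_total=2.0):
    """Compare a series of touch timestamps (in seconds) against a criterion
    built from tap count, per-gap timing, and total elapsed time."""
    if len(timestamps) != expected_taps:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if any(gap > max_gap for gap in gaps):
        return False
    return (timestamps[-1] - timestamps[0]) <= max_total

print(matches_tap_pattern([0.00, 0.35, 0.71]))  # True: pattern met, trigger fires
print(matches_tap_pattern([0.00, 0.35, 1.50]))  # False: second gap too long
```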
[0034] Additional Hardware can include:
1. Plugin for car to add extra accessibility options
2. External Launching Hardware
A system comprising a device listening for a specific input, with the intent of launching an action upon receiving input that meets a predefined set of criteria, and which has been equipped with hardware that functions as a launching mechanism, with the express purpose of delivering said input.
• The hardware can be externally attached to the device OR internally embedded into the device as an actual system component.
• Examples include (but are not limited to):
o (Hardware is an External Component): A smartphone accessory that attaches to a smartphone (using the charge port, headphone jack, etc.) and comprises a button that, when pressed, will launch a specific action on the phone.
o (Hardware is a System Component): A smartphone that has a pre-installed "panic button" that, when pressed, causes the phone to automatically call and/or text emergency services.
[0035] Interacting with a service or action associated with an application on a communications device to launch, monitor, perform, or create a service or subsequent action. Includes (but is not limited to) the following:
• App features
• Phone features
• Entertainment features
• Visual recording Features
• Data streaming features
• Audio recording features
• Motion recording features
• Gyroscopic recording
• Light monitoring features
• Proximity monitoring features
• Chemical monitoring features
• Scent
• Pressure monitoring features
• Force monitoring features
• Moisture monitoring features
• Electricity-based monitoring features
Surge detection
• Electromagnetic monitoring features
• Magnetic monitoring features
Compass
Magnetometer
• Brain waves
• Wind monitoring features
[0036] EXAMPLES
[0037] The examples shown here are not intended to limit the various embodiments. Rather, they are presented solely for illustrative purposes.
[0038] App launching mode (Fig. 1)
[0039] This mode allows the communications device to initiate actions within separate applications running on the primary communications device or a separate communications device. These actions include, but are not limited to: launching the application, bringing the application to the foreground (in the scenario of a smartphone or computer), performing calculations in the application, and initiating a specific method, activity, or intent within the application. This mode can initiate actions on one or multiple applications simultaneously or sequentially.
[0040] This mode starts at 102, where the application is transitioned into the application launching mode. From there the application transitions to 104 and determines which other applications are capable of being launched by application launch mode. These applications will have some identifier that the application launch mode will look for. After this, the application will transition to 106, where it will determine if a default launch application exists. The default application is capable of being launched without further user interaction. If a default launch application does exist, the application launch mode then checks if the default application is already active, launched, and/or within the system, as shown in 108. If so, the system transitions to the already active default application, as shown in 110. If the default application is not already active, the application launch mode activates the default application and the system transitions to 112: the default application. If there are no default launch applications, the system then moves to 114, where it begins recording user interaction with the device. If a user interaction is detected that corresponds to selecting an available launch application (116), the selected launch application is activated or launched, and the system transitions to the launched application 118.
[0041] App launching mode (with timeout) (FIG. 2)
[0042] This mode is a separate embodiment of Figure 1, where a timeout is added to launch a specific application if one is not selected within the timeout.
[0043] This mode starts at 202, where the application is transitioned into application launch mode. From there the application moves to 204 and determines which other applications are capable of being launched by application launch mode. These applications will have some identifier that the application launch mode will look for. After this, the application will move to 206 and determine if a default launch application exists. The default application is capable of being launched without further user interaction. If a default launch application does exist, the application launch mode then checks in 208 if the default application is already active, launched, and/or within the system. If so, the system enters 210 and transitions to the already active default application. If the default application is not already active, the application launch mode activates the default application and the system transitions to the default application in 212. If there is no default application, the system then checks in 214 if the timeout for application selection has been reached. If the timeout has been reached, the system will move to 216 and activate the first available launch application. If the timeout has not yet been reached and there is no default launch application, the system then moves to 218 and begins recording user interaction with the device. If a user interaction is detected that corresponds to selecting an available launch application (as in 220), the selected launch application is activated or launched, and the system transitions to the launched application at 222. If there is no user interaction that corresponds to selecting an available application and the timeout has been reached, the system will activate the first available launch application at 216.
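A compact sketch of the FIG. 2 flow under stated assumptions: the application registry, its identifiers, and the interaction reader below are hypothetical stand-ins for platform facilities, and passing timeout=None reduces the sketch to the FIG. 1 flow.

```python
import time

def app_launching_mode(apps, default=None, timeout=10.0, read_interaction=input):
    """Sketch of FIG. 2. `apps` maps identifiers to launch callables
    (a hypothetical registry of applications marked as launchable)."""
    if default in apps:                       # steps 206-212
        return apps[default]()
    deadline = (time.monotonic() + timeout) if timeout else None
    # With a blocking reader, the timeout is checked between interactions.
    while deadline is None or time.monotonic() < deadline:  # step 214
        choice = read_interaction("select app: ")           # steps 218-220
        if choice in apps:
            return apps[choice]()                           # step 222
    return next(iter(apps.values()))()        # step 216: first available app

# Scripted illustration: the second interaction selects a launchable app.
choices = iter(["maps", "camera"])
apps = {"camera": lambda: "camera launched", "recorder": lambda: "recorder launched"}
print(app_launching_mode(apps, timeout=5.0,
                         read_interaction=lambda prompt="": next(choices)))
```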
[0044] Transition to app launching mode (FIG. 3)
[0045] FIG. 3 shows a flow chart of steps in an exemplary method for transitioning a user communications device to an application launching mode in accordance with various embodiments.
[0046] FIG. 3 shows a flow chart of steps in an exemplary method 300 for transitioning a user's communications device to an application launching mode in accordance with the various embodiments. The method can begin at step 302 and continue on to step 304. At step 304, the communications device can be transitioned to a listening mode. That is, as described above, the application can be configured to initially operate in the background. Contemporaneously with step 304, a timer can be reset at step 306. The timer can be configured to initiate a countdown of a specified length or to count up a certain amount of time. In the various embodiments, any length of time can be used. For example, the length of time can be as long as 10 minutes in some embodiments. In another example, to avoid accidental activation, shorter time periods can be used, such as 60, 45, 30, or 10 seconds. In the various embodiments, the timer can be implemented as a portion of the application or can be implemented as a separate hardware or software module operating on the communications device in conjunction with the application. In addition to steps 304 and 306, the method 300 can also begin recording user interactions with the device at step 308. The process to determine whether the communications device needs to be transitioned to an application launching mode can then begin at step 310. At step 310, the user interactions recorded at step 308 can be compared to one or more pre-defined patterns of user interactions. If the recorded user actions do not match the pre-defined patterns at step 310, the method 300 can continue recording the user interactions at step 308 and performing the comparison at step 310 until a match occurs. Thereafter, the method can proceed to step 312. If the pre-defined pattern occurs at step 310, it is then determined whether the pre-defined pattern occurred prior to the expiry of the timer at step 312. That is, whether or not the pre-defined pattern occurred within a specific timeframe. If the pre-defined pattern did occur within the specific time frame, the method can then proceed to step 314 to place the device in an application launching mode.
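Method 300 can be sketched as a single loop. In the illustration below, the interaction source and pattern comparison are simplified placeholders rather than the claimed implementation, and the same skeleton serves methods 400, 500, and 600 (FIGS. 4 through 6) by substituting an image, video, or audio capturing mode at the final step.

```python
import time

def listen_for_pattern(next_interaction, pattern, window_s=30.0):
    """Steps 304-312: record interactions and report whether the pre-defined
    pattern occurred before the timer expired."""
    recorded = []
    deadline = time.monotonic() + window_s       # steps 304 and 306
    while time.monotonic() < deadline:
        recorded.append(next_interaction())      # step 308
        if recorded[-len(pattern):] == pattern:  # step 310
            return True                          # step 312 satisfied
    return False

# Scripted interaction stream; three consecutive taps match the pattern.
stream = iter(["tap", "tap", "swipe", "tap", "tap", "tap"])
if listen_for_pattern(lambda: next(stream), ["tap", "tap", "tap"]):
    print("step 314: place the device in an application launching mode")
```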
[0047] Transition to image capturing mode (FIG. 4)
[0048] FIG. 4 shows a flow chart of steps in an exemplary method for transitioning a user communications device to an image capturing mode in accordance with various embodiments. FIG. 4 shows a flow chart of steps in an exemplary method 400 for transitioning a user's communications device to an image capturing mode in accordance with the various embodiments. The method can begin at step 402 and continue on to step 404. At step 404, the communications device can be transitioned to a listening mode. That is, as described above, the application can be configured to initially operate in the background. Contemporaneously with step 404, a timer can be reset at step 406. The timer can be configured to initiate a countdown of a specified length or to count up a certain amount of time. In the various embodiments, any length of time can be used. For example, the length of time can be as long as 10 minutes in some embodiments. In another example, to avoid accidental activation, shorter time periods can be used, such as 60, 45, 30, or 10 seconds. In the various embodiments, the timer can be implemented as a portion of the application or can be implemented as a separate hardware or software module operating on the communications device in conjunction with the application. In addition to steps 404 and 406, the method 400 can also begin recording user interactions with the device at step 408. The process to determine whether the communications device needs to be transitioned to an image capturing mode can then begin at step 410. At step 410, the user interactions recorded at step 408 can be compared to one or more pre-defined patterns of user interactions. If the recorded user actions do not match the pre-defined patterns at step 410, the method 400 can continue recording the user interactions at step 408 and performing the comparison at step 410 until a match occurs.
[0049] Thereafter, the method can proceed to step 412. If the pre-defined pattern occurs at step 410, it is then determined whether the pre-defined pattern occurred prior to the expiry of the timer at step 412. That is, whether or not the pre-defined pattern occurred within a specific timeframe. If the pre-defined pattern did occur within the specific time frame, the method can then proceed to step 414 to place the device in an image capturing mode.
[0050] Transition to video capturing mode (FIG. 5)
[0051] FIG. 5 shows a flow chart of steps in an exemplary method for transitioning a user communications device to a video capturing mode in accordance with various embodiments. FIG. 5 shows a flow chart of steps in an exemplary method 500 for transitioning a user's communications device to a video capturing mode in accordance with the various embodiments. The method can begin at step 502 and continue on to step 504. At step 504, the communications device can be transitioned to a listening mode. That is, as described above, the application can be configured to initially operate in the background. Contemporaneously with step 504, a timer can be reset at step 506. The timer can be configured to initiate a countdown of a specified length or to count up a certain amount of time. In the various embodiments, any length of time can be used. For example, the length of time can be as long as 10 minutes in some embodiments. In another example, to avoid accidental activation, shorter time periods can be used, such as 60, 45, 30, or 10 seconds. In the various embodiments, the timer can be implemented as a portion of the application or can be implemented as a separate hardware or software module operating on the communications device in conjunction with the application. In addition to steps 504 and 506, the method 500 can also begin recording user interactions with the device at step 508. The process to determine whether the communications device needs to be transitioned to a video capturing mode can then begin at step 510. At step 510, the user interactions recorded at step 508 can be compared to one or more pre-defined patterns of user interactions. If the recorded user actions do not match the pre-defined patterns at step 510, the method 500 can continue recording the user interactions at step 508 and performing the comparison at step 510 until a match occurs.
[0053] Thereafter, the method can proceed to step 512. If the pre-defined pattern occurs at step 510, it is then determined whether the pre-defined pattern occurred prior to the expiry of the timer at step 512. That is, whether or not the pre-defined pattern occurred within a specific timeframe. If the pre-defined pattern did occur within the specific time frame, the method can then proceed to step 514 to place the device in a video capturing mode.
[0054] Transition to audio capturing mode (FIG. 6)
[0055] FIG. 6 shows a flow chart of steps in an exemplary method for transitioning a user communications device to an audio capturing mode in accordance with various embodiments. FIG. 6 shows a flow chart of steps in an exemplary method 600 for transitioning a user's communications device to an audio capturing mode in accordance with the various embodiments. The method can begin at step 602 and continue on to step 604. At step 604, the communications device can be transitioned to a listening mode. That is, as described above, the application can be configured to initially operate in the background. Contemporaneously with step 604, a timer can be reset at step 606. The timer can be configured to initiate a countdown of a specified length or to count up a certain amount of time. In the various embodiments, any length of time can be used. For example, the length of time can be as long as 10 minutes in some embodiments. In another example, to avoid accidental activation, shorter time periods can be used, such as 60, 45, 30, or 10 seconds. In the various embodiments, the timer can be implemented as a portion of the application or can be implemented as a separate hardware or software module operating on the communications device in conjunction with the application. In addition to steps 604 and 606, the method 600 can also begin recording user interactions with the device at step 608. The process to determine whether the communications device needs to be transitioned to an audio capturing mode can then begin at step 610. At step 610, the user interactions recorded at step 608 can be compared to one or more pre-defined patterns of user interactions. If the recorded user actions do not match the pre-defined patterns at step 610, the method 600 can continue recording the user interactions at step 608 and performing the comparison at step 610 until a match occurs.
[0056] Thereafter, the method can proceed to step 612. If the pre-defined pattern occurs at step 610, it is then determined whether the pre-defined pattern occurred prior to the expiry of the timer at step 612. That is, whether or not the pre-defined pattern occurred within a specific timeframe. If the pre-defined pattern did occur within the specific time frame, the method can then proceed to step 614 to place the device in an audio capturing mode.
[0057] Hit & Run (Fig. 7) [0058] This figure embodies a specific example of the use case of launching the communications device into a video or image recording mode. In this example, the user's car (702) is hit by another car (704) and the other car drives away from the accident before exchanging insurance information. To avoid legal repercussions, the user (706) launches their smart phone (708) into video recording mode, as referenced in FIG. 5, and then proceeds to capture a video of the other driver's car driving away including the car's license plate (710). This video is saved to the user's device and can then be sent to the authorities, the user's insurance company, or any form of social media.
[0059] This figure can also embody a similar example where the user launches their smart phone into an image recording mode to capture a single image of the license plate, as opposed to the video capturing mode of the previous embodiment.
[0060] Text-To-Emergency (Fig. 8)
[0061] This figure shows a flow chart of steps in an exemplary method where the communications device is capable of sending information to emergency response services in the form of an SMS message.
[0062] The figure begins at step 802 and transitions to step 804, where the device is put into a monitoring mode. During the monitoring mode of step 804, the device then enters into a communications mode at step 806. In this specific embodiment, there is an option during the communications mode for the user of the device to select to send an SMS message to emergency personnel. This option is exemplified in step 808, when a user selects the text-to-emergency option. This transitions the device into a state capable of sending the SMS message to emergency response services. During this state, the device determines the GPS coordinates of the communications device at step 810. The user then has the option to confirm whether the information is to be sent to emergency response services at step 812. If the user does confirm this, the device will launch the default SMS messaging app on the device and load a message pre-filled with information from the text-to-emergency state (including the GPS coordinates) at step 814. This pre-filled message will be directed to emergency response services and to an additional contact at the communications device. If the user does not confirm to send the information at step 812, the device is transitioned back to monitoring mode. After the information is sent at step 814, the device is transitioned back to monitoring mode.
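The message-assembly portion of FIG. 8 (steps 810 through 814) can be sketched as below. The get_gps_fix and user_confirms callables, the emergency short code, and the contact number are hypothetical stand-ins for platform services and user data, not part of this specification.

```python
def build_text_to_emergency(get_gps_fix, extra_contact, user_confirms):
    """Steps 810-814: capture coordinates, confirm with the user, and pre-fill
    an SMS to emergency services plus one additional contact."""
    lat, lon = get_gps_fix()                              # step 810
    body = f"EMERGENCY: assistance needed at lat {lat:.5f}, lon {lon:.5f}."
    if not user_confirms(body):                           # step 812
        return None                                       # back to monitoring mode
    return {"recipients": ["911", extra_contact], "body": body}  # step 814

message = build_text_to_emergency(
    get_gps_fix=lambda: (27.95060, -82.45720),   # stubbed GPS fix
    extra_contact="+15555550123",                # illustrative contact
    user_confirms=lambda text: True,             # stubbed confirmation
)
print(message)
```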
[0063] Video recording mode
[0064] Video recording mode allows the communications device to capture a combination of a series of sequential images and audio into a centralized format. This recording can then be saved on the device, sent to another device, and/or shared on any form of social media.
[0065] Referring to FIG. 9, an illustrative smartphone 910 includes a processor 912, a display 914, a touchscreen 916, and other physical user interface (UI) elements 918 (e.g., buttons, etc.). Also included are one or more microphones 920, a variety of other sensors 922 (e.g., motion sensors such as 3D accelerometers, gyroscopes, and magnetometers), a network adapter 924, a location-determining module 926 (e.g., GPS), and an RF transceiver 928.
[0066] The depicted phone 910 also includes one or more cameras, such as two cameras 930, 932. Camera 930 is front-facing, i.e., with a lens mounted on the side of the smartphone that also includes the screen. The second camera 932 has a lens on a different side of the smartphone, commonly on the back side.
[0067] Associated with the second camera 932 can be an LED "torch" 934 that is mounted so as to illuminate the second camera's field of view. Commonly, this torch is positioned on the same side of the smartphone as the lens of the second camera, although this is not essential.
[0068] Smartphone 910 also includes a memory 936 that stores software and data. The software includes both operating system software and application software. The software may include other audio, video, and image recognition software, as discussed throughout, or any other software for implementing the various embodiments of the present invention.
[0069] FIG. 10A, and FIG. 10B illustrate exemplary possible system configurations. The more appropriate configuration will be apparent to those of ordinary skill in the art when practicing the present technology. Persons of ordinary skill in the art will also readily appreciate that other system configurations are possible.
[0070] FIG. 10A illustrates a conventional system bus computing system architecture 1000 wherein the components of the system are in electrical communication with each other using a bus 1005. Exemplary system 1000 includes a processing unit (CPU or processor) 1010 and a system bus 1005 that couples various system components including the system memory 1015, such as read only memory (ROM) 1020 and random access memory (RAM) 1025, to the processor 1010. The system 1000 can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor 1010. The system 1000 can copy data from the memory 1015 and/or the storage device 1030 to the cache 1012 for quick access by the processor 1010. In this way, the cache can provide a performance boost that avoids processor 1010 delays while waiting for data. These and other modules can control or be configured to control the processor 1010 to perform various actions. Other system memory 1015 may be available for use as well. The memory 1015 can include multiple different types of memory with different performance characteristics. The processor 1010 can include any general purpose processor and a hardware module or software module, such as module 1 1032, module 2 1034, and module 3 1036 stored in storage device 1030, configured to control the processor 1010 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor 1010 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.
[0071] To enable user interaction with the computing device 1000, an input device 1045 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device 1035 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing device 1000. The communications interface 1040 can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
[0072] Storage device 1030 is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs) 1025, read only memory (ROM) 1020, and hybrids thereof.
[0073] The storage device 1030 can include software modules 1032, 1034, 1036 for controlling the processor 1010. Other hardware or software modules are contemplated. The storage device 1030 can be connected to the system bus 1005. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor 1010, bus 1005, display 1035, and so forth, to carry out the function.
[0074] FIG. 10B illustrates a computer system 1050 having a chipset architecture that can be used in executing the described method and generating and displaying a graphical user interface (GUI). Computer system 1050 is an example of computer hardware, software, and firmware that can be used to implement the disclosed technology. System 1050 can include a processor 1055, representative of any number of physically and/or logically distinct resources capable of executing software, firmware, and hardware configured to perform identified computations. Processor 1055 can communicate with a chipset 1060 that can control input to and output from processor 1055. In this example, chipset 1060 outputs information to output 1065, such as a display, and can read and write information to storage device 1070, which can include magnetic media, and solid state media, for example. Chipset 1060 can also read data from and write data to RAM 1075. A bridge 1080 for interfacing with a variety of user interface components 1085 can be provided for interfacing with chipset 1060. Such user interface components 1085 can include a keyboard, a microphone, touch detection and processing circuitry, a pointing device, such as a mouse, and so on. In general, inputs to system 1050 can come from any of a variety of sources, machine generated and/or human generated.
[0075] Chipset 1060 can also interface with one or more communication interfaces 1090 that can have different physical interfaces. Such communication interfaces can include interfaces for wired and wireless local area networks, for broadband wireless networks, as well as personal area networks. Some applications of the methods for generating, displaying, and using the GUI disclosed herein can include receiving ordered datasets over the physical interface or be generated by the machine itself by processor 1055 analyzing data stored in storage 1070 or 1075. Further, the machine can receive inputs from a user via user interface components 1085 and execute appropriate functions, such as browsing functions by interpreting these inputs using processor 1055.
[0076] It can be appreciated that exemplary systems 1000 and 1050 can have more than one processor 1010 or be part of a group or cluster of computing devices networked together to provide greater processing capability.
[0077] For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.
[0078] In some configurations the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
[0079] Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
[0080] Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.
[0081] The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.
[0082] Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further, although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims. Claim language reciting "at least one of" a set indicates that one member of the set or multiple members of the set satisfy the claim. Tangible computer-readable storage media, computer-readable storage devices, or computer-readable memory devices expressly exclude media such as transitory waves, energy, carrier signals, electromagnetic waves, and signals per se.
[0083] While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. Numerous changes to the disclosed embodiments can be made in accordance with the disclosure herein without departing from the spirit or scope of the invention. Thus, the breadth and scope of the present invention should not be limited by any of the above described embodiments. Rather, the scope of the invention should be defined in accordance with the following claims and their equivalents.
[0084] Although the invention has been illustrated and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. In addition, while a particular feature of the invention may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application.
[0085] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, to the extent that the terms "including", "includes", "having", "has", "with", or variants thereof are used in either the detailed description and/or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
[0086] Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

Claims

What is claimed is:
1. A method, comprising:
transitioning a communications device associated with a user to an application launching mode;
determining that a default application exists on the device;
in response to determining that the default application exists on the device, accessing the default application;
in response to determining that the default application does not exist on the device, recording, at the communications device, user interactions with the communications device to select the default application based on the user interactions.
2. The method of claim 1, further comprising:
in response to determining that the default application does not exist on the device, performing the steps of:
determining that a time out has been reached; and
in response to determining that a time out has been reached, accessing a first available application on the communications device.
3. The method of claim 1, further comprising performing, prior to the transitioning to the application launching mode:
placing a communications device to a monitoring mode;
detecting that a user interaction with the communications device in the monitoring mode corresponds to a pre-defined pattern;
in response to detecting that the user interaction with the communications device in the monitoring mode corresponds to a pre-defined pattern, proceeding to the transitioning, determining, accessing, and recording.
4. The method of claim 3, further comprising: determining that a time out has been reached prior to determining that a user interaction with the communications device in the monitoring mode corresponds to a pre-defined pattern; and
in response to determining that a time out has been reached prior to determining that a user interaction with the communications device in the monitoring mode corresponds to a predefined pattern, repeating the detecting and proceeding.
5. The method of claim 1, further comprising performing, prior to the transitioning to the application launching mode:
placing a communications device to a monitoring mode;
detecting that a user interaction with the communications device in the monitoring mode corresponds to a pre-defined pattern;
in response to detecting that the user interaction with the communications device in the monitoring mode corresponds to a pre-defined pattern, proceeding to at least one of an audio capture mode, an image capture mode or a video capture mode.
6. A computer-readable medium having stored thereon a computer program executable by a communications device, the computer program comprising a plurality of instructions for performing any of the methods of claims 1-5.
7. A communications device, comprising:
at least one input device;
a processor coupled to the at least one input device;
a computer-readable medium having stored thereon a computer program executable by the processor, the computer program comprising a plurality of instructions for causing the processor to perform any of the methods of claims 1-5.
PCT/US2016/029860 2015-04-28 2016-04-28 Systems and methods for detecting and initiating activities WO2016176494A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562153909P 2015-04-28 2015-04-28
US62/153,909 2015-04-28

Publications (1)

Publication Number Publication Date
WO2016176494A1 true WO2016176494A1 (en) 2016-11-03

Family

ID=57198799

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/029860 WO2016176494A1 (en) 2015-04-28 2016-04-28 Systems and methods for detecting and initiating activities

Country Status (1)

Country Link
WO (1) WO2016176494A1 (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7444645B1 (en) * 2000-04-21 2008-10-28 Microsoft Corporation Method and system for detecting content on media and devices and launching applications to run the content
US20030046401A1 (en) * 2000-10-16 2003-03-06 Abbott Kenneth H. Dynamically determing appropriate computer user interfaces
US20070061495A1 (en) * 2005-08-05 2007-03-15 Microsoft Corporation Initiating software responses based on a hardware action
US20110230209A1 (en) * 2010-03-22 2011-09-22 Dsp Group Ltd. Method and Mobile Device for Automatic Activation of Applications
US20130102300A1 (en) * 2011-10-21 2013-04-25 Myine Electronics, Inc. System And Method For Forming Automatically Launching User Set Default App From Smartphone
US20130283274A1 (en) * 2012-03-29 2013-10-24 David MIMRAN Method and system for discovering and activating an application in a computer device


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16787181; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16787181; Country of ref document: EP; Kind code of ref document: A1)