US20170187866A1 - Automatic Volume Control Based on Context and Location - Google Patents

Automatic Volume Control Based on Context and Location

Info

Publication number
US20170187866A1
Authority
US
United States
Prior art keywords
mobile computing
computing device
user
detecting
predetermined location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/886,044
Inventor
Eric Qing Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US14/886,044
Publication of US20170187866A1
Priority to US15/809,637, published as US10237396B2


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72457: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to geographic location
    • H04M1/72572
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/66: Substation equipment, e.g. for use by subscribers, with means for preventing unauthorised or fraudulent calling
    • H04M1/667: Preventing unauthorised calls from a telephone set
    • H04M1/67: Preventing unauthorised calls from a telephone set by electronic means
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02: Services making use of location information
    • H04W4/025: Services making use of location information using location based information parameters
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W72/00: Local resource management
    • H04W72/04: Wireless resource allocation
    • H04W72/044: Wireless resource allocation based on the type of the allocated resource
    • H04W72/0446: Resources in time domain, e.g. slots or frames
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/60: Substation equipment, e.g. for use by subscribers, including speech amplifiers
    • H04M1/6016: Substation equipment, e.g. for use by subscribers, including speech amplifiers in the receiver circuit
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/60: Substation equipment, e.g. for use by subscribers, including speech amplifiers
    • H04M1/6033: Substation equipment, e.g. for use by subscribers, including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041: Portable telephones adapted for handsfree use
    • H04M1/6075: Portable telephones adapted for handsfree use in a vehicle
    • H04M1/6083: Portable telephones adapted for handsfree use in a vehicle by interfacing with the vehicle audio system
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243: User interfaces specially adapted for cordless or mobile telephones with interactive means for internal management of messages
    • H04M1/72436: User interfaces specially adapted for cordless or mobile telephones with interactive means for internal management of messages for text messaging, e.g. SMS or e-mail
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72451: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to schedules, e.g. using calendar applications
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M2250/00: Details of telephonic subscriber devices
    • H04M2250/10: Details of telephonic subscriber devices including a GPS signal receiver
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M2250/00: Details of telephonic subscriber devices
    • H04M2250/12: Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M2250/00: Details of telephonic subscriber devices
    • H04M2250/22: Details of telephonic subscriber devices including a touch pad, a touch sensor or a touch detector
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M2250/00: Details of telephonic subscriber devices
    • H04M2250/74: Details of telephonic subscriber devices with voice recognition means

Definitions

  • the present disclosure generally relates to improving the interface and usability of a mobile computing device, such as a smartphone or a tablet computer.
  • FIG. 1 is a diagrammatic view of an example mobile computing device and a user for performing a face-unlock according to various aspects of the present disclosure.
  • FIG. 2 is a flowchart illustrating an example method for using the mobile computing device to perform a face-unlock according to various aspects of the present disclosure.
  • FIGS. 3-6 are diagrammatic views of an example mobile computing device and a user for performing a voice-unlock according to various aspects of the present disclosure.
  • FIG. 7 is a flowchart illustrating an example method for using the mobile computing device to perform a voice-unlock according to various aspects of the present disclosure.
  • FIGS. 8-9 are diagrammatic views of an example mobile computing device for performing a dynamically adjustable screen time-out according to various aspects of the present disclosure.
  • FIGS. 10-11 are flowcharts illustrating example methods for using the mobile computing device to perform a dynamically adjustable screen time-out according to various aspects of the present disclosure.
  • FIGS. 12, 13A-16A, 13B-16B, 17-20, 21A-21B, 22-24, and 25A-25B are diagrammatic views of various interfaces of an example mobile computing device for an improved lock screen according to various aspects of the present disclosure.
  • FIG. 26 is a flowchart illustrating an example method for launching applications from a lock screen of the mobile computing device according to various aspects of the present disclosure.
  • FIGS. 27-28 are diagrammatic views of various environments in which a mobile computing device performs contextually-aware automatic volume adjustment according to various aspects of the present disclosure.
  • FIG. 29 is a simplified flowchart illustrating a method of using a mobile computing device to perform a contextually-aware automatic volume adjustment according to various aspects of the present disclosure.
  • FIG. 30 is a diagrammatic view of an example environment in which a mobile computing device performs locationally-aware automatic volume adjustment according to various aspects of the present disclosure.
  • FIGS. 31-32 are diagrammatic views of various interfaces of an example mobile computing device for performing locationally-aware automatic volume adjustment according to various aspects of the present disclosure.
  • FIG. 33 is a simplified flowchart illustrating a method of using a mobile computing device to perform locationally-aware automatic volume adjustment according to various aspects of the present disclosure.
  • FIG. 34 is a simplified block diagram of an example mobile computing device for performing one or more of the processes of FIGS. 1-33 according to various aspects of the present disclosure.
  • FIG. 35 is a simplified block diagram of an example system for performing one or more of the processes of FIGS. 1-33 according to various aspects of the present disclosure.
  • the term “about” refers to a +/−5% variation from the nominal value.
  • the term “and/or” includes any and all combinations of one or more of the associated listed items.
  • the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly and specifically indicates otherwise.
  • all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
  • a user of these mobile computing devices can perform a plurality of tasks on them, for example tasks that previously required a conventional desktop or laptop computer.
  • a user can play movies/videos, browse the web, play games, view photographs, listen to digital music, read e-books, receive navigational instructions, send and receive emails, conduct audio or video telephone calls, perform word processing/spreadsheet calculation/presentation management tasks, or take advantage of additional functionalities offered by applications (apps) that can be downloaded from online app stores.
  • the mobile computing devices may still have certain drawbacks that limit the versatility and the usability of these devices.
  • the present disclosure offers solutions to overcome these drawbacks, as discussed in more detail below.
  • Maintaining secured access to a user's mobile computing device has always been a challenge for device manufacturers (or the makers of the operating systems that run on these devices). For example, it may be desirable to limit access to a smartphone or a tablet computer to an authorized user only. Granting such access may be referred to as “unlocking” the smartphone or tablet computer.
  • Some methods of adding security to the unlocking process involve password protection: a user must input a correct password to gain access to the smartphone or tablet computer.
  • a “face-unlock” method has also been employed to unlock a smartphone or tablet computer. To perform the face-unlock process, a user lets the camera of the smartphone or tablet computer capture a facial image of the user. The smartphone or tablet computer then compares the captured facial image to a previously-stored facial image and determines whether the user who is attempting to unlock the device is the authorized user having the previously-stored facial image.
  • although the face-unlock process has proven useful to some degree, one drawback is its inconsistent performance under poor lighting conditions. For example, if the user attempts to perform the face-unlock process in a poorly-lit environment, the camera of the smartphone or tablet computer often cannot capture a sufficiently clear facial image of the user, and the face-unlock process may fail as a result. The user may then be asked to perform an alternative method of unlocking the phone. This is frustrating to the user, who may abandon the face-unlock process altogether after suffering through a few failed attempts.
  • the present disclosure proposes methods and devices that offer an improved face-unlock experience for the user.
  • the mobile computing device 100 may be a laptop computer, a tablet computer (for example, APPLE's® IPAD®, an ANDROID® tablet, a WINDOWS® powered tablet, or a BLACKBERRY® tablet), a mobile telephone (for example, APPLE's® IPHONE®, an ANDROID® smartphone, a WINDOWS® smartphone, or a BLACKBERRY® smartphone), or a wearable electronic device (e.g., a smart watch or glasses).
  • the mobile computing device 100 may include a touch-sensitive display (or touch screen) 110 for displaying one or more visual objects.
  • a non-touch screen display may detect user input via more traditional mechanisms such as a mouse, a keyboard, a remote control, a gesture, or voice commands.
  • the mobile computing device 100 may further include a camera 120 .
  • the camera 120 may be any type of image-capturing device suitable for implementation on a smartphone or a tablet computer, for example a camera that contains a CMOS image sensor.
  • the camera 120 is configured to capture both still shots (photographs) and motion videos of a person or an object.
  • the mobile computing device 100 also includes a lighting mechanism 130 that is capable of emitting light.
  • the lighting mechanism 130 includes a plurality of light-emitting diodes (LEDs), but it may include additional or other types of light-producing devices in alternative embodiments. When activated, the LEDs emit light. The intensity and color of the emitted light can be controlled by software implemented on the mobile computing device 100 . It is understood that both the camera 120 and the lighting mechanism 130 are implemented on a “front” side of the mobile computing device 100 (i.e., the same side as the display 110 ). However, the mobile computing device 100 may further include a different camera and/or another lighting mechanism on the back side of the mobile computing device 100 as well.
  • the mobile computing device 100 further includes an ambient light sensor 140 .
  • the ambient light sensor 140 is configured to gauge how much light is available in an area near the mobile computing device. In other words, the ambient light sensor 140 can be used to determine whether the mobile computing device 100 is located in a well-lit or a poorly-lit environment.
  • a user 150 now wishes to perform a face-unlock process.
  • the user 150 may press a home button 160 or a power button 170 of the mobile computing device 100 , which may turn on the display 110 and thereafter initiate the face-unlock process automatically.
  • the user 150 may also be first asked to perform another task before the face-unlock process is initiated. For example, the user 150 may be asked to swipe an object on the display 110 after the home button 160 or the power button 170 is pressed before the face-unlock process is initiated. By performing these actions, the user 150 essentially sends a request to the mobile computing device 100 that he/she wishes to gain access to the mobile computing device.
  • after the mobile computing device 100 receives the user's request to gain access to the device, it instructs the ambient light sensor 140 to sense or detect an ambient light condition near the mobile computing device 100 . The detected ambient light condition is then compared to a predefined lighting condition threshold.
  • the predefined lighting condition threshold may be a minimum lighting condition that allows a user's face to be clearly captured by the camera 120 . In some embodiments, this predefined lighting condition threshold may be preset by a manufacturer of the mobile computing device 100 or by the maker of the operating system running on the mobile computing device 100 .
  • the predefined lighting condition threshold is actually determined through a calibration process. For example, at the time when the user 150 initially configures (or sets up) the face-unlock action with the mobile computing device 100 , the user 150 may be asked to position his/her face in front of the mobile computing device 100 such that his/her face can be captured by the camera 120 . While the user's face is being continuously captured by the camera 120 , the mobile computing device 100 turns on the lighting mechanism 130 and gradually increases its lighting output. The increasing of the light output of the lighting mechanism 130 causes the ambient lighting situation to continuously improve as well. Thus, the ambient light sensor 140 is instructed to continuously detect the ambient lighting condition while the light output of the lighting mechanism 130 is increased.
  • once the camera 120 can capture a satisfactorily clear facial image of the user 150 , the ambient lighting condition detected by the ambient light sensor 140 at the moment of that satisfactory capture is stored in the mobile computing device 100 as the predefined lighting condition threshold.
  • the user 150 may be asked to go into a dimly-lit environment to perform the calibration process.
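
As a rough illustration of this calibration loop, the sketch below (Python) raises the light output step by step and records the ambient reading at the first satisfactory capture. The device-facing helpers (`set_led_level`, `capture_frame`, `read_ambient_lux`, `is_face_clear`) are passed in as parameters because the disclosure does not specify an actual sensor or camera API; they are assumptions, not a real device interface.

```python
def calibrate_lighting_threshold(set_led_level, capture_frame,
                                 read_ambient_lux, is_face_clear,
                                 max_level=255, step=5):
    """Gradually raise the LED output while the camera keeps capturing;
    return the ambient light reading at the first clear facial capture,
    which becomes the predefined lighting condition threshold."""
    for level in range(0, max_level + 1, step):
        set_led_level(level)              # gradually increase light output
        frame = capture_frame()           # camera continuously captures the face
        if is_face_clear(frame):          # first satisfactorily clear image
            return read_ambient_lux()     # store this reading as the threshold
    return None                           # calibration did not converge
```
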
  • the mobile computing device 100 compares the detected ambient lighting condition with the predefined lighting condition threshold. Based on the comparison, the mobile computing device 100 determines whether the ambient lighting condition is below the predefined lighting condition threshold. If not, the mobile computing device 100 may continue with the face-unlock action, in which the user 150 may be asked to position his/her face in front of the mobile computing device 100 such that his/her facial image 180 is within a specified location of the display 110 , for example within the dotted lines as shown in FIG. 1 . In other words, if the detected ambient lighting condition reveals that the mobile computing device 100 (and therefore the user 150 ) is located in a well-lit environment, the face-unlock action is unlikely to encounter any problems and may therefore continue “normally.”
  • if the mobile computing device 100 determines the ambient lighting condition is below the predefined lighting condition threshold, the mobile computing device 100 activates the lighting mechanism 130 such that it emits light. The light is sufficient to clearly illuminate the user 150 's face. The mobile computing device 100 then performs the face-unlock action for the user 150 while the light is emitted by the lighting mechanism. By doing so, the user 150 's face can be satisfactorily captured by the camera 120 (i.e., captured as a clear facial image 180 ) even if the user 150 and the mobile computing device 100 are located in a poorly-lit environment. Therefore, the face-unlock action may be performed with a substantially reduced failure rate. It is understood that the specific sequence of activating the lighting mechanism 130 and prompting the user 150 to position his/her face in front of the camera 120 is not important. One can be performed before the other, or they can be performed simultaneously.
  • the mobile computing device 100 may turn on a portion of (or all of) the display 110 .
  • the display 110 of the mobile computing device 100 may contain a plurality of LEDs each capable of emitting light, or some other light-producing element.
  • the mobile computing device 100 may instruct a portion of the display 110 (for example the portion outside of the dotted lines) to emit bright white light, which also serves to illuminate the user 150 's face.
  • the illuminated display 110 serves a similar function as light emitted by the lighting mechanism 130 , and vice versa. Hence, they can be interchangeably used, or used in combination with each other to achieve even more light in certain embodiments.
  • the user 150 's face is captured as a clear facial image 180 , which may then be compared with a stored facial image of the user 150 to determine if the user 150 is an authorized user.
  • the stored facial image of the user 150 may be generated when the user 150 initially sets up the face-unlock for the mobile computing device 100 , for example during the calibration process discussed above.
  • the mobile computing device 100 may capture a “live” video of the facial image 180 of the user 150 and compare the video to the stored facial image (or video) of the authorized user.
  • the term “facial image” herein may refer to both a still photograph and a motion video.
  • the present disclosure substantially improves the reliability of a face-unlock action for the mobile computing device 100 and enhances the likelihood that it will be actually used by prospective users as a security-protection mechanism for accessing the mobile computing device 100 .
  • FIG. 2 is a simplified flowchart illustrating a method 200 for performing the face-unlock process discussed above.
  • the mobile computing device includes a mobile telephone, a tablet computer, a laptop computer, or a wearable electronic device.
  • the method 200 includes a step 205 , in which a request to gain access to a mobile computing device is received from a user.
  • the method 200 includes a step 210 , in which an ambient lighting condition is detected via the mobile computing device.
  • the method 200 includes a step 215 , in which the predefined threshold is determined through a calibration process.
  • the method 200 includes a step 220 , in which the detected ambient lighting condition is compared with a predefined threshold.
  • the method 200 includes a step 225 , in which the detected ambient lighting condition is determined to be below the predefined threshold.
  • the method 200 includes a step 230 , in which at least one of the following tasks is performed in response to the step 225 : activating a front-facing light-emitting diode (LED) mechanism of the mobile computing device, or illuminating at least a portion of a screen of the mobile computing device.
  • the method 200 includes a step 235 , in which a face-unlock action is performed on the mobile computing device to authenticate the user. The step 235 is performed while the LED mechanism is activated or while the portion of the screen of the mobile computing device is illuminated.
  • the step 235 includes the following steps: activating a front-facing camera of the mobile computing device; displaying, on the screen of the mobile computing device, a facial image of the user captured by the front-facing camera; prompting the user to move the captured facial image to a specified location on the screen; and comparing the captured facial image to a stored facial image of the user; granting access to the user if the captured facial image matches the stored facial image of the user; and denying access to the user if the captured facial image fails to match the stored facial image of the user.
  • the predefined threshold in step 215 is a minimum lighting condition that allows the user's face to be clearly captured by a camera of the mobile computing device.
  • the step of receiving the request from the user in step 205 includes detecting the mobile computing device being powered on.
  • the step of detecting the ambient lighting condition in step 210 is performed using an ambient light sensor implemented on the mobile computing device.
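
Putting steps 205-235 together, a minimal sketch of the overall flow might look like the following. The helper callables are hypothetical stand-ins for the device's sensor, lighting, camera, and face-matching facilities, which the disclosure does not name:

```python
def face_unlock(read_ambient_lux, threshold_lux, illuminate,
                capture_face, matches_stored_face):
    """Steps 210-235: detect ambient light, illuminate if too dark,
    then capture the facial image and compare it to the stored one."""
    lux = read_ambient_lux()                 # step 210: detect ambient light
    too_dark = lux < threshold_lux           # steps 220-225: compare to threshold
    if too_dark:
        illuminate(True)                     # step 230: LED or bright screen
    try:
        face = capture_face()                # step 235: capture facial image
        return matches_stored_face(face)     # grant (True) or deny (False) access
    finally:
        if too_dark:
            illuminate(False)                # turn the extra light back off
```
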
  • a voice-unlock process generally involves recording a voice-based password from a user, who will then be prompted to speak that recorded voice-based password when such user wants to access the mobile computing device.
  • a voice-based password is a static password. That is, once the mobile computing device of the user (or a remote server) stores the user-spoken voice-based password, it remains the password unless the user decides to change it.
  • the static nature of the voice-based password decreases reliability and security of the voice-unlock process.
  • a hacker may somehow obtain that static voice-based password (e.g., through a secret recording) and may then use that password to gain illegal access to the user's mobile computing device.
  • when the password always remains the same, it is more prone to theft or cracking, and therefore does not offer optimal security protection for a user.
  • the present disclosure offers a dynamic voice-unlock process.
  • the voice-based password dynamically changes from time to time, which may occur without the user actively demanding or initiating the change.
  • the dynamic voice-unlock process is discussed in more detail below with reference to FIGS. 3-7 . Similar elements appearing in FIGS. 1, 3-6 are labeled the same for reasons of clarity and consistency.
  • the user 150 of a mobile computing device 100 may engage in voice-based interactions with the mobile computing device 100 periodically.
  • these voice-based interactions include voice-based commands issued by the user 150 to initiate tasks to be performed by the mobile computing device 100 .
  • the user 150 may issue a voice command such as “call dad's work phone”, “set a reminder for me to do laundry at 5:30 tonight”, “send a text message to Chris”, “turn up the volume”, “navigate me to 100 Drury Lane”, etc.
  • the user may also issue a query to the mobile computing device 100 , for example “what is the score of the Yankees-Red Sox game last night”, “what is the weather for this weekend”, “how many feet are in a mile”, etc.
  • the voice-based interactions include voice-dictations made by the user 150 .
  • the mobile computing device 100 may allow the user 150 to dictate the content of an email, a text, or a voice search. With a click of a button, the user 150 may dictate, as an example text message, “I am unable to make it to dinner tonight, sorry guys, maybe next time.”
  • the voice-based interaction may include a telephone call made by the user 150 .
  • the mobile computing device 100 may be activated periodically to record a spoken phrase 250 from the user 150 while the mobile computing device 100 is engaged in the voice-based interactions with the user 150 .
  • for example, when the user 150 is issuing one of the voice commands above, the mobile computing device 100 is activated to record this phrase.
  • when the user 150 is issuing the query “what is the score of the Yankees-Red Sox game last night,” the mobile computing device 100 is activated to record this phrase as well.
  • when the user 150 is dictating the text message “I can't make it to dinner tonight, sorry guys, maybe next time”, the mobile computing device 100 is activated to record this phrase as well.
  • the mobile computing device 100 records the spoken voice phrase 250 from the voice-based interaction only if the phrase 250 is intelligible or comprehensible. That is, the mobile computing device 100 needs to have a high confidence level that the voice phrase it recorded is indeed what it “thinks” it is. For example, this may be verified by the mobile computing device 100 repeating (either by voice or text) the phrase and then prompting the user 150 to confirm that the recorded phrase is indeed what the user 150 meant to say.
  • the mobile computing device 100 may initially record voice phrases 250 that may or may not be 100% accurate. The subsequent actions from the user may help the mobile computing device 100 determine the accuracy of the recorded phrase though.
  • if the user 150 subsequently sends the dictated message without correcting it, the mobile computing device 100 may deem that sentence correctly entered. In other words, the mobile computing device 100 may deem that the voice phrase 250 spoken by the user 150 has been accurately captured.
  • the recording of the voice phrase 250 by the user 150 is performed without alerting the user 150 .
  • the user 150 need not be informed that a voice phrase 250 he/she just spoke was captured or recorded by the mobile computing device 100 .
  • the mobile computing device 100 may alert the user 150 that the mobile computing device 100 is indeed recording the voice phrase 250 spoken by the user as a part of the voice-based interaction.
  • the mobile computing device 100 may display an alert such as “your voice command is now being recorded” or something similar on the display 110 .
  • the mobile computing device 100 will use the recorded phrase 250 (or a portion thereof) to generate a dynamically-changing voice password.
  • suppose the voice phrase 250 is “set a reminder for me to do laundry at 5:30 tonight.”
  • the mobile computing device 100 may then select one or more words of this phrase as the password. The selection may be done so that a random segment of the phrase 250 is selected to be the password.
  • the words “set a reminder” may be selected as a voice password.
  • the words “do laundry” may be selected as a voice password.
  • the selected words may not necessarily be words that are adjacent to each other.
  • the words “do laundry tonight” may be selected as a voice password, even though the words “at 5:30” fall between “do laundry” and “tonight.”
  • multiple segments of a single phrase may each be saved as a password. Therefore, in the example above, the words “set a reminder”, “do laundry”, and “do laundry tonight” may each be saved as a password.
  • the selected password is saved either locally in a database in a local memory storage of the mobile computing device 100 , or in a remote electronic database, or both.
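
A minimal sketch of this selection, assuming the recorded phrase has already been transcribed into words. For simplicity the sketch picks only contiguous segments, although the disclosure also allows non-adjacent words:

```python
import random

def pick_voice_password(phrase, min_words=2, max_words=3):
    """Randomly select a contiguous run of words from a recorded phrase
    to serve as the next dynamic voice password."""
    words = phrase.split()
    n = random.randint(min_words, min(max_words, len(words)))
    start = random.randrange(len(words) - n + 1)
    return " ".join(words[start:start + n])

# For example:
# pick_voice_password("set a reminder for me to do laundry at 5:30 tonight")
# may return "set a reminder" or "do laundry", among other segments.
```
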
  • the selected segment may now be used as a password for performing a voice-unlocking process.
  • the mobile computing device 100 displays a message on the display 110 to prompt the user 150 to speak a password.
  • the mobile computing device 100 may prompt the user 150 to speak any one of the following passwords “set a reminder”, “do laundry”, or “do laundry tonight.”
  • the mobile computing device 100 may only display one of the passwords (instead of displaying multiple passwords) and prompt the user 150 to speak only that one password.
  • the user 150 may speak a password 255 . If the user 150 is the authorized user, he/she can speak the correct password with the correct voice associated with that password. In other words, the user 150 had previously spoken those words in the password displayed by the mobile computing device. Therefore, if the user 150 speaks those words again, the voice behind the words will match. On the other hand, if the user trying to gain access to the mobile computing device 100 is not the authorized user, he/she will not be able to reproduce the correct voice associated with the password. In other words, even though the user can read and understand the prompt, and therefore can speak the “correct” password, the voice associated with the spoken password 255 is from a different person.
  • the mobile computing device 100 records the password 255 spoken by the user attempting to gain access.
  • the mobile computing device 100 compares the recorded spoken password 255 with the previously recorded password.
  • This comparison process involves a voice-matching process to ensure that not only does the user have to speak the correct password, but that he/she also must speak the correct password with the correct voice associated with such password. If the comparison process indicates that the voice from the password 255 matches the recorded password, then access will be granted to the user. However, if the comparison process indicates that the voice from the password 255 fails to match the recorded password, then access will be denied to the user, or the user will be required to go through an alternative authentication process (e.g., entering a password, etc.).
  • a match may be declared when the voice-matching confidence level exceeds a threshold X, where X may be in a range from about 90% to 99.99%.
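
The comparison can be sketched as a simple threshold test. Here `voice_similarity` is a hypothetical speaker-verification scorer (the disclosure does not specify one), and the threshold corresponds to the confidence level X above:

```python
def verify_voice_password(spoken_attempt, stored_recording,
                          voice_similarity, threshold=0.95):
    """Grant access only if the spoken attempt matches the stored
    recording with a speaker-verification score of at least X
    (about 0.90 to 0.9999 per the disclosure)."""
    score = voice_similarity(spoken_attempt, stored_recording)
    return score >= threshold
```
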
  • the process discussed above with reference to FIGS. 3-4 may repeat itself after the user 150 gains access to the mobile computing device.
  • the user 150 may dictate (via a spoken phrase 260 ) a text message “I can't make it to dinner tonight, sorry guys, maybe next time” to a friend.
  • the mobile computing device 100 may then record this spoken phrase 260 and select random segments of such phrase as a new voice password. For example, the words “I can't make it”, “dinner tonight”, or “next time” may each be selected as the new voice password.
  • the recording of the phrase 260 may be done with or without alerting the user 150 .
  • the user 150 is attempting to gain access to the mobile computing device 100 again.
  • the mobile computing device 100 again prompts the user 150 to speak one of the following new voice passwords “I can't make it”, “dinner tonight”, or “next time.”
  • one or more of the previous voice passwords may also be displayed (and they still function as effective passwords).
  • in other embodiments, once the new voice passwords are generated, the previous voice passwords are erased and are no longer effective. In those cases, only the newly-generated voice password will be displayed.
  • if the user 150 is the authorized user whose voice was recorded for the correct password, then he/she can speak the password and have the mobile computing device 100 verify that he/she is indeed the authorized user, because the voice will match the stored password. Otherwise, the user will be denied access if he/she is an unauthorized user, as the voice will fail to match that of the stored password.
  • the voice password discussed herein is a dynamic password, because it can change constantly or periodically.
  • the changing of the voice password may occur without the user requesting to change it.
  • a new voice password may be dynamically generated based on the voice-based interactions between the mobile computing device 100 and the user, for example through voice commands, dictations, or telephone calls. Since the voice password is generated periodically and randomly, the user himself/herself may not even know what the current voice password is until he/she is being prompted to enter it during a voice unlock process. Nevertheless, the authorized user can still gain access to the mobile computing device 100 without encountering problems, since the authorized user can readily reproduce the correct voice associated with the updated password.
  • the mobile computing device 100 may also decide how often to update or change the voice password. For example, in a settings menu, the user may choose from options to change the voice password every day, every week, every month, or every specified number of days, etc. Based on the user selection, the mobile computing device 100 can implement the updating of the voice password accordingly.
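
A minimal sketch of how such a settings choice could drive the rotation schedule; the option names and intervals are illustrative assumptions, not taken from the disclosure:

```python
from datetime import datetime, timedelta

ROTATION_OPTIONS = {              # hypothetical settings-menu choices
    "every day": timedelta(days=1),
    "every week": timedelta(weeks=1),
    "every month": timedelta(days=30),
}

def password_due_for_update(last_rotated: datetime, choice: str) -> bool:
    """Return True once the user-selected rotation interval has elapsed."""
    return datetime.now() - last_rotated >= ROTATION_OPTIONS[choice]
```
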
  • the mobile computing device 100 may implement another unlock mechanism as a back-up unlock method for the voice-unlock mechanism discussed above.
  • the face-unlock may be used as a back-up mechanism for the voice-unlock, or vice versa.
  • Other unlock mechanisms such as text passwords, drawing predefined patterns, etc., may also be used as back-up unlock mechanisms for unlocking the mobile computing device 100 in case the voice-unlock fails.
  • although the dynamic voice-unlock process is discussed above using the mobile computing device 100 (e.g., smartphones, tablet computers, laptop computers, wearable electronic devices) as an example, other computer systems may also benefit from the dynamic voice-unlock process disclosed herein.
  • a desktop workstation or a server may also be “unlocked” via the dynamic voice unlock process discussed herein.
  • the authorized user's voice may be recorded from online chatting sessions, for example.
  • FIG. 7 is a simplified flowchart illustrating a method 300 for performing the dynamic voice-unlock process discussed above.
  • One or more steps of the method 300 are performed by a mobile computing device of the user.
  • the mobile computing device includes a mobile telephone, a tablet computer, or a laptop computer.
  • the method 300 includes a step 305 , in which the mobile computing device is engaged in a voice-based interaction with a user of the mobile computing device.
  • the method 300 includes a step 310 , in which a spoken phrase from the user is recorded during the voice-based interaction.
  • the method 300 includes a step 315 , in which a segment of the spoken phrase is selected as a password for authenticating the user.
  • the method 300 includes a step 320 , in which a recording of the segment of the spoken phrase is saved as a recorded password in a database.
  • the method 300 includes a step 325 , in which a request to gain access to the mobile computing device is received from the user.
  • the method 300 includes a step 330 , in which the user is prompted to speak the password to the mobile computing device.
  • the method 300 includes a step 335 , in which one or more words spoken by the user in response to the prompting is recorded.
  • the method 300 includes a step 340 , in which the one or more words spoken by the user is compared with the recorded password.
  • the method 300 includes a step 345 , in which the user is authenticated in response to the comparing.
  • the comparing in step 340 includes matching a voice associated with the one or more words spoken by the user with a voice from the recorded password.
  • the spoken phrase in step 310 contains one or more identifiable words.
  • the selecting in step 315 is performed so that at least one of the identifiable words is randomly selected as the segment of the spoken phrase.
  • the engaging in step 305 includes conducting a telephone conversation. In some other embodiments, the engaging in step 305 includes performing a voice dictation. In yet other embodiments, the engaging in step 305 includes receiving a voice command from the user and initiating a task using the mobile computing device in response to the voice command.
  • the recording of step 310 is performed without alerting the user.
  • the steps 305 - 345 of the method 300 are not necessarily performed in numerical order. It is also understood that additional process steps may be performed before, during, or after the steps 305 - 345 in FIG. 7 .
  • in the embodiments discussed above, the spoken phrase may be referred to as a first spoken phrase, the password as a first password, and the request as a first request.
  • the method 300 may further include the following steps: after the authenticating, recording, via the mobile computing device, a second spoken phrase from the user, the second spoken phrase being different from the first spoken phrase; selecting a segment of the second spoken phrase as a second password for authenticating the user; saving a recording of the segment of the second spoken phrase as a recorded second password in the database; receiving, from the user, a second request to gain access to the mobile computing device; prompting, in response to the receiving the second request, the user to speak the second password to the mobile computing device; thereafter recording one or more words spoken by the user in response to the prompting; comparing the one or more words spoken by the user with the second recorded password; and thereafter authenticating the user in response to the comparing the one or more words spoken by the user with the second recorded password.
  • the method 300 may further include the following steps: receiving, from the user, a third request to gain access to the mobile computing device; prompting, in response to the receiving the third request, the user to speak one of: the first password or the second password to the mobile computing device; thereafter recording one or more words spoken by the user in response to the prompting; comparing the one or more words spoken by the user with the first recorded password or the second recorded password; and thereafter authenticating the user in response to the comparing the one or more words spoken by the user with the first recorded password or the second recorded password.
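
The first/second password bookkeeping above can be sketched as a small store that either keeps older passwords valid or erases them on rotation, matching the two embodiments described earlier. The class and method names are illustrative assumptions:

```python
import random

class DynamicVoicePasswordStore:
    """Holds the recorded voice passwords generated so far."""

    def __init__(self, keep_old=True):
        self.passwords = {}        # password text -> stored voice recording
        self.keep_old = keep_old   # embodiment choice: keep or erase old ones

    def rotate(self, text, recording):
        """Register a newly selected segment as the current password."""
        if not self.keep_old:
            self.passwords.clear()  # previous passwords no longer effective
        self.passwords[text] = recording

    def prompt_text(self):
        """Pick one stored password to display as the unlock prompt."""
        return random.choice(list(self.passwords))
```
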
  • mobile computing devices may be programmed to dim their screens (i.e., displays) after a period of inactivity from the user. The screen will then be turned off shortly thereafter if the user does not attempt to keep the screen on, for example by touching it.
  • Most modern day mobile computing devices offer the user an option to set the screen time-out period, for example anywhere from 30 seconds to a few minutes.
  • if the screen time-out period is set too low, the user may be constantly interrupted by the dimming of the screen while the user is still viewing content on the mobile computing device, which requires the user to touch the screen (or engage the mobile computing device in some other suitable manner) to indicate that the user is still actively using it.
  • the present disclosure offers an adaptive screen time-out to address the problems facing existing mobile computing devices.
  • the adaptive screen time-out is discussed in more detail below with reference to FIGS. 8-11 . Similar elements appearing in FIGS. 1 and 8-9 are labeled the same for reasons of clarity and consistency.
  • the display 110 of the mobile computing device 100 has detected a period of inactivity from the user (e.g., the user 150 shown in FIG. 6 ). Inactivity from the user may correspond to a lack of user input, such as an absence of a touch input to the display 110 .
  • the mobile computing device 100 may have a preset default screen time-out period, which may be set by the manufacturer of the mobile computing device 100 . In some embodiments, the screen time-out period may also be set by the user. For example, the user may go into a settings menu of the mobile computing device 100 and choose the default screen time-out period from a list of screen time-out periods. Alternatively, the user may also specify a screen dimming period, which occurs X seconds (e.g., 5 seconds) before the screen time-out period.
  • suppose the default screen time-out period is one minute, which means the display will be turned off if no touch input is received from the user for one continuous minute.
  • the mobile computing device 100 dims the display 110 before the display 110 is turned off, so as to give the user an opportunity to “revive” the display 110 (e.g., un-dim it) or otherwise indicate that he/she is still using the mobile computing device 100 .
  • the display 110 may be dimmed after 55 seconds of inactivity from the user, and if the user does not engage the mobile computing device 100 in some manner during the next 5 seconds (e.g., touching the display 110 ), the display 110 will be turned off.
  • suppose the user is still actively using the mobile computing device 100 (e.g., viewing a web page on the display 110 ).
  • the user may then engage the mobile computing device 100 .
  • the user engages the mobile computing device 100 by touching the display 110 with his/her hand 360 while the display 110 is dimmed but not turned off completely.
  • the user may engage the mobile computing device 100 by another suitable mechanism, for example by touching the display 110 with a stylus, or by voice command, or by shaking/tilting/moving the mobile computing device 100 , etc.
  • when the mobile computing device 100 detects the user's engagement (e.g., the user's hand touching the display 110 ), the mobile computing device 100 undims the display 110 .
  • the mobile computing device 100 also increases the default screen time-out period. For example, the screen time-out period is increased from 1 minute to 2 minutes. In other words, the display 110 will be turned off after two minutes of continuous user inactivity, which also means the display 110 will be dimmed after 1 minute and 55 seconds of continuous user inactivity.
  • after the increased period of inactivity elapses, the display 110 will dim again, similar to what is shown in FIG. 8 . If the user is still actively using the mobile computing device 100 , he/she may again engage the mobile computing device 100 in a manner similar to those discussed above. The mobile computing device 100 will undim the display 110 in response to detecting the user's engagement, and increase the screen time-out period again, for example from 2 minutes to 4 minutes.
  • in some embodiments, the increase in the screen time-out period is linear (e.g., increasing by a predetermined amount every time). In other embodiments, the increase in the screen time-out period may be non-linear (e.g., geometric).
  • for example, the first increase in the screen time-out period may be from 1 minute to 2 minutes, the second increase from 2 minutes to 4 minutes, and the third increase from 4 minutes to 8 minutes.
  • the increase in the screen time-out period may be governed by a predefined algorithm.
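
For the geometric case, the update rule reduces to a doubling with a safety cap (the cap is discussed further below); a minimal sketch, with the specific constants as assumptions:

```python
DEFAULT_TIMEOUT_S = 60            # default screen time-out: 1 minute
MAX_TIMEOUT_S = 20 * 60           # cap, e.g. twenty minutes

def next_timeout(current_s, user_revived_screen):
    """Double the time-out each time the user revives the dimmed screen;
    revert to the default once the screen actually times out."""
    if user_revived_screen:
        return min(current_s * 2, MAX_TIMEOUT_S)   # geometric increase
    return DEFAULT_TIMEOUT_S                        # reset after a time-out
```
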
  • This process may be repeated as long as the user still engages the mobile computing device 100 in some manner to prevent the display 110 from being turned off. Consequently, the screen time-out period keeps on increasing. As the screen time-out period keeps on getting higher, the user is interrupted less frequently by the screen dimming. Therefore, it can be seen that the screen time-out period according to the present disclosure becomes adaptive to the user's behavior.
  • the user's engagement of the display 110 likely means that the user is still using the mobile computing device 100 , and therefore the mobile computing device 100 “learns” not to time out the display 110 too soon.
  • the increasingly-longer intervals before the user is disrupted by the dimming of the display 110 result in less user frustration and annoyance.
  • for example, over roughly the same viewing session, a user may have been interrupted 20 times by the screen dimming on an existing mobile computing device (i.e., dimming about every minute).
  • in comparison, the user may be interrupted only 4 times using the mobile computing device 100 with the adaptive screen time-out (e.g., interruptions at 1 minute, 3 minutes, 7 minutes, and 15 minutes).
  • the disruptions caused by the screen time-out occur far less-frequently, thereby enhancing user satisfaction.
  • meanwhile, the battery is not being wasted, because the user is still using the mobile computing device 100 .
  • if the user stops engaging the mobile computing device 100 , the display 110 will be turned off.
  • the screen time-out period is then reset to the default screen time-out period, i.e., 1 minute in the example discussed above.
  • the display 110 will be timed out again after 1 minute (and be dimmed after 55 seconds). The subsequent user engagement of the mobile computing device 100 (or the lack thereof) will determine whether the screen time-out period will be adjusted.
  • the longer screen time-out period hardly results in waste of battery resources, since the only possible scenario where battery waste occurs is when the screen time-out period has been increased to a fairly long period and the user has forgotten to manually turn off the display 110 after he/she finishes using it. This is unlikely to occur, but even if it does occur, the battery waste happens only once, as the screen time-out period will revert back to the (relatively) short default screen time-out period when the display 110 times itself out after user inactivity.
  • the screen time-out period according to the present disclosure may still be capped at some maximum amount, for example ten or twenty minutes. This ensures that an inadvertent user error (i.e., forgetting to turn the display 110 off) will not lead to a catastrophic failure (e.g., the display remaining on for too long and draining most, if not all, of the battery).
  • although the adaptive screen time-out of the present disclosure has been described using a touch-sensitive screen of a mobile computing device 100 as an example, the concept may be applied to other computing devices that rely on turning off the screen to conserve power.
  • most traditional laptops use a trackpad and/or a keyboard as inputs.
  • the screen of the laptop may dim and then turn itself off if user inactivity has been detected for a predetermined period of time. For example, the user has not typed anything through the keyboard or touched the touch pad for X seconds or minutes.
  • the laptop will also increase its screen time-out period if the user engages the laptop (e.g., touching the touchpad or typing on the keyboard, or even through a speaker/microphone).
  • such a laptop may achieve substantially the same benefits from utilizing the adaptive screen time-out discussed above.
  • Another aspect of the adaptive screen time-out of the present disclosure involves monitoring a frequency of the user engagement with the mobile computing device 100 , and adjusting the screen time-out period in response to the monitored frequency. For example, while content is being displayed on the display 110 , as shown in FIG. 9 , the mobile computing device 100 monitors how frequently the user's hand (or stylus) touches the display 110 . If a user normally touches the display 110 frequently, for example at least once every few seconds, then a prolonged inactivity for that user (for example 30 seconds or a minute) is an indication that the user is no longer using the mobile computing device 100 .
  • the relatively prolonged inactivity is a deviation from the user's normal behavior (e.g., touching the display 110 every few seconds), which probably indicates that the user is no longer viewing content on the display 110 .
  • if the user rarely touches the display 110 , then even a relatively prolonged inactivity from the user does not necessarily mean that he/she is no longer using the mobile computing device 100 , because such inactivity could be well within the normal behavior of such a user.
  • the screen time-out period may be adjusted accordingly to reflect the likelihood of whether the user is using the mobile computing device 100 or not. If the user is frequently engaged with the mobile computing device 100 , then the screen time-out period may be shortened, since an absence of user input even within a short time span likely means the user is no longer using the mobile computing device 100 . On the other hand, if the user is infrequently engaged with the mobile computing device 100 , then the screen time-out period may be lengthened, since an absence of user input even within a relatively prolonged time span does not necessarily mean that the user is no longer using the mobile computing device 100 .
  • the mobile computing device 100 provides a plurality of predefined ranges of frequency of engagement with the mobile computing device 100 .
  • these ranges may include: a first range where the user engages with the mobile computing device 100 (e.g., by touching the display 110 ) every few seconds, a second range where the user engages with the mobile computing device 100 every tens of seconds, and a third range where the user engages with the mobile computing device 100 every minute or so. It is understood, of course, that the above example does not require the user's engagements with the mobile computing device 100 to be evenly or uniformly spaced apart.
  • rather, these ranges may simply indicate that user engagement of the mobile computing device 100 occurs with the above-listed frequency on average (e.g., the user touches the display 110 every few seconds on average), or alternatively that a period of inactivity does not exceed a corresponding duration (e.g., the user does not go more than a few seconds without touching the display 110 while still using the mobile computing device 100 ).
  • the mobile computing device 100 may then associate a plurality of predefined screen time-out periods with the predefined ranges of frequency of user engagements, respectively.
  • the user may associate a first screen time-out period of 30 seconds with the first range of frequency of user engagement with the mobile computing device 100 , a second screen time-out period of 1 minute with the second range of frequency of user engagement with the mobile computing device 100 , and a third screen time-out period of 5 minutes with the third range of frequency of user engagement with the mobile computing device 100 discussed above.
  • the mobile computing device 100 then dynamically adjusts the screen time-out period for the display 110 based on the monitored user engagement frequency. For example, if the user has been monitored to touch the display every 5 seconds or so, then the observed user engagement frequency falls within the first predefined range, and correspondingly the mobile computing device 100 sets the screen time-out period to 30 seconds. This is because, since the user touches the display 110 so frequently, a mere 30 seconds of the display 110 not being touched is a good indication that the user is no longer using the display 110 . As another example, if the user has been monitored to touch the display every minute or so, then the observed user engagement frequency falls within the third predefined range, and correspondingly the mobile computing device 100 sets the screen time-out period to 5 minutes.
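
The range-to-period association can be sketched as a simple lookup on the observed average gap between touches. The boundary values below are illustrative assumptions following the three example ranges above:

```python
def timeout_for_engagement(avg_gap_seconds):
    """Map the average observed gap between user touches to a
    screen time-out period, per the example ranges above."""
    if avg_gap_seconds <= 10:      # first range: touches every few seconds
        return 30                  # first time-out period: 30 seconds
    if avg_gap_seconds <= 60:      # second range: every tens of seconds
        return 60                  # second time-out period: 1 minute
    return 300                     # third range: every minute or so -> 5 minutes
```
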
  • in that case, the display 110 should be programmed to remain turned on for a bit longer.
  • the screen time-out period is dynamically and adaptively adjusted as an inverse (though not necessarily linear) function of the frequency of user engagement with the mobile computing device 100 .
  • the more frequently the user engages the mobile computing device 100 (e.g., by touching the display 110 ), the shorter the screen time-out period is set.
  • the less frequently the user engages the mobile computing device 100 , the longer the screen time-out period is set, as illustrated in the sketch below.
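  • As a minimal illustration of the range-based adjustment described above, the following Python sketch maps a monitored average interval between touch inputs to one of the predefined screen time-out periods. The range boundaries and helper names are illustrative assumptions; the disclosure does not prescribe specific values.

    # Hypothetical sketch: each (upper_bound_s, timeout_s) pair associates a
    # range of average touch-input intervals with a predefined time-out.
    ENGAGEMENT_RANGES = [
        (10.0, 30),            # touches every few seconds     -> 30-second time-out
        (45.0, 60),            # touches every tens of seconds -> 1-minute time-out
        (float("inf"), 300),   # touches every minute or so    -> 5-minute time-out
    ]

    def select_timeout(avg_touch_interval_s: float) -> int:
        """Return the screen time-out (seconds) for the observed engagement."""
        for upper_bound_s, timeout_s in ENGAGEMENT_RANGES:
            if avg_touch_interval_s <= upper_bound_s:
                return timeout_s
        return 300

    print(select_timeout(5.0))   # 30: first range, matching the example above
    print(select_timeout(60.0))  # 300: third range, "every minute or so"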
  • the monitoring of the user frequency of engagement with the mobile computing device 100 is done consistently, for example every time the user uses the mobile computing device 100 . In other embodiments, however, the monitoring of the user frequency of engagement with the mobile computing device 100 is done at predetermined time intervals, for example every few hours, days, or even weeks.
  • although the user engagement with the mobile computing device 100 is described above using touching the display 110 as an example, the user engagement may take other forms in alternative embodiments, such as moving the mobile computing device 100 , tilting the mobile computing device 100 , talking to the mobile computing device 100 , etc.
  • the mobile computing device 100 may be compatible with multiple user profiles, where multiple authorized users may each be authorized to use the mobile computing device 100 .
  • the present disclosure may track each user's frequency of engagement with the mobile computing device 100 and associate that engagement frequency with the user's profile. For example, a husband and a wife may both be authorized users of the mobile computing device 100 , so they each have an account with the mobile computing device 100 . Through monitoring their frequency of engagement with the mobile computing device 100 , the mobile computing device 100 determines that the husband user tends to engage the mobile computing device 100 frequently, while the wife user tends to engage the mobile computing device 100 infrequently. These engagement tendencies may be electronically stored with their respective user profiles.
  • when the husband user is logged in, the screen time-out period is dynamically adjusted to be relatively short.
  • when the wife user is logged in, the screen time-out period is dynamically adjusted to be relatively long. These adjustments are made automatically, without necessarily requiring the user's request to do so. Of course, the user may still actively override these dynamically adjusted screen time-out periods via the settings.
  • the frequency of user engagement with the mobile computing device 100 associated with each user profile may also be updated at predetermined time intervals, for example every few hours, days, or weeks.
  • the mobile computing device 100 also takes into account the particular application the user is running while the engagement frequency is monitored. For example, the user may have different engagement frequencies with the mobile computing device 100 with respect to a web page application and an email application.
  • the predefined screen time-out periods discussed above may be associated with a particular application as well. For example, if user inactivity has been observed for 20 seconds while the user is browsing a web site, the screen time-out period may be adjusted to a particular predefined screen time-out period. But if user inactivity (for the same user) has been observed for 20 seconds while the user is using an email client, the screen time-out period may be adjusted to a different predefined screen time-out period. Again, this is because even the same user may have different tendencies of engaging the mobile computing device 100 while using different applications (or programs) on the mobile computing device 100 .
  • the mobile computing device 100 may derive an initial (or default) screen time-out period based on monitoring the frequency of user engagement with the mobile computing device 100 . This initial screen time-out period may then be dynamically adjusted subsequently by the user's engagement with the mobile computing device 100 when the display 110 is dimmed.
  • FIG. 10 is a simplified flowchart illustrating a method 400 for performing the dynamic screen time-out discussed above.
  • the mobile computing device includes a mobile telephone, a tablet computer, a laptop computer, or a wearable electronic device such as a smart watch or smart glasses.
  • the method 400 includes a step 405 , in which a default screen time-out period is received for a mobile computing device.
  • the screen time-out period specifies an amount of time that passes without the mobile computing device receiving any touch input from a user before a screen of the mobile computing device turns off.
  • the method 400 includes a step 410 , in which content is displayed on the screen of the mobile computing device.
  • the method 400 includes a step 415 , in which the screen is dimmed after the content has been displayed, without receiving any touch input from the user, for an amount of time shorter than the default screen time-out period by X number of seconds. X may be a predefined integer, for example.
  • the method 400 includes a step 420 , in which a touch input is received from the user.
  • the method 400 includes a step 425 , in which the screen is undimmed in response to receiving the touch input.
  • the method 400 includes a step 430 , in which the default screen time-out period is increased in response to receiving the touch input.
  • the default screen time-out period is increased to a first new screen time-out period, the first new screen time-out period being longer than the default screen time-out period.
  • the method 400 includes a step 435 , in which the screen is turned off after the first new screen time-out period has passed without receiving any touch input from the user.
  • the method 400 includes a step 440 , in which the first new screen time-out period is reset back to the default screen time-out period.
  • the screen of the mobile computing device is a touch-sensitive screen.
  • the touch input from the user is received through the touch-sensitive screen.
  • the mobile computing device comprises a touch pad that is separate from the screen of the mobile computing device. The touch input from the user is received through the touch pad.
  • the step 405 of receiving the default screen time-out period includes the following sub-steps: prompting the user to specify a value for the default screen time-out period; and designating a user-specified value as the default screen time-out period.
  • the step 405 of receiving the default screen time-out period includes the following sub-steps: prompting the user to enter a screen dimming period that specifies an amount of time that passes without the mobile computing device receiving any touch input from the user before the screen of the mobile computing device dims; and calculating the default screen time-out period based on a user-entered screen dimming period.
  • the steps 405 - 440 of the method 400 are not necessarily performed in numerical order. It is also understood that additional process steps may be performed before, during, or after the steps 405 - 440 in FIG. 10 .
  • the method 400 may further include the following steps: continuing the displaying of the content on the screen of the mobile computing device; dimming the screen after the content has been displayed for a further amount of time shorter than the default screen time-out period by X number of seconds; thereafter receiving, from the user, a further touch input to the screen; un-dimming the screen in response to receiving the further touch input; and increasing, in response to receiving the further touch input, the first new screen time-out period to a second new screen time-out period.
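  • A compact sketch of the dim-then-extend behavior of method 400 appears below. The doubling rule and the constants are assumptions for illustration only; the disclosure specifies the ordering of the steps, not the amount of the increase.

    # Hypothetical sketch of one method-400 cycle: dim X seconds before the
    # time-out expires; a touch un-dims and lengthens the time-out (step 430),
    # otherwise the screen turns off (step 435) and the time-out resets (440).
    DEFAULT_TIMEOUT_S = 30
    DIM_LEAD_S = 5        # the "X" seconds of step 415
    EXTEND_FACTOR = 2     # assumed growth rule

    def dim_screen():      print("screen dimmed")
    def undim_screen():    print("screen un-dimmed")
    def turn_screen_off(): print("screen off")

    def screen_timeout_cycle(timeout_s: int, touched_while_dimmed: bool) -> int:
        """Run one cycle; return the time-out period to use next."""
        # Steps 410-415: content shown, no touch for (timeout_s - DIM_LEAD_S) s.
        dim_screen()
        if touched_while_dimmed:
            undim_screen()                    # step 425
            return timeout_s * EXTEND_FACTOR  # step 430: first new time-out
        turn_screen_off()                     # step 435
        return DEFAULT_TIMEOUT_S              # step 440: reset to default

    timeout = screen_timeout_cycle(DEFAULT_TIMEOUT_S, touched_while_dimmed=True)
    timeout = screen_timeout_cycle(timeout, touched_while_dimmed=False)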
  • FIG. 11 is a simplified flowchart illustrating a method 500 for performing the dynamic screen time-out discussed above.
  • the mobile computing device includes a mobile telephone, a tablet computer, a laptop computer, or a wearable electronic device such as a smart watch or smart glasses.
  • the method 500 includes a step 505 , in which a screen time-out period is set for a mobile computing device.
  • the screen time-out period specifies an amount of time that passes without the mobile computing device receiving any touch input from a user before a screen of the mobile computing device turns off.
  • the method 500 includes a step 510 , in which content is displayed on the screen of the mobile computing device.
  • the method 500 includes a step 515 , in which a frequency of touch inputs from the user is monitored while the content is being displayed.
  • the method 500 includes a step 520 , in which a plurality of predefined ranges of frequency of touch inputs is provided.
  • the method 500 includes a step 525 , in which a plurality of different predefined screen time-out periods is associated with the predefined ranges of frequency of touch inputs, respectively.
  • the method 500 includes a step 530 , in which a first predefined range is identified.
  • the first predefined range encompasses the monitored frequency of touch inputs.
  • the method 500 includes a step 535 , in which the screen time-out period is adjusted in response to the monitoring.
  • the adjusting the screen time-out period includes setting the predefined screen time-out period associated with the first predefined range as a new screen time-out period.
  • the adjusting the screen time-out period includes setting a new screen time-out period as an inverse function of the frequency of touch inputs from the user.
  • the steps 505 - 535 of the method 500 are not necessarily performed in numerical order. It is also understood that additional process steps may be performed before, during, or after the steps 505 - 535 in FIG. 11 .
  • the method 500 may further include a step of associating the monitored frequency of touch inputs with a user profile.
  • the method 500 may further include a step of adjusting the screen time-out period as a function of an application running on the mobile computing device. For reasons of simplicity, additional steps are not discussed herein.
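  • For the variant of step 535 in which the new time-out is computed as an inverse function of the touch frequency (rather than looked up from predefined ranges), a sketch might look like the following; the constant k and the clamping bounds are illustrative tuning values, not figures from the disclosure.

    # Hypothetical sketch: time-out as an inverse (non-linear, clamped)
    # function of touch frequency, per the alternative form of step 535.
    def timeout_from_frequency(touches_per_minute: float,
                               k: float = 60.0,
                               floor_s: float = 15.0,
                               ceiling_s: float = 300.0) -> float:
        if touches_per_minute <= 0:
            return ceiling_s                 # no engagement: longest time-out
        raw_s = k / touches_per_minute       # inverse relationship
        return max(floor_s, min(ceiling_s, raw_s))

    print(timeout_from_frequency(12.0))  # frequent touches -> short time-out
    print(timeout_from_frequency(0.5))   # rare touches -> long time-out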
  • Existing mobile computing devices may allow the user to perform an unlock function via the lock screen, or launch shortcuts to one or more applications of the mobile computing devices from the lock screen directly.
  • the lock screens of existing mobile computing devices may not offer the customizability and security expected for an advanced mobile computing device.
  • the present disclosure offers a lock screen that is customizable and offers security control.
  • an example lock screen 550 is illustrated on the display 110 of the mobile computing device 100 .
  • the lock screen 550 may contain information such as time of the day, the day of the week, and the date.
  • the lock screen 550 may also contain a plurality of icons, some examples of which are illustrated in FIG. 12 as icons 560 - 563 . These icons 560 - 563 each represent a different application that can be run on the mobile computing device 100 . Stated differently, each application includes a task that is executable on the mobile computing device 100 .
  • these icons 560 - 563 correspond to “Phone”, “Web”, “Email”, and “Gallery”, respectively.
  • a user may perform a predefined engagement action with one of the icons 560 - 563 , such as clicking on the icon or dragging it across a portion of the lock screen 550 to directly launch the application associated with the icon. If no additional security checks are implemented (e.g., prompting the user to enter a password), the launching of the application also unlocks the mobile computing device 100 . However, the lack of additional security checks may not be desirable for some users, who prefer to restrict access to the mobile computing device 100 to authorized users only.
  • otherwise, launching an application securely requires two steps: step 1: choose an application to launch by performing an action with the icon representing the application on the lock screen; and step 2: enter the necessary password (or pass an alternative security verification mechanism such as the face-unlock and/or voice-unlock discussed above) before the application is actually launched. If the user has to go through such unlocking procedures every time he/she wants to use the phone, it may prove cumbersome and annoying.
  • the mobile computing device 100 implements a lock screen from which the user can quickly launch applications by entering one or more symbols predefined by the user.
  • the mobile computing device 100 performs a handwriting analysis or an image comparison analysis that compares the symbol entered by the user with a previous symbol defined by the user and stored in a digital memory of the mobile computing device 100 .
  • the handwriting analysis or the image comparison analysis each serves as a security check for user authentication. The details of such a lock screen are discussed below with reference to FIGS. 13-26 .
  • the mobile computing device 100 displays a list of names for applications that can be directly launched from the lock screen.
  • the list of applications shown in FIG. 13A is merely an example and does not necessarily include every single available application.
  • the names of applications may be different from embodiment to embodiment.
  • the icons representing the applications may be displayed instead of, or in addition to, the names of the applications.
  • the list of applications shown in FIG. 13A is displayed when the user has gained access to the mobile computing device 100 at some point (via any suitable method such as password-unlock, face-unlock, voice-unlock, etc.).
  • the user has also decided to customize the applications that are launch-able from the lock screen. Therefore, the user may invoke the display shown in FIG. 13A through appropriate settings on the mobile computing device 100 .
  • the user may now select a particular application for which a customized launch shortcut is to be generated. In the example shown in FIG. 13A , the application selected is “Phone.”
  • the mobile computing device 100 prompts the user to draw a symbol for launching the application “Phone” from the lock screen.
  • the mobile computing device 100 displays an empty box 570 inside which the user can draw the symbol.
  • the user may draw the symbol either with his/her finger or with a stylus, or another suitable device for interacting with the touch-sensitive interface of the mobile computing device 100 .
  • the user draws a letter “p” to represent the application “Phone.”
  • the mobile computing device 100 associates the user-defined symbol (i.e., the letter “p” in this example) with the application “Phone” and saves the association into the digital or electronic memory of the mobile computing device 100 .
  • the mobile computing device 100 also conducts an electronic handwriting analysis on the symbol drawn by the user.
  • the characteristics of the symbol drawn by the user in accordance with the electronic handwriting analysis may also be saved into the memory of the mobile computing device 100 .
  • the mobile computing device 100 may record an image of the symbol defined by the user.
  • FIGS. 14A-14B to FIGS. 16A-16B illustrate additional examples for selecting an application (to be launched from the lock screen) and defining symbols to be associated with the selected application.
  • the application selected is “Movies”, and the user draws a letter “m” to be associated with the application “Movies.”
  • the letter chosen to represent the application need not be the initial letter of the name of the application though.
  • the application “Music” is selected, and since its first letter is “m” just like the applications “Maps” and “Movies”, it may not make sense for the user to assign the same letter to all three applications. Thus, the user may assign the letter “u” to the application “Music.”
  • the user may assign a letter that does not even appear in the name of the application, as long as the user can remember such association.
  • the symbol defined by the user need not even be a letter in a recognized alphabet.
  • the user can draw a spiral-like symbol to represent and be associated with the selected application “Compass.”
  • the mobile computing device may once again conduct a handwriting analysis on each user-defined symbol, or record an image of each user-defined symbol and save them into an electronic memory.
  • the user may now be able to directly launch the application “Phone” from the lock screen 550 , as shown in FIG. 17 .
  • the user may use his/her hand 360 (or with a stylus) to draw the letter “p” in the lock screen 550 .
  • the mobile computing device 100 detects this gesture-based input through the touch-sensitive display 110 and compares the symbol that was just entered by the user against the list of symbols previously defined by the user.
  • the comparison involves conducting another handwriting analysis on the symbol just entered by the user in an attempt to launch the “Phone” application from the lock screen. Based on the results of the handwriting analysis, the mobile computing device 100 determines whether the symbol just entered by the user sufficiently matches any of the previously-defined symbols. If the answer is yes, then the corresponding application—in this case the “Phone” application—is launched directly from the lock screen, as shown in FIG. 18 . An extra security verification step is bypassed, since the handwriting analysis can verify that it is indeed the authorized user who is trying to launch the application (and unlock the mobile computing device 100 in the process).
  • the mobile computing device 100 may deny access to the user, or in the alternative ask the user to go through a back-up security verification process (e.g., password unlock, face-unlock, etc.).
  • the mobile computing device 100 may simply record an image of the just-entered symbol and compare that symbol with the images of the list of previously-defined symbols. A machine/computer analysis may be done on these images to see if any of the images of the previously-defined symbols sufficiently matches the image of the symbol just captured. If the answer is yes, the mobile computing device 100 can then directly launch the application whose associated symbol image matches the image of the symbol just entered by the user. If the answer is no, then additional user authentication steps may be needed, or the user may be denied access to the mobile computing device 100 . Once again, not only is the desired application directly launched from the lock screen for the user's convenience, but the extra step of security verification is also bypassed, as the symbol image comparison analysis serves the purpose of the security check.
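  • A minimal sketch of the lock-screen symbol look-up just described is shown below. The matching step is abstracted behind a similarity score standing in for the handwriting analysis or image comparison; the stored associations, the threshold, and all names are illustrative assumptions.

    # Hypothetical sketch: compare the just-entered symbol against previously
    # defined symbols; launch on a sufficient match, otherwise fall back to
    # a back-up security verification (or deny access).
    from typing import Callable, Optional

    STORED_SYMBOLS = {"p": "Phone", "u": "Music", "spiral": "Compass"}
    MATCH_THRESHOLD = 0.8   # illustrative value

    def handle_lock_screen_entry(entered: str,
                                 similarity: Callable[[str, str], float]
                                 ) -> Optional[str]:
        """Return the application to launch, or None for back-up verification."""
        for stored, app in STORED_SYMBOLS.items():
            if similarity(entered, stored) >= MATCH_THRESHOLD:
                return app   # the match itself doubles as the security check
        return None

    exact = lambda a, b: 1.0 if a == b else 0.0  # trivial stand-in metric
    print(handle_lock_screen_entry("p", exact))  # Phone
    print(handle_lock_screen_entry("x", exact))  # None -> password/face unlock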
  • the mobile computing device 100 need not perform either the handwriting analysis or the symbol image comparison analysis. Instead, the mobile computing device 100 merely needs to determine which letter the symbol that was just entered by the user corresponds to. In other words, if the symbol previously defined by the user was deemed to constitute the letter "p", then the symbol that was just entered by the user does not necessarily have to match up with the previously-entered "p" in terms of handwriting characteristics. As long as the mobile computing device 100 can determine that the symbol just entered corresponds to the letter "p", the mobile computing device 100 can directly launch the "Phone" application from the lock screen.
  • FIGS. 19-20 illustrate another example of directly launching an application from the lock screen 550 based on a user-defined symbol.
  • the symbol entered by the user corresponds with the spiral-like symbol previously defined by the user to be associated with the “Compass” application, as shown in FIGS. 16A-16B .
  • the mobile computing device 100 may perform another handwriting analysis or symbol image comparison analysis to determine whether the symbol just entered by the user matches up with any of the previously-defined symbols.
  • the mobile computing device 100 may employ a lower threshold for verifying the just-entered symbol.
  • the rationale is that an unauthorized user who is trying to gain illegal access to the mobile computing device 100 is less likely to know about the existence of such an uncommon symbol. This means that if the uncommon symbol (or something similar thereto) was entered by a user, there is a higher likelihood that such a symbol was actually entered by the authorized user who knew of the uncommon symbol's existence and previous association with one of the applications.
  • Another rationale is that, unlike familiar letters in a well-known alphabet, the user may not have developed a patterned way in writing the uncommon symbol (such as the spiral-like symbol shown in FIGS. 16B and 19 ).
  • the mobile computing device 100 may be programmed to "forgive" or "overlook" minor deviations between the previously-defined symbol and the one just entered, since these minor deviations do not necessarily indicate that the symbol was just drawn by an unauthorized user.
  • the mobile computing device 100 may employ a stricter standard of verification to ensure that the symbol was indeed entered by the authorized user. In any case, once the spiral-like symbol entered by the user in FIG. 19 is deemed to match with the spiral-like symbol previously stored in FIG. 16B , the mobile computing device 100 directly launches the "Compass" application, as shown in FIG. 20 .
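  • The two-tier verification standard can be sketched as follows; the thresholds are invented for illustration, and only the relative ordering (stricter for well-known letters, more forgiving for uncommon symbols) reflects the discussion above.

    # Hypothetical sketch: pick a match threshold based on whether the
    # previously-defined symbol is a letter of a well-known alphabet.
    import string

    STRICT_THRESHOLD = 0.90   # well-known letters: users write them uniformly
    LENIENT_THRESHOLD = 0.70  # uncommon symbols: minor deviations forgiven

    def is_common_letter(symbol_label: str) -> bool:
        return len(symbol_label) == 1 and symbol_label in string.ascii_letters

    def symbol_matches(score: float, symbol_label: str) -> bool:
        threshold = (STRICT_THRESHOLD if is_common_letter(symbol_label)
                     else LENIENT_THRESHOLD)
        return score >= threshold

    print(symbol_matches(0.75, "p"))       # False: letters are held strictly
    print(symbol_matches(0.75, "spiral"))  # True: uncommon symbol, forgiven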
  • the symbol defined by the user and the application to be associated need not necessarily have a one-to-one correspondence.
  • multiple applications may be associated with the same symbol.
  • the user may be allowed to define a hand-written letter “m” to be associated with all three of these applications.
  • the computing device 100 may prompt the user to choose which one of the applications associated with the letter “m” should be launched, for example as shown in FIG. 23 .
  • the user may choose the “Music” app, for example, and the computing device 100 will launch the “Music” app, which is shown in FIG. 24 .
  • multiple symbols may be associated with the same application.
  • the user may additionally define another symbol 580 to also be associated with the application “Phone”, as shown in FIGS. 25A-25B .
  • the symbol 580 is not a common letter from a well-known alphabet, but rather a user-customized symbol that somewhat resembles a virtual representation of a telephone.
  • the user may define any arbitrary or random symbol to be associated with any of the applications, as long as the user can remember the symbols and their associations.
  • the user may also launch the “Phone” application by drawing the symbol 580 on the lock screen.
  • FIG. 26 is a simplified flowchart illustrating a method 600 for launching applications from the lock screen based on entering a user-defined symbol as discussed above.
  • One or more steps of the method 600 are performed by a mobile computing device of the user.
  • the mobile computing device includes a mobile telephone, a tablet computer, a laptop computer, or a wearable electronic device such as a smart watch or smart glasses.
  • the method 600 includes a step 605 of associating a plurality of tasks executable by a mobile computing device with a plurality of predefined symbols, respectively.
  • one of the tasks is an unlocking of the mobile computing device.
  • the method 600 includes a step 610 of turning on a touch-sensitive display of the mobile computing device in response to a request from a user, the display showing a lock screen.
  • the method 600 includes a step 615 of detecting, through the touch-sensitive display, a gesture input from the user.
  • the gesture input represents one of the predefined symbols that has been associated with one of the tasks.
  • the gesture input is made by a finger of the user.
  • the gesture input is made by a stylus of the user.
  • the method 600 includes a step 620 of executing the task associated with the detected symbol from the gesture input.
  • the predefined symbols include letters of an alphabet.
  • the associating in step 605 is performed so that a name of each task contains a respective one of the letters that has been associated with the task.
  • the associating in step 605 includes the following sub-steps: displaying the tasks to the user; prompting the user to associate the predefined symbols to the tasks, respectively; receiving, from the user, associations between the tasks and the predefined symbols.
  • the step of displaying the tasks may include a step of listing names of the tasks or a step of displaying virtual representations of the tasks.
  • the step of prompting includes prompting the user to define one of the symbols by drawing the symbol on the touch-sensitive display.
  • the method 600 may further include the following steps: storing the symbol drawn by the user, wherein the storing comprises storing handwriting characteristics of the user; conducting, after the detecting the gesture input from the user, a handwriting analysis with respect to the detected symbol; comparing the handwriting characteristics of the stored symbol with handwriting characteristics of the detected symbol; granting the user access to the mobile electronic device if the handwriting characteristics of the stored symbol match the handwriting characteristics of the detected symbol; and denying the user access to the mobile electronic device if the handwriting characteristics of the stored symbol fail to match the handwriting characteristics of the detected symbol.
  • the step of comparing includes the following sub-steps: determining whether the detected symbol belongs to a well-known alphabet; employing a higher verification standard if the detected symbol belongs to a well-known alphabet; and employing a lower verification standard if the detected symbol does not belong to a well-known alphabet.
  • the associating in step 605 includes associating multiple symbols to at least one of the tasks.
  • the associating in step 605 includes associating multiple tasks to at least one of the symbols.
  • the steps 605 - 620 of the method 600 are not necessarily performed in numerical order. It is also understood that additional process steps may be performed before, during, or after the steps 605 - 620 in FIG. 26 .
  • the method 600 may further include a step of presenting the user with an additional security verification mechanism after the detecting in step 615 but before the executing of the task in step 620 .
  • the additional security verification mechanism may include a password-unlock, a face-unlock, or a voice-unlock. For reasons of simplicity, other additional steps are not specifically discussed in FIG. 26 herein.
  • Another limitation of existing mobile computing devices relates to their audio volume control. These devices allow their users to adjust the volume, for example through a volume slider bar or via a mute/unmute button.
  • existing mobile computing devices are not "intelligent" enough to automatically adjust the volume settings based on factors such as environment, context, or location. As an example, suppose a user prefers to put his/her phone (an example mobile computing device) in a "vibrate" or "mute" mode so that incoming calls or emails do not disturb him/her, or at least not in a loud audible manner. However, that user may also need to use the phone as a navigation device from time to time, in which case the user may need audio navigation instructions from the phone.
  • the user would have to manually increase the volume of the phone before or during navigation. This may be difficult if the user is driving a car, as the user needs to fiddle with the volume settings of the phone while he/she is supposed to be paying full attention to the road. This is not only frustrating to the user, but also dangerous. Even if the user is not using navigation that requires a loud audio output, the user may still prefer to have the phone not on “vibrate” or “mute”, because incoming phone calls and/or messages may be missed due to the loud noise produced by the car during driving. Again, fiddling with audio controls of the phone while driving is not desirable.
  • the user may forget to change the volume settings of the phone to comply with these situations because he/she may still be operating under the assumption that the phone was already in a “vibrate” or “mute” mode, since that is the standard default setting for the phone. Again, this may lead to user frustration with the phone.
  • the present disclosure offers a contextually-aware mobile computing device.
  • the contextually-aware mobile computing device is a smartphone, but it is understood that the contextually-aware mobile computing device may be a tablet computer, a laptop computer, or a wearable electronic device such as a smart watch or smart glasses in other embodiments.
  • FIG. 27 illustrates a simplified environment in which the contextually-aware smartphone of the present disclosure operates.
  • a vehicle 650 is illustrated.
  • the vehicle 650 may be an automobile as illustrated, but may be another type of transportation device in other embodiments.
  • a mount 660 is disposed within the vehicle 650 .
  • the mount 660 is configured to hold and interface with a mobile computing device, such as the mobile computing device 100 discussed above.
  • the mount 660 is pre-installed in the vehicle 650 .
  • the mount 660 may be an aftermarket part that a user of the vehicle 650 installs in the vehicle.
  • a smartphone 670 is also located in the vehicle 650 . In the illustrated embodiment, the smartphone 670 is mounted on the mount 660 , but this may not be necessary in alternative embodiments.
  • the smartphone 670 detects that it has been placed in the vehicle 650 . In some embodiments, the detection is made in response to the smartphone 670 communicating with an electronic communications interface of the mount 660 .
  • the electronic communications interface may include one or more ports that mate with those on the smartphone 670 , such as USB ports. Since the mount is located in the vehicle 650 , the smartphone 670 “knows” that it must be in the vehicle 650 now.
  • the smartphone 670 is equipped with sensors such as accelerometers. These sensors can be used to determine whether the smartphone 670 is moving at a fast speed, for example faster than the average running speed of a human. The detection of the smartphone 670 traveling at such high speed is another indication that the smartphone 670 is now located in a vehicle.
  • the smartphone 670 is determined to be in a vehicle only if the speed at which the smartphone 670 is traveling is faster than the maximum speed of a human but still within a normal range of speeds for an automobile, for example in a range from about 15 miles per hour to about 120 miles per hour. If the smartphone 670 is traveling at a speed faster than this range, then it might be an indication that the smartphone 670 (and its user) is actually in an aircraft, and not in a vehicle, in which case the smartphone 670 should remain silent.
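  • The speed test reduces to a simple range check, sketched below with the "about 15 to about 120 miles per hour" window mentioned above; treating anything outside the window as "not in an automobile" is exactly the aircraft safeguard just described.

    # Hypothetical sketch of the speed-based vehicle determination.
    VEHICLE_SPEED_RANGE_MPH = (15.0, 120.0)

    def in_vehicle(speed_mph: float) -> bool:
        low, high = VEHICLE_SPEED_RANGE_MPH
        return low <= speed_mph <= high

    print(in_vehicle(8.0))    # False: within human running speed
    print(in_vehicle(65.0))   # True: typical automobile speed
    print(in_vehicle(500.0))  # False: likely an aircraft; remain silent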
  • after the smartphone 670 detects that it is inside the vehicle 650 , it increases the audio output volume on the smartphone 670 .
  • the audio output volume is automatically set to a maximum volume. In other embodiments, the audio output volume is automatically increased to a volume greater than the volume before it is placed within the vehicle 650 .
  • This volume need not necessarily be the maximum volume, for example it can be a volume preset by the user, such as 80% or 75% of the maximum volume. By doing so, the user need not fiddle with the volume controls on the smartphone 670 while trying to obtain navigation instructions.
  • the automatic increase of the audio output volume temporarily overrides a “vibrate” or “mute” setting (i.e., the previous audio setting) for the smartphone 670 .
  • the automatic increase of the audio output volume of the smartphone 670 can be made more restrictive. For example, if the user wants the smartphone 670 to be loud only for receiving navigation directions, but does not mind potentially missing phone calls or other incoming alerts, then the audio output volume can be programmed to automatically increase only if the user has requested navigation instructions. Therefore, in such embodiments, the mere fact that the smartphone 670 has detected its placement within the vehicle 650 will not necessarily trigger the automatic increase of its audio output volume. The smartphone 670 must also detect a request to receive navigation instructions from a user before the audio output volume is automatically increased.
  • once the smartphone 670 determines that it has been placed inside the vehicle 650 , it first detects and records its current volume setting before increasing the volume. For example, if the smartphone 670 was on "vibrate" right before it was placed inside of the vehicle 650 , this "vibrate" setting is recorded by the smartphone 670 before the volume is increased (e.g., increased to a maximum volume).
  • when the smartphone 670 is later taken outside of the vehicle 650 , the smartphone 670 will detect this event.
  • the smartphone 670 detects it being taken outside of the vehicle 650 by detecting a break with the electronic communication interface of the mount 660 .
  • the smartphone 670 detects it being taken outside of the vehicle 650 by detecting that the speed of the smartphone is no longer within the range of a normal operating speed of the vehicle 650 .
  • the smartphone 670 may require both a break with the electronic communication interface of the mount 660 AND a detection of a speed outside of the normal operating speed of the vehicle to deem that the smartphone 670 has been taken outside of the vehicle 650 .
  • after the smartphone 670 deems that it is no longer inside the vehicle 650 , it will reset the audio output volume to the volume setting recorded right before the smartphone 670 was placed inside the vehicle 650 (and thus before the audio output volume of the smartphone 670 was increased). This ensures that the user need not worry about resetting the volume control settings manually after reaching the target destination. In other words, by being contextually-aware (i.e., detecting that it is no longer inside the vehicle 650 ), the smartphone 670 automatically performs a task (i.e., restoring the volume to its previous setting) that the user would otherwise have had to do once the target destination is reached.
  • the smartphone 670 restores its audio output volume back to a “vibrate” mode after detecting that it is no longer inside the vehicle 650 , since the “vibrate” setting was the previous audio setting used for the smartphone 670 prior to it being placed inside the vehicle 650 .
  • if the recorded volume setting was, for example, 50% of the maximum volume, the smartphone 670 will restore the volume to 50% after the smartphone 670 is taken outside of the vehicle 650 .
  • the user will not be potentially disrupted by loud phone calls, emails, or messages in an inappropriate environment.
  • the contextually-aware audio output volume adjustment mode discussed above may be entirely disabled by the user (e.g., through the appropriate settings) if the user so desires. However, if such mode is enabled, the smartphone 670 can automatically output a loud volume in a vehicle when it is desired, and automatically restore the smartphone 670 to its original volume once it is outside the vehicle, without requiring user intervention. As such, user satisfaction with respect to using the smartphone 670 should be increased.
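  • The record/raise/restore behavior amounts to a small state machine, sketched here in Python; the 80% preset and the class interface are assumptions, chosen only to mirror the sequence of events described above.

    # Hypothetical sketch: save the prior volume on entering the vehicle,
    # raise to a preset level, and restore the saved volume on exit.
    class ContextualVolume:
        def __init__(self, volume: float):
            self.volume = volume        # 0.0 (mute/vibrate) .. 1.0 (maximum)
            self._saved = None

        def on_enter_vehicle(self, preset: float = 0.8):
            self._saved = self.volume   # record the setting before overriding
            self.volume = preset        # override even a "vibrate"/"mute" mode

        def on_exit_vehicle(self):
            if self._saved is not None:
                self.volume = self._saved   # restore the pre-vehicle setting
                self._saved = None

    phone = ContextualVolume(volume=0.0)   # user had the phone muted
    phone.on_enter_vehicle()               # navigation is now audible
    phone.on_exit_vehicle()
    print(phone.volume)                    # 0.0: mute restored automatically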
  • FIG. 29 is a simplified flowchart illustrating a method 700 for operating the contextually-aware mobile computing device discussed above.
  • the mobile computing device includes a mobile telephone, a tablet computer, a laptop computer, or a wearable electronic device such as a smart watch or smart glasses.
  • the method 700 includes a step 705 , in which a determination is made that a mobile computing device has been placed in a vehicle.
  • the mobile computing device having a programmable audio output volume.
  • the vehicle is an automobile.
  • the method 700 includes a step 710 , in which an audio output volume setting of the mobile computing device is recorded after the determination is made that the mobile computing device is placed in the vehicle.
  • the method 700 includes a step 715 , in which in response to the determination made in step 705 , the audio output volume of the mobile computing device is automatically increased to a predefined audio output volume.
  • the method 700 includes a step 720 , in which a request to provide navigational instructions is received from a user.
  • the method 700 includes a step 725 , in which the navigational instructions are provided to the user.
  • the navigational instructions include audio navigational instructions at the predefined audio output volume.
  • the method 700 includes a step 730 , in which after the audio output volume is automatically increased, a detection is made that the mobile computing device has been taken out of the vehicle.
  • the method 700 includes a step 735 , in which the mobile computing device is automatically restored to the recorded audio output volume setting.
  • the predefined output volume is a maximum output volume of the mobile computing device.
  • the determining step in 705 includes a step of detecting that the mobile computing device has been plugged into a dock inside the vehicle. In some embodiments, the step of detecting includes detecting that one or more ports of the mobile computing device have been connected to an electronic interface of the dock.
  • the mobile computing device includes one or more sensors that measure a movement speed of the mobile computing device, in which case the step of determining in step 705 includes: a step of measuring, via the one or more sensors, the movement speed of the mobile computing device; and a step of determining that the mobile computing device has been placed in the vehicle in response to the measured movement speed of the mobile computing device being within a predefined speed range.
  • the predefined speed range is above a maximum speed of a human and within a normal speed range of the vehicle.
  • the steps 705 - 735 of the method 700 are not necessarily performed in numerical order. It is also understood that additional process steps may be performed before, during, or after the steps 705 - 735 in FIG. 29 .
  • the method 700 may further include a step of decreasing, before the determining, the audio output volume of the mobile computing device in response to user request.
  • the step of decreasing the audio output volume of the mobile computing device comprises muting the mobile computing device.
  • the step of automatically increasing the audio output volume comprises overriding the muting of the mobile computing device.
  • FIG. 30 illustrates another scenario in which the contextually-aware mobile computing device 100 performs automatic volume control.
  • the user 150 is at a facility 800 .
  • the facility 800 is a church (and it is hereinafter referred to as the church 800 ), but it is understood that the facility 800 could be any other type of business, establishment, building, etc., in alternative embodiments.
  • the user 150 normally keeps the smartphone 670 (an example embodiment of the mobile computing device 100 ) on a relatively loud volume, but he/she would prefer to keep the smartphone 670 in a silent type mode when he/she is inside the church 800 .
  • the silent mode may be either a “vibrate” mode where the smartphone 670 vibrates instead of rings for incoming calls/messages/alerts, or a “mute” mode where the smartphone 670 mutes all sounds and does not vibrate either.
  • the user 150 would have to remember to actively set the smartphone 670 in the silent mode when he/she is inside the church 800 , and then remember to take the smartphone 670 out of the silent mode after leaving the church 800 .
  • the contextually-aware smartphone 670 performs automatic volume control based on the detected location of the smartphone 670 .
  • the smartphone 670 detects, via Global Positioning System (GPS) sensors on the smartphone 670 , that the user 150 always puts the smartphone 670 in a silent mode at the GPS coordinates corresponding to the church 800 . If this pattern is repeated frequently, the smartphone 670 may determine that the user 150 always intends to put the smartphone 670 in the silent mode while he/she is at these GPS coordinates (i.e., the user 150 being at the church 800 ).
  • once the smartphone 670 detects that it is at, or very close to, the GPS coordinates corresponding to the church 800 , it will record the audio output volume settings of the smartphone 670 (e.g., at 80% of the maximum volume), and then automatically set the smartphone 670 in the silent mode.
  • once the smartphone 670 detects that it is no longer at those GPS coordinates (i.e., the user 150 has left the church 800 ), it will take itself out of the silent mode and restore the volume to the recorded audio output volume settings (e.g., back to 80% of the maximum volume). Therefore, the user need not remember to tinker with the smartphone 670 's volume settings each time he/she goes to, and leaves, the church 800 .
  • the smartphone 670 is configured to receive a request from the user 150 to “remember” the present location (i.e., the GPS coordinates corresponding to the location of the church 800 ).
  • the request also specifies that the smartphone 670 should be set in a silent mode when the smartphone 670 is at this location.
  • the user 150 lets the smartphone 670 “know” at which location it should be silent.
  • the smartphone 670 records the location of the church 800 , for example through its GPS coordinates and saves it into an electronic database, which could be either locally maintained on the smartphone 670 itself, or remotely maintained in a remote database.
  • the smartphone 670 will record the audio output volume settings of the smartphone 670 before it is put in the silent mode. Thereafter, the next time the user 150 visits the church 800 , the smartphone 670 will automatically put itself in silent mode as long as the user 150 is in the church, and it will take itself out of the silent mode and restore the original audio output volume when the user 150 leaves the church.
  • the smartphone 670 will also gather time-related information when the smartphone 670 is put into the silent mode by the user 150 .
  • the time-related information may include the time of the day, the day of the week, or the day or week of the month, etc.
  • the smartphone 670 may be able to determine if there is a pattern associated with when the user 150 wants to put the smartphone 670 into the silent mode. For example, if the user 150 has consistently attempted to put the smartphone 670 into silent mode every Sunday morning from 9 AM to 11 AM, then this information may be used later to assist with the automatic volume control.
  • the smartphone 670 may activate the GPS sensors between about 9 AM and 11 AM on Sundays to detect the location of the smartphone 670 (and thus the user 150 ). If the GPS sensors detect the smartphone 670 being in the church 800 during this time frame, then the smartphone 670 may perform the automatic volume adjustment discussed above (i.e., silencing the smartphone 670 while at the church 800 and restoring the original volume after leaving the church 800 ).
  • the user 150 also does not necessarily have to physically be at the church 800 to accomplish the automatic volume control discussed above.
  • the user 150 may launch a map application 810 on the smartphone 670 , or on another suitable computing device capable of launching the map application 810 , for which the user 150 has an account.
  • the user 150 may use the map application to locate the church 800 and select it as a location where the user would want the smartphone 670 to be put in the silent mode.
  • the map application 810 can obtain the GPS coordinates of the church 800 and save it in a database of predetermined or predefined locations where the smartphone 670 should be silenced.
  • the smartphone 670 will detect (e.g., via the GPS sensors) that it has arrived at one of the locations where the smartphone 670 should be put into silent mode.
  • the smartphone 670 will thus put itself in the silent mode without requiring user intervention and remain silent as long as the user 150 (and thus the smartphone 670 itself) is still at the church 800 .
  • the smartphone 670 will detect the departure from the church 800 and will then restore the audio volume output to its original volume (which is saved before the smartphone went into silent mode). In this manner, the user 150 need not actually be at a particular location in order to instruct the smartphone 670 that it should be in the silent mode while at that location, and not in the silent mode when it is no longer at that location.
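  • The arrival/departure logic of this scenario (and of method 900 below) can be sketched as a geofence check; the 100-meter radius, the coordinates, and the distance approximation are all illustrative assumptions.

    # Hypothetical sketch: silence the phone while within a saved location
    # and restore the recorded volume on departure (steps 910-930).
    import math

    SILENT_LOCATIONS = [(37.4220, -122.0841)]   # e.g., saved church coordinates
    RADIUS_M = 100.0
    saved_volume = None

    def _distance_m(a, b):
        # Equirectangular approximation; adequate at geofence scale.
        lat = math.radians((a[0] + b[0]) / 2)
        dx = math.radians(b[1] - a[1]) * math.cos(lat)
        dy = math.radians(b[0] - a[0])
        return 6371000.0 * math.hypot(dx, dy)

    def at_silent_location(fix) -> bool:
        return any(_distance_m(fix, loc) <= RADIUS_M for loc in SILENT_LOCATIONS)

    def on_gps_fix(fix, current_volume: float) -> float:
        """Return the volume to apply after this GPS fix."""
        global saved_volume
        if at_silent_location(fix):
            if saved_volume is None:
                saved_volume = current_volume  # record the setting (step 915)
            return 0.0                         # silent mode (step 920)
        if saved_volume is not None:
            restored, saved_volume = saved_volume, None
            return restored                    # restore on departure (step 930)
        return current_volume

    print(on_gps_fix((37.4220, -122.0841), 0.8))  # 0.0: silenced on arrival
    print(on_gps_fix((37.5000, -122.0841), 0.0))  # 0.8: restored on departure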
  • FIG. 32 illustrates another embodiment of choosing one or more predetermined locations where the smartphone 670 should be silenced.
  • the smartphone 670 displays a list of different types or categories of businesses or establishments.
  • the user 150 is prompted to choose one or more of these types of business or establishments where he/she would want the smartphone 670 to be in silent mode.
  • the user has selected “Churches”, “Doctor's Offices”, and “Movie Theaters” as places where the smartphone 670 should be kept silent.
  • the smartphone 670 can obtain a list of churches, doctor's offices, and movie theaters in the user's city (or a plurality of cities that the user 150 frequently visits).
  • the smartphone 670 then retrieves the GPS coordinates (or other suitable positional information) for each church, doctor's office, and movie theater in the obtained list. These GPS coordinates are then saved in a database similar to the ones discussed above. Again, when the user 150 visits any of these places in the future, the smartphone 670 will detect the location and put itself in a silent mode. When the user 150 leaves these places, the smartphone 670 will restore its original volume settings.
  • the smartphone 670 need not save the GPS coordinates of the churches, doctor's offices, or movie theaters into a database. Instead, when the user visits a place, the smartphone 670 will look up the place being visited to see if it belongs to one of the categories selected by the user. The look-up may be done by retrieving the GPS coordinates of the place being visited, by telecommunications with other devices, or by cell tower triangulation, etc. Again, once a match is found, the automatic volume adjustment discussed above may be performed as well.
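  • A sketch of this category-based look-up follows; the place directory is a stand-in for whatever GPS, telecommunications, or cell-tower-based service resolves a position to a business category, and every name and coordinate here is hypothetical.

    # Hypothetical sketch: silence the phone when the place being visited
    # belongs to a user-selected category, without storing per-place entries.
    SILENT_CATEGORIES = {"church", "doctor's office", "movie theater"}

    PLACE_DIRECTORY = {                       # stand-in for a places service
        (37.4220, -122.0841): "church",
        (37.4305, -122.1100): "coffee shop",
    }

    def should_silence(fix) -> bool:
        return PLACE_DIRECTORY.get(fix) in SILENT_CATEGORIES

    print(should_silence((37.4220, -122.0841)))  # True: a selected category
    print(should_silence((37.4305, -122.1100)))  # False: volume unchanged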
  • FIG. 33 is a simplified flowchart illustrating a method 900 for operating the contextually-aware mobile computing device discussed above.
  • the mobile computing device includes a mobile telephone, a tablet computer, a laptop computer, or a wearable electronic device such as a smart watch or smart glasses.
  • the method 900 includes a step 905 of receiving a request from a user of the mobile computing device.
  • the request specifies that the mobile computing device should be set in a mute mode or in a vibrate mode when the mobile computing device is at a predetermined location.
  • the method includes a step 910 of detecting an arrival of the mobile computing device at the predetermined location after the step 905 .
  • the method includes a step 915 of recording, in response to the step of detecting the arrival in step 910 , an audio output volume setting of the mobile computing device.
  • the method includes a step 920 of setting, in response to the detecting the arrival and after the recording, the mobile computing device in the mute mode or the vibrate mode.
  • the method includes a step 925 of detecting a departure of the mobile computing device from the predetermined location after the step 920 .
  • the method includes a step 930 of restoring, in response to the step of detecting the departure in step 925 , the mobile computing device to the recorded audio output volume setting.
  • the step 905 of receiving the request is performed while the mobile computing device is at the predetermined location.
  • the step 905 of receiving the request is performed while the mobile computing device is located remotely from the predetermined location.
  • the method 900 may further include a step of identifying, in response to user input and before the receiving the request, the predetermined location from an electronic map.
  • the step 905 of receiving the request is performed such that a facility at the predetermined location belongs to one of the following categories: a church, a movie theatre, and a doctor's office.
  • the steps 910 and 925 of detecting the arrival and the detecting the departure each include a step of determining a location of the mobile computing device via a Global Positioning System sensor.
  • steps 905 - 930 of the method 900 are not necessarily performed in numerical order. It is also understood that additional process steps may be performed before, during, or after the steps 905 - 930 in FIG. 33 . For reasons of simplicity, these additional process steps are not discussed in detail herein.
  • FIG. 34 is a simplified block diagram of an electronic device 1300 according to the various aspects of the present disclosure.
  • the electronic device 1300 may be implemented as an embodiment of the mobile computing device 100 discussed above.
  • the electronic device 1300 includes a telecommunications module 1310 .
  • the telecommunications module 1310 contains various electronic circuitry components configured to conduct telecommunications with one or more external devices.
  • the electronic circuitry components allow the telecommunications module 1310 to conduct telecommunications in one or more of the wired or wireless telecommunications protocols, including communications protocols such as IEEE 802.11 (WiFi), IEEE 802.15 (Bluetooth), GSM, CDMA, LTE, WIMAX, DLNA, HDMI, etc.
  • the telecommunications module 1310 includes antennas, filters, low-noise amplifiers, digital-to-analog converters (DACs), analog-to-digital converters (ADCs), and transceivers.
  • the transceivers may further include circuitry components such as mixers, amplifiers, oscillators, phase-locked loops (PLLs), and/or filters. Some of these electronic circuitry components may be integrated into a single discrete device or an integrated circuit (IC) chip.
  • the electronic device 1300 may include a computer memory storage module 1320 .
  • the memory storage module 1320 may contain various forms of digital memory, such as hard disks, FLASH, SRAM, DRAM, ROM, EPROM, memory chips or cartridges, etc.
  • Computer programming code may be permanently or temporarily stored in the memory storage module 1320 , for example.
  • the computer memory storage module 1320 may include a cache memory where files can be temporarily stored.
  • the electronic device 1300 may also include a computer processing module 1330 .
  • the computer processing module 1330 may contain one or more central processing units (CPUs), graphics processing units (GPUs), or digital signal processors (DSPs), which may each be implemented using various digital circuit blocks (including logic gates such as AND, OR, NAND, NOR, XOR gates, etc.) along with certain software code.
  • the computer processing module 1330 may be used to execute the computer programming code stored in the memory storage module 1320 .
  • the electronic device 1300 may also include an input/output module 1340 , which may serve as a communications interface for the electronic device 1300 .
  • the input/output module 1340 may include one or more touch-sensitive screens, physical and/or virtual buttons (such as power and volume buttons) on or off the touch-sensitive screen, physical and/or virtual keyboards, mice, trackballs, speakers, microphones, light sensors, light-emitting diodes (LEDs), communications ports (such as USB or HDMI ports), joysticks, image-capture devices (for example, cameras), etc.
  • the touch-sensitive screen may be used to display visual objects discussed above.
  • a non-touch screen display may be implemented as a part of the input/output module 1340 .
  • FIG. 35 is a simplified diagrammatic view of a system 1400 that may be used to perform certain aspects of the present disclosure discussed above.
  • the system 1400 may include an electronic device 1410 .
  • the electronic device 1410 may be implemented as an embodiment of the electronic device 1300 of FIG. 34 .
  • the electronic device 1410 includes a tablet computer, a mobile telephone, a laptop, a smart watch, or smart glasses.
  • the system 1400 also includes a remote server 1420 .
  • the remote server 1420 may be implemented in a “cloud” computing environment and may include one or more databases that store files, for example the various files that can also be stored locally in the electronic device 1410 as discussed above.
  • the electronic device 1410 and the remote server 1420 may be communicatively coupled together through a network 1430 .
  • the network 1430 may include cellular towers, routers, switches, hubs, repeaters, storage units, cabling (such as fiber-optic cabling or telephone cabling), and other suitable devices.
  • the network 1430 may be implemented using any of the suitable wired or wireless networking protocols.
  • the electronic device 1410 and the remote server 1420 may also be able to communicate with other devices on the network 1430 and either carry out instructions received from the network, or send instructions through the network to these external devices to be carried out.
  • a service provider that hosts or operates the remote server 1420 may provide a user interface module 1440 .
  • the user interface module 1440 may include software programming code and may be installed on the electronic device 1410 (for example in a memory storage module).
  • the user interface module 1440 may include a downloadable "app", for example an app that is downloadable through a suitable service such as APPLE's® ITUNES®, THE APP STORE® from APPLE®, ANDROID's® PLAY STORE®, AMAZON's® INSTANT VIDEO®, MICROSOFT's® WINDOWS STORE®, RESEARCH IN MOTION's® BLACKBERRY APP WORLD®, etc.
  • the user interface module 1440 includes an instance of the “app” that has been downloaded and installed on the electronic device 1410 .
  • the app may also be used to perform the various aspects of the present disclosure discussed above, such as with respect to face-unlock, voice-unlock, dynamically adjustable screen time-out, customizable lock screen, and/or the contextually-aware volume controls discussed above.
  • a user 1450 may interact with the system 1400 by sending instructions to the electronic device 1410 through the user interface module 1440 .
  • the user 1450 may be a subscriber of the services offered by the service provider running/hosting/operating the remote server 1420 .
  • the user 1450 may attempt to log in to the remote server 1420 by launching the “app” of the user interface 1440 .
  • the user's login credentials are electronically sent to the remote server 1420 through the network 1430 .
  • the remote server 1420 may instruct the user interface module 1440 to display a suitable interface to interact with the user in a suitable manner.
  • the mobile computing device includes: a computer memory storage module configured to store executable computer programming code; and a computer processor module operatively coupled to the computer memory storage module.
  • the computer processor module is configured to execute the computer programming code to perform the following operations: receiving, from a user, a request to gain access to a mobile computing device; detecting, via the mobile computing device, an ambient lighting condition; comparing the detected ambient lighting condition with a predefined threshold; determining that the detected ambient lighting condition is below the predefined threshold; performing at least one of the following tasks in response to the determining: activating a front-facing light-emitting diode (LED) mechanism of the mobile computing device, or illuminating at least a portion of a screen of the mobile computing device; and performing, while the LED mechanism is activated or while the portion of the screen of the mobile computing device is illuminated, a face-unlock action on the mobile computing device to authenticate the user.
  • One more aspect of the present disclosure involves a method.
  • the method includes: receiving, from a user, a request to gain access to a mobile computing device; detecting, via the mobile computing device, an ambient lighting condition; comparing the detected ambient lighting condition with a predefined threshold; determining that the detected ambient lighting condition is below the predefined threshold; performing at least one of the following tasks in response to the determining: activating a front-facing light-emitting diode (LED) mechanism of the mobile computing device, or illuminating at least a portion of a screen of the mobile computing device; and performing, while the LED mechanism is activated or while the portion of the screen of the mobile computing device is illuminated, a face-unlock action on the mobile computing device to authenticate the user.
  • One more aspect of the present disclosure involves a mobile computing device.
  • the mobile computing device includes: a computer memory storage module configured to store executable computer programming code; and a computer processor module operatively coupled to the computer memory storage module.
  • the computer processor module is configured to execute the computer programming code to perform the following operations: engaging in a voice-based interaction with a user of the mobile computing device; recording a spoken phrase from the user during the voice-based interaction; selecting a segment of the spoken phrase as a password for authenticating the user; saving a recording of the segment of the spoken phrase as a recorded password in a database; receiving, from the user, a request to gain access to the mobile computing device; prompting, in response to the receiving the request, the user to speak the password to the mobile computing device; thereafter recording one or more words spoken by the user in response to the prompting; comparing the one or more words spoken by the user with the recorded password; and authenticating the user in response to the comparing.
  • One more aspect of the present disclosure involves a method.
  • the method includes: engaging in a voice-based interaction with a user of the mobile computing device; recording a spoken phrase from the user during the voice-based interaction; selecting a segment of the spoken phrase as a password for authenticating the user; saving a recording of the segment of the spoken phrase as a recorded password in a database; receiving, from the user, a request to gain access to the mobile computing device; prompting, in response to the receiving the request, the user to speak the password to the mobile computing device; thereafter recording one or more words spoken by the user in response to the prompting; comparing the one or more words spoken by the user with the recorded password; and authenticating the user in response to the comparing.
  • One more aspect of the present disclosure involves a mobile computing device.
  • the mobile computing device includes: a computer memory storage module configured to store executable computer programming code; and a computer processor module operatively coupled to the computer memory storage module.
  • the computer processor module is configured to execute the computer programming code to perform the following operations: receiving a default screen time-out period for a mobile computing device, the screen time-out period specifying an amount of time that passes without the mobile computing device receiving any touch input from a user before a screen of the mobile computing device turns off; displaying content on the screen of the mobile computing device; dimming the screen after the content has been displayed, without receiving any touch input from the user, for an amount of time shorter than the default screen time-out period by X number of seconds; thereafter receiving a touch input from the user; un-dimming the screen in response to receiving the touch input; and increasing, in response to receiving the touch input, the default screen time-out period to a first new screen time-out period, the first new screen time-out period being longer than the default screen time-out period.
  • One more aspect of the present disclosure involves a method.
  • the method includes: receiving a default screen time-out period for a mobile computing device, the screen time-out period specifying an amount of time that passes without the mobile computing device receiving any touch input from a user before a screen of the mobile computing device turns off; displaying content on the screen of the mobile computing device; dimming the screen after the content has been displayed, without receiving any touch input from the user, for an amount of time shorter than the default screen time-out period by X number of seconds; thereafter receiving a touch input from the user; un-dimming the screen in response to receiving the touch input; and increasing, in response to receiving the touch input, the default screen time-out period to a first new screen time-out period, the first new screen time-out period being longer than the default screen time-out period.
  • One more aspect of the present disclosure involves a mobile computing device.
  • the mobile computing device includes: a computer memory storage module configured to store executable computer programming code; and a computer processor module operatively coupled to the computer memory storage module.
  • the computer processor module is configured to execute the computer programming code to perform the following operations: setting a screen time-out period for a mobile computing device, the screen time-out period specifying an amount of time that passes without the mobile computing device receiving any touch input from a user before a screen of the mobile computing device turns off; displaying content on the screen of the mobile computing device; monitoring a frequency of touch inputs from the user while the content is being displayed; and adjusting the screen time-out period in response to the monitoring.
  • One more aspect of the present disclosure involves a method.
  • the method includes: setting a screen time-out period for a mobile computing device, the screen time-out period specifying an amount of time that passes without the mobile computing device receiving any touch input from a user before a screen of the mobile computing device turns off; displaying content on the screen of the mobile computing device; monitoring a frequency of touch inputs from the user while the content is being displayed; and adjusting the screen time-out period in response to the monitoring.
  • One more aspect of the present disclosure involves a mobile computing device.
  • the mobile computing device includes: a computer memory storage module configured to store executable computer programming code; and a computer processor module operatively coupled to the computer memory storage module.
  • the computer processor module is configured to execute the computer programming code to perform the following operations: associating a plurality of tasks executable by a mobile computing device with a plurality of predefined symbols, respectively; turning on a touch-sensitive display of the mobile computing device in response to a request from a user, the display showing a lock screen; detecting, through the touch-sensitive display, a gesture input from the user, the gesture input representing one of the predefined symbols that has been associated with one of the tasks; and executing the task associated with the detected symbol from the gesture input.
  • One more aspect of the present disclosure involves a method.
  • the method includes: associating a plurality of tasks executable by a mobile computing device with a plurality of predefined symbols, respectively; turning on a touch-sensitive display of the mobile computing device in response to a request from a user, the display showing a lock screen; detecting, through the touch-sensitive display, a gesture input from the user, the gesture input representing one of the predefined symbols that has been associated with one of the tasks; and executing the task associated with the detected symbol from the gesture input.
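  • By way of illustration only, the symbol-to-task association summarized above might be modeled as a simple lookup table, as in the following Python sketch. Gesture recognition itself is abstracted away, and the symbol names and task callables are hypothetical, not part of the disclosure.

```python
# Illustrative sketch only: associating predefined lock-screen symbols
# with tasks via a lookup table. The symbols and tasks are hypothetical.

tasks_by_symbol = {
    "C": lambda: print("launching camera"),
    "M": lambda: print("launching music player"),
    "W": lambda: print("launching web browser"),
}

def handle_gesture(recognized_symbol):
    """Execute the task associated with a symbol traced on the lock screen."""
    task = tasks_by_symbol.get(recognized_symbol)
    if task is not None:
        task()
    # an unrecognized gesture falls through to the normal lock screen

handle_gesture("C")   # a "C" traced on the lock screen launches the camera
```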
  • One more aspect of the present disclosure involves a mobile computing device.
  • the mobile computing device includes: a computer memory storage module configured to store executable computer programming code; and a computer processor module operatively coupled to the computer memory storage module.
  • the computer processor module is configured to execute the computer programming code to perform the following operations: determining that a mobile computing device has been placed in a vehicle, the mobile computing device having a programmable audio output volume; and in response to the determining, automatically increasing the audio output volume of the mobile computing device to a predefined audio output volume.
  • One more aspect of the present disclosure involves a method.
  • the method includes: determining that a mobile computing device has been placed in a vehicle, the mobile computing device having a programmable audio output volume; and in response to the determining, automatically increasing the audio output volume of the mobile computing device to a predefined audio output volume.
  • One more aspect of the present disclosure involves a mobile computing device.
  • the mobile computing device includes: a computer memory storage module configured to store executable computer programming code; and a computer processor module operatively coupled to the computer memory storage module.
  • the computer processor module is configured to execute the computer programming code to perform the following operations: receiving a request from a user of the mobile computing device, the request specifying that the mobile computing device should be set in a mute mode or in a vibrate mode when the mobile computing device is at a predetermined location; thereafter detecting an arrival of the mobile computing device at the predetermined location; recording, in response to the detecting the arrival, an audio output volume setting of the mobile computing device; setting, in response to the detecting the arrival and after the recording, the mobile computing device in the mute mode or the vibrate mode; thereafter detecting a departure of the mobile computing device from the predetermined location; and restoring, in response to the detecting the departure, the mobile computing device to the recorded audio output volume setting.
  • One more aspect of the present disclosure involves a method.
  • the method includes: receiving a request from a user of the mobile computing device, the request specifying that the mobile computing device should be set in a mute mode or in a vibrate mode when the mobile computing device is at a predetermined location; thereafter detecting an arrival of the mobile computing device at the predetermined location; recording, in response to the detecting the arrival, an audio output volume setting of the mobile computing device; setting, in response to the detecting the arrival and after the recording, the mobile computing device in the mute mode or the vibrate mode; thereafter detecting a departure of the mobile computing device from the predetermined location; and restoring, in response to the detecting the departure, the mobile computing device to the recorded audio output volume setting.
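  • By way of illustration only, the record/mute/restore sequence summarized above (and in the Abstract) might be sketched as follows in Python. The Device class and the saved-state dictionary are hypothetical stand-ins for platform audio APIs and persistent storage; arrival and departure detection (e.g., by geofencing) is abstracted away.

```python
# Illustrative sketch only: record the volume on arrival at the
# predetermined location, switch to the quiet mode, and restore the
# recorded volume on departure.

class Device:
    def __init__(self):
        self.volume = 7          # current audio output volume setting (0-10)
        self.mode = "normal"     # "normal", "mute", or "vibrate"

def on_arrival(device, saved_state, quiet_mode="vibrate"):
    """Record the current volume setting, then enter the user-requested quiet mode."""
    saved_state["volume"] = device.volume   # record before overriding
    device.mode = quiet_mode

def on_departure(device, saved_state):
    """Restore the volume setting recorded on arrival."""
    device.mode = "normal"
    device.volume = saved_state.get("volume", device.volume)

device, saved = Device(), {}
on_arrival(device, saved)     # e.g., a geofence around a library is entered
assert device.mode == "vibrate"
on_departure(device, saved)   # the geofence is exited
assert device.mode == "normal" and device.volume == 7
```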

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Environmental & Geological Engineering (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A request is received from a user of the mobile computing device. The request specifies that the mobile computing device should be set in a mute mode or in a vibrate mode when the mobile computing device is at a predetermined location. Thereafter, an arrival of the mobile computing device at the predetermined location is detected. In response to the detecting the arrival, an audio output volume setting of the mobile computing device is recorded. In response to the detecting the arrival and after the recording, the mobile computing device is set in the mute mode or the vibrate mode. Thereafter, a departure of the mobile computing device from the predetermined location is detected. In response to the detecting the departure, the mobile computing device is restored to the recorded audio output volume setting.

Description

    PRIORITY DATA
  • The present application is a divisional application of U.S. patent application Ser. No. 13/935,034, filed on July 3, entitled “Automatic Volume Control Based on Context and Location”, the disclosure of which is hereby incorporated by reference in its entirety.
  • BACKGROUND
  • Technical Field
  • The present disclosure generally relates to improving the interface and usability of a mobile computing device, such as a smartphone or a tablet computer.
  • Related Art
  • In recent years, the rapid advances in computer technology and broadband telecommunications have enhanced the popularity of mobile computing devices such as tablet computers and smartphones. Among other things, these mobile computing devices can be used to browse the web, play games, music, or videos, take pictures, send/receive emails, etc. However, existing mobile computing devices may still have various drawbacks that limit their usability or versatility. These drawbacks may lead to user frustration.
  • Therefore, while existing mobile computing devices have been generally adequate for their intended purposes, they have not been entirely satisfactory in every aspect.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a diagrammatic view of an example mobile computing device and a user for performing a face-unlock according to various aspects of the present disclosure.
  • FIG. 2 is a flowchart illustrating an example method for using the mobile computing device to perform a face-unlock according to various aspects of the present disclosure.
  • FIGS. 3-6 are diagrammatic views of an example mobile computing device and a user for performing a voice-unlock according to various aspects of the present disclosure.
  • FIG. 7 is a flowchart illustrating an example method for using the mobile computing device to perform a voice-unlock according to various aspects of the present disclosure.
  • FIGS. 8-9 are diagrammatic views of an example mobile computing device for performing a dynamically adjustable screen time-out according to various aspects of the present disclosure.
  • FIGS. 10-11 are flowcharts illustrating example methods for using the mobile computing device to perform a dynamically adjustable screen time-out according to various aspects of the present disclosure.
  • FIGS. 12, 13A-16A, 13B-16B, 17-20, 21A-21B, 22-24, and 25A-25B are diagrammatic views of various interfaces of an example mobile computing device for an improved lock screen according to various aspects of the present disclosure.
  • FIG. 26 is a flowchart illustrating an example method for launching applications from a lock screen of the mobile computing device according to various aspects of the present disclosure.
  • FIGS. 27-28 are diagrammatic views of various environments in which a mobile computing device performs contextually-aware automatic volume adjustment according to various aspects of the present disclosure.
  • FIG. 29 is a simplified flowchart illustrating a method of using a mobile computing device to perform a contextually-aware automatic volume adjustment according to various aspects of the present disclosure.
  • FIG. 30 is a diagrammatic view of an example environment in which a mobile computing device performs locationally-aware automatic volume adjustment according to various aspects of the present disclosure.
  • FIGS. 31-32 are diagrammatic views of various interfaces of an example mobile computing device for performing locationally-aware automatic volume adjustment according to various aspects of the present disclosure.
  • FIG. 33 is a simplified flowchart illustrating a method of using a mobile computing device to perform locationally-aware automatic volume adjustment according to various aspects of the present disclosure.
  • FIG. 34 is a simplified block diagram of an example mobile computing device for performing one or more of the processes of FIGS. 1-33 according to various aspects of the present disclosure.
  • FIG. 35 is a simplified block diagram of an example system for performing one or more of the processes of FIGS. 1-33 according to various aspects of the present disclosure.
  • DETAILED DESCRIPTION
  • It is to be understood that the following disclosure provides many different embodiments, or examples, for implementing different features of the present disclosure. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. Various features may be arbitrarily drawn in different scales for simplicity and clarity. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
  • As used herein, the term “about” refers to a +/−5% variation from the nominal value. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly and specifically indicates otherwise. In addition, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
  • In recent years, the rapid advances in computer technology and broadband telecommunications have led to the growing popularity of mobile computing devices such as tablet computers and mobile telephones. A user of these mobile computing devices can perform a plurality of tasks on these mobile computing devices, for example tasks that previously required a conventional desktop or laptop computer. Among other things, a user can play movies/videos, browse the web, play games, view photographs, listen to digital music, read e-books, receive navigational instructions, send and receive emails, conduct audio or video telephone calls, perform word processing/spreadsheet calculation/presentation management tasks, or take advantage of additional functionalities offered by applications (apps) that can be downloaded from online app stores.
  • However, the mobile computing devices may still have certain drawbacks that limit the versatility and the usability of these devices. The present disclosure offers solutions to overcome these drawbacks, as discussed in more detail below.
  • Maintaining secured access to a user's mobile computing device (e.g., a smartphone or a tablet computer) has always been a challenge to device manufacturers (or to the makers of the operating systems that run on these devices). For example, it may be desirable to limit access to a smartphone or a tablet computer to an authorized user only. Granting such access may be referred to as “unlocking” the smartphone or tablet computer. Some methods of adding security to the unlocking process involve password protection; that is, a user must input a correct password to gain access to the smartphone or tablet computer. Recently, a “face-unlock” method has also been employed to unlock a smartphone or tablet computer. To perform the face-unlock process, a user lets the camera of the smartphone or tablet computer capture a facial image of the user. The smartphone or tablet computer then compares the captured facial image to a previously-stored facial image and determines whether the user who is attempting to unlock the device is the authorized user having the previously-stored facial image.
  • Although the face-unlock process has proven useful to some degree, one drawback is its inconsistent performance under poor lighting conditions. For example, if the user attempts to perform the face-unlock process in a poorly-lit environment, the camera of the smartphone or tablet computer oftentimes cannot capture a sufficiently clear facial image of the user, and the face-unlock process may fail as a result. The user may then be asked to perform an alternative method of unlocking the phone. This is frustrating to the user, who may abandon the face-unlock process altogether after suffering through a few failed attempts.
  • The present disclosure proposes methods and devices that offer an improved face-unlock experience for the user.
  • Referring to FIG. 1, a simplified diagrammatic view of a mobile computing device 100 is illustrated. In some embodiments, the mobile computing device 100 may be a laptop computer, or a tablet computer (for example, APPLE's® IPAD®, an ANDROID® tablet, a WINDOWS® powered tablet, or a BLACKBERRY® tablet), or a mobile telephone (for example, APPLE's® IPHONE®, an ANDROID® smartphone, a WINDOWS® smartphone, or a BLACKBERRY® smartphone), or a wearable electronic device (e.g., a smart watch or smart glasses).
  • In some embodiments, the mobile computing device 100 may include a touch-sensitive display (or touch screen) 110 for displaying one or more visual objects. However, it is understood that the various aspects of the present disclosure may apply to a non-touch screen display as well. For example, whereas a touch screen device may detect user input via sensing the contact and the movement of the user's fingers or a stylus on the touch screen, a non-touch screen device may detect user input via more traditional mechanisms such as a mouse, a keyboard, a remote control, a gesture, or voice commands.
  • The mobile computing device 100 may further include a camera 120. The camera 120 may be any type of image-capturing device suitable for implementation on a smartphone or a tablet computer, for example a camera that contains a CMOS image sensor. The camera 120 is configured to capture both still shots (photographs) and motion video of a person or an object.
  • The mobile computing device 100 also includes a lighting mechanism 130 that is capable of producing light. In the embodiment shown in FIG. 1, the lighting mechanism 130 includes a plurality of light-emitting diodes (LEDs), but it may include additional or other types of light-producing devices in alternative embodiments. When activated, the LEDs emit light. The intensity and color of the emitted light can be controlled by software implemented on the mobile computing device 100. It is understood that both the camera 120 and the lighting mechanism 130 are implemented on a “front” side of the mobile computing device 100 (i.e., the same side as the display 110). However, the mobile computing device 100 may further include a different camera and/or another lighting mechanism on the back side of the mobile computing device 100 as well.
  • The mobile computing device 100 further includes an ambient light sensor 140. The ambient light sensor 140 is configured to gauge how much light is available in an area near the mobile computing device. In other words, the ambient light sensor 140 can be used to determine whether the mobile computing device 100 is located in a well-lit or a poorly-lit environment.
  • Suppose a user 150 now wishes to perform a face-unlock process. The user 150 may press a home button 160 or a power button 170 of the mobile computing device 100, which may turn on the display 110 and thereafter initiate the face-unlock process automatically. In some embodiments, however, the user 150 may also be first asked to perform another task before the face-unlock process is initiated. For example, the user 150 may be asked to swipe an object on the display 110 after the home button 160 or the power button 170 is pressed before the face-unlock process is initiated. By performing these actions, the user 150 essentially sends a request to the mobile computing device 100 that he/she wishes to gain access to the mobile computing device.
  • After the mobile computing device 100 receives the user's request to gain access to the device, it instructs the ambient light sensor 140 to sense or detect an ambient light condition near the mobile computing device 100. The detected ambient light condition is then compared to a predefined lighting condition threshold. For example, the predefined lighting condition threshold may be a minimum lighting condition that allows a user's face to be clearly captured by the camera 120. In some embodiments, this predefined lighting condition threshold may be preset by a manufacturer of the mobile computing device 100 or by the maker of the operating system running on the mobile computing device 100.
  • In other embodiments, the predefined lighting condition threshold is actually determined through a calibration process. For example, at the time when the user 150 initially configures (or sets up) the face-unlock action with the mobile computing device 100, the user 150 may be asked to position his/her face in front of the mobile computing device 100 such that his/her face can be captured by the camera 120. While the user's face is being continuously captured by the camera 120, the mobile computing device 100 turns on the lighting mechanism 130 and gradually increases its lighting output. The increasing of the light output of the lighting mechanism 130 causes the ambient lighting situation to continuously improve as well. Thus, the ambient light sensor 140 is instructed to continuously detect the ambient lighting condition while the light output of the lighting mechanism 130 is increased. At some point, the camera 120 can capture a satisfactorily clear facial image of the user 150, and the ambient lighting condition detected by the ambient light sensor 140 corresponding to the satisfactory capture of the user's facial image is stored in the mobile computing device 100 as the predefined lighting condition threshold. To ensure the accuracy of the calibration process, the user 150 may be asked to go into a dimly-lit environment to perform the calibration process.
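  • By way of illustration only, the calibration loop described above might be sketched as follows in Python. The Light, Camera, and AmbientSensor classes are hypothetical stand-ins for the device hardware interfaces, and the clarity check is a placeholder for whatever image-quality metric the face-capture pipeline actually uses.

```python
# Illustrative sketch only: raise the light output until the camera
# reports a clear facial capture; the ambient reading at that moment is
# stored as the predefined lighting condition threshold.

class Light:
    def __init__(self): self.output = 0.0
    def set_output(self, level): self.output = level

class Camera:
    def __init__(self, needed_light): self.needed_light = needed_light
    def capture_is_clear(self, light): return light.output >= self.needed_light

class AmbientSensor:
    def __init__(self, base): self.base = base
    def read(self, light): return self.base + light.output  # more LED light -> brighter reading

def calibrate_threshold(light, camera, sensor, step=0.05):
    level = 0.0
    while level <= 1.0:
        light.set_output(level)
        if camera.capture_is_clear(light):
            return sensor.read(light)   # store this reading as the threshold
        level += step
    return sensor.read(light)           # fall back to the brightest reading

light = Light()
threshold = calibrate_threshold(light, Camera(needed_light=0.4), AmbientSensor(base=0.1))
print(f"predefined lighting condition threshold: {threshold:.2f}")
```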
  • Returning to FIG. 1, regardless of how the predefined lighting condition threshold is determined, the mobile computing device 100 compares the detected ambient lighting condition with the predefined lighting condition threshold. Based on the comparison, the mobile computing device 100 determines whether the ambient lighting condition is below the predefined lighting condition threshold. If not, the mobile computing device 100 may continue with the face-unlock action, in which the user 150 may be asked to position his/her face in front of the mobile computing device 100 such that his/her facial image 180 is within a specified location of the display 110, for example within the dotted lines as shown in FIG. 1. In other words, if the detected ambient lighting condition reveals that the mobile computing device 100 (and therefore the user 150) is located in a well-lit environment, the face-unlock action is unlikely to encounter any problems and may therefore continue “normally.”
  • On the other hand, if the mobile computing device 100 determines the ambient lighting condition is below the predefined lighting condition threshold, the mobile computing device 100 activates the lighting mechanism 130 such that it emits light. The light is sufficient to clearly illuminate the user 150's face. The mobile computing device 100 then performs the face-unlock action for the user 150 while the light is emitted by the lighting mechanism 130. By doing so, the user 150's face can be satisfactorily captured by the camera 120 (i.e., captured as a clear facial image 180) even if the user 150 and the mobile computing device 100 are located in a poorly-lit environment. Therefore, the face-unlock action may be performed with a substantially reduced failure rate. It is understood that the specific sequence of activating the lighting mechanism 130 and prompting the user 150 to position his/her face in front of the camera 120 is not important. One can be performed before the other or vice versa, or they can be performed simultaneously.
  • In some embodiments, instead of activating the lighting mechanism 130 (e.g., LEDs) to emit light, the mobile computing device 100 may turn on a portion of (or all of) the display 110. The display 110 of the mobile computing device 100 may contain a plurality of LEDs each capable of emitting light, or some other light-producing element. As such, the mobile computing device 100 may instruct a portion of the display 110 (for example the portion outside of the dotted lines) to emit bright white light, which also serves to illuminate the user 150's face. In other words, the illuminated display 110 serves a similar function as the light emitted by the lighting mechanism 130, and vice versa. Hence, they can be used interchangeably, or in combination with each other to achieve even more light in certain embodiments.
  • Regardless of how the light is output by the mobile computing device 100, the user 150's face is captured as a clear facial image 180, which may then be compared with a stored facial image of the user 150 to determine if the user 150 is an authorized user. The stored facial image of the user 150 may be generated when the user 150 initially sets up the face-unlock for the mobile computing device 100, for example during the calibration process discussed above. In some embodiments, the mobile computing device 100 may capture a “live” video of the facial image 180 of the user 150 and compare the video to the stored facial image (or video) of the authorized user. In that sense, the term “facial image” herein may refer to both a still photograph and a motion video.
  • Assuming the user 150 is an authorized user, he/she may then be authenticated and gain access to the mobile computing device 100 even if he/she is in a poorly-lit environment. By detecting the ambient lighting environment and implementing a corresponding light illumination, the present disclosure substantially improves the reliability of a face-unlock action for the mobile computing device 100 and enhances the likelihood that it will be actually used by prospective users as a security-protection mechanism for accessing the mobile computing device 100.
  • FIG. 2 is a simplified flowchart illustrating a method 200 for performing the face-unlock process discussed above. One or more steps of the method 200 are performed by a mobile computing device of the user. In some embodiments, the mobile computing device includes a mobile telephone, a tablet computer, a laptop computer, or a wearable electronic device.
  • The method 200 includes a step 205, in which a request to gain access to a mobile computing device is received from a user. The method 200 includes a step 210, in which an ambient lighting condition is detected via the mobile computing device. The method 200 includes a step 215, in which the predefined threshold is determined through a calibration process. The method 200 includes a step 220, in which the detected ambient lighting condition is compared with a predefined threshold. The method 200 includes a step 225, in which the detected ambient lighting condition is determined to be below the predefined threshold. The method 200 includes a step 230, in which at least one of the following tasks is performed in response to the step 225: activating a front-facing light-emitting diode (LED) mechanism of the mobile computing device, or illuminating at least a portion of a screen of the mobile computing device. The method 200 includes a step 235, in which a face-unlock action is performed on the mobile computing device to authenticate the user. The step 235 is performed while the LED mechanism is activated or while the portion of the screen of the mobile computing device is illuminated.
  • In some embodiments, the step 235 includes the following steps: activating a front-facing camera of the mobile computing device; displaying, on the screen of the mobile computing device, a facial image of the user captured by the front-facing camera; prompting the user to move the captured facial image to a specified location on the screen; comparing the captured facial image to a stored facial image of the user; granting access to the user if the captured facial image matches the stored facial image of the user; and denying access to the user if the captured facial image fails to match the stored facial image of the user.
  • In some embodiments, the predefined threshold in step 215 is a minimum lighting condition that allows the user's face to be clearly captured by a camera of the mobile computing device. In some embodiments, the step of receiving the request from the user in step 205 includes detecting the mobile computing device being powered on. In some embodiments, the step of detecting the ambient lighting condition in step 210 is performed using an ambient light sensor implemented on the mobile computing device.
  • It is understood that additional process steps may be performed before, during, or after the steps 205-235 in FIG. 2, but they are not specifically illustrated in FIG. 2 for reasons of simplicity. It is also understood that, unless otherwise specified, the steps 205-235 of the method 200 are not necessarily performed in numerical order.
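  • By way of illustration only, the overall flow of the method 200 might be sketched in Python as below. The device attributes (ambient reading, lighting threshold, captured and stored facial images) are hypothetical stand-ins for the hardware and data described above, and the face comparison is deliberately reduced to a simple equality test.

```python
# Illustrative sketch only: detect ambient light, add illumination if it
# falls below the predefined threshold, then run the face-unlock capture.

class FaceUnlockDevice:
    """Hypothetical container for the values used by method 200."""
    def __init__(self, ambient, threshold, captured_face, stored_face):
        self.ambient = ambient              # detected ambient lighting level (step 210)
        self.threshold = threshold          # predefined lighting threshold (step 215)
        self.captured_face = captured_face  # what the camera would capture
        self.stored_face = stored_face      # enrolled facial image
        self.led_on = False

def face_unlock(dev):
    # Steps 220-225: compare the detected ambient light with the threshold.
    if dev.ambient < dev.threshold:
        # Step 230: add illumination (a front-facing LED here; illuminating
        # a portion of the screen would serve the same purpose).
        dev.led_on = True
    # Step 235: capture and compare while the illumination is active.
    granted = dev.captured_face == dev.stored_face
    dev.led_on = False
    return granted

dev = FaceUnlockDevice(ambient=0.1, threshold=0.3,
                       captured_face="alice", stored_face="alice")
print(face_unlock(dev))   # True: access granted despite the dim environment
```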
  • The discussions above with reference to FIGS. 1-2 describe an improved face-unlock process according to the present disclosure. In addition to face-unlock, the present disclosure also offers an improved voice-unlock process. In more detail, a voice-unlock process generally involves recording a voice-based password from a user, who will then be prompted to speak that recorded voice-based password when such user wants to access the mobile computing device. However, such voice-based password is a static password. That is, once the mobile computing device of the user (or a remote server) stores the user-spoken voice-based password, it remains the password unless the user decides to change it. The static nature of the voice-based password decreases reliability and security of the voice-unlock process. For example, a hacker may somehow obtain that static voice-based password (e.g., through a secret recording) and may then use that password to gain illegal access to the user's mobile computing device. In other words, when the password always remains the same, it is more prone to theft or cracking, and therefore does not offer the optimal security protection for a user.
  • In comparison, the present disclosure offers a dynamic voice-unlock process. The voice-based password dynamically changes from time to time, which may occur without the user actively demanding or initiating the change. The dynamic voice-unlock process is discussed in more detail below with reference to FIGS. 3-7. Similar elements appearing in FIGS. 1, 3-6 are labeled the same for reasons of clarity and consistency.
  • Referring to FIG. 3, the user 150 of a mobile computing device 100 may engage in voice-based interactions with the mobile computing device 100 periodically. In some embodiments, these voice-based interactions include voice-based commands issued by the user 150 to initiate tasks to be performed by the mobile computing device 100. For example, the user 150 may issue a voice command such as “call dad's work phone”, “set a reminder for me to do laundry at 5:30 tonight”, “send a text message to Chris”, “turn up the volume”, “navigate me to 100 Drury lane” etc. The user may also issue a query to the mobile computing device 100, for example “what is the score of the Yankees-Red Sox game last night”, “what is the weather for this weekend”, “how many feet are in a mile”, etc. In some other embodiments, the voice-based interactions include voice-dictations made by the user 150. For example, the mobile computing device 100 may allow the user 150 to dictate the content of an email, a text, or a voice search. With a click of a button, the user 150 may dictate, as an example text message, “I am unable to make it to dinner tonight, sorry guys, maybe next time.” In yet other embodiments, the voice-based interaction may include a telephone call made by the user 150.
  • According to the various aspects of the present disclosure, the mobile computing device 100 may be activated periodically to record a spoken phrase 250 from the user 150 while the mobile computing device 100 is engaged in the voice-based interactions with the user 150. For example, while the user 150 is issuing the command “set a reminder for me to do laundry at 5:30 tonight,” the mobile computing device 100 is activated to record this phrase. As another example, while the user 150 is issuing the query “what is the score of the Yankees-Red Sox game last night,” the mobile computing device 100 is activated to record this phrase as well. As yet another example, while the user 150 is dictating the text message “I can't make it to dinner tonight, sorry guys, maybe next time”, the mobile computing device 100 is activated to record this phrase as well.
  • In some embodiments, the mobile computing device 100 records the spoken voice phrase 250 from the voice-based interaction only if the phrase 250 is intelligible or comprehensible. That is, the mobile computing device 100 needs to have a high confidence level that the voice phrase it recorded is indeed what it “thinks” it is. For example, this may be verified by the mobile computing device 100 repeating (either by voice or text) the phrase and then prompting the user 150 to confirm that the recorded phrase is indeed what the user 150 meant to say. In other embodiments, the mobile computing device 100 may initially record voice phrases 250 that may or may not be 100% accurate. The user's subsequent actions may then help the mobile computing device 100 determine the accuracy of the recorded phrase. For example, if the user dictates a sentence, the mobile computing device 100 displays that sentence on the display 110, and the user 150 does not enter any corrections to that sentence, the mobile computing device 100 may deem the sentence correctly entered. In other words, the mobile computing device 100 may deem that the voice phrase 250 spoken by the user 150 has been accurately captured.
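  • By way of illustration only, the “no corrections” heuristic above might look like the following Python sketch, where the editable-display call is a hypothetical UI hook.

```python
# Illustrative sketch only: treat the capture as accurate when the user
# submits the displayed transcription without editing it.

def phrase_captured_accurately(show_editable, transcribed):
    final_text = show_editable(transcribed)   # hypothetical UI call
    return final_text == transcribed          # no corrections => accurate

# Simulated sessions: the first user edits the transcription, the second does not.
print(phrase_captured_accurately(lambda t: t.replace("dinner", "lunch"),
                                 "I can't make it to dinner tonight"))  # False
print(phrase_captured_accurately(lambda t: t,
                                 "I can't make it to dinner tonight"))  # True
```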
  • In some embodiments, the recording of the voice phrase 250 by the user 150 is performed without alerting the user 150. Stated differently, the user 150 need not be informed that a voice phrase 250 he/she just spoke was captured or recorded by the mobile computing device 100. In other embodiments, however, the mobile computing device 100 may alert the user 150 that the mobile computing device 100 is indeed recording the voice phrase 250 spoken by the user as a part of the voice-based interaction. For example, the mobile computing device 100 may display an alert such as “your voice command is now being recorded” or something similar on the display 110.
  • According to the present disclosure, the mobile computing device 100 will use the recorded phrase 250 (or a portion thereof) to generate a dynamically-changing voice password. For example, in the example shown in FIG. 3, the voice phrase 250 is “set a reminder for me to do laundry at 5:30 tonight.” The mobile computing device 100 may then select one or more words of this phrase as the password. The selection may be done so that a random segment of the phrase 250 is selected to be the password. For example, the words “set a reminder” may be selected as a voice password. As another example, the words “do laundry” may be selected as a voice password. The selected words need not be adjacent to each other. For example, the words “do laundry tonight” may be selected as a voice password, even though the words “at 5:30” are between the words “do laundry” and “tonight.” In some embodiments, multiple segments of a single phrase may each be saved as a password. Therefore, in the example above, the words “set a reminder”, “do laundry”, and “do laundry tonight” may each be saved as a password.
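  • By way of illustration only, the random-segment selection described above might be sketched as follows in Python. Sorting the sampled indices preserves word order, so non-adjacent words (e.g., “do laundry tonight”) can form a password; the segment sizes and counts are illustrative assumptions.

```python
# Illustrative sketch only: pick random word segments of a recorded
# phrase to serve as voice passwords.
import random

def select_voice_passwords(phrase, max_words=3, count=3):
    words = phrase.split()
    passwords = set()
    while len(passwords) < count:
        k = random.randint(1, min(max_words, len(words)))
        picked = sorted(random.sample(range(len(words)), k))  # keep word order
        passwords.add(" ".join(words[i] for i in picked))
    return passwords

print(select_voice_passwords(
    "set a reminder for me to do laundry at 5:30 tonight"))
```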
  • The selected password is saved either locally in a database in a local memory storage of the mobile computing device 100, or in a remote electronic database, or both. The password may now be used as a password for performing a voice-unlocking process. Referring to FIG. 4, after the mobile computing device 100 receives a request from the user 150 to gain access to the mobile computing device (e.g., detecting the power button 170 or the home button 160 has been pressed), the mobile computing device 100 displays a message on the display 110 to prompt the user 150 to speak a password. As an example, the mobile computing device 100 may prompt the user 150 to speak any one of the following passwords “set a reminder”, “do laundry”, or “do laundry tonight.” In some other embodiments, the mobile computing device 100 may only display one of the passwords (instead of displaying multiple passwords) and prompt the user 150 to speak only that one password.
  • In response to the prompt displayed by the mobile computing device 100, the user 150 may speak a password 255. If the user 150 is the authorized user, he/she can speak the correct password with the correct voice associated with that password. In other words, the user 150 had previously spoken those words in the password displayed by the mobile computing device. Therefore, if the user 150 speaks those words again, the voice behind the words will match. On the other hand, if the user trying to gain access to the mobile computing device 100 is not the authorized user, he/she will not be able to reproduce the correct voice associated with the password. In other words, even though the user can read and understand the prompt, and therefore can speak the “correct” password, the voice associated with the spoken password 255 is from a different person. Since different people have different voices (e.g., in terms of voice amplitude and frequency), person A (e.g., an authorized user) and person B (e.g., an unauthorized user) may both speak the same word or words, but their voices will be neither identical nor substantially similar.
  • Based on this principle, the mobile computing device 100 records the password 255 spoken by the user attempting to gain access. The mobile computing device 100 then compares the recorded spoken password 255 with the previously recorded password. This comparison process involves a voice-matching process to ensure that not only does the user have to speak the correct password, but that he/she also must speak the correct password with the correct voice associated with such password. If the comparison process indicates that the voice from the password 255 matches the recorded password, then access will be granted to the user. However, if the comparison process indicates that the voice from the password 255 fails to match the recorded password, then access will be denied to the user, or the user will be required to go through an alternative authentication process (e.g., entering a password, etc.).
  • What constitutes “matching” may be set by the mobile computing device 100 using a predetermined threshold. For example, if the amplitude and/or frequency of the voice associated with the password 255 is within X % (X<=100) of the amplitude and/or frequency of the voice associated with the previously stored password, then the password 255 spoken by the user is considered to match the correct password, and access will be granted to the user. As an example, X may be in a range from about 90% to 99.99%.
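  • By way of illustration only, the X % matching rule might be sketched in Python as below, interpreting “within X %” as a ratio check on each feature. Reducing a voice to a single average amplitude and fundamental frequency is a deliberate simplification that mirrors only the comparison described above; real speaker verification uses far richer acoustic models.

```python
# Illustrative sketch only: two voices "match" when the candidate's
# amplitude and frequency are each within X of the stored password's.

def voices_match(candidate, stored, x=0.95):
    for feature in ("amplitude", "frequency"):
        ratio = candidate[feature] / stored[feature]
        if not (x <= ratio <= 1.0 / x):   # e.g., within ~5% when x = 0.95
            return False
    return True

stored = {"amplitude": 0.62, "frequency": 118.0}   # enrolled speaker
print(voices_match({"amplitude": 0.60, "frequency": 121.0}, stored))  # True
print(voices_match({"amplitude": 0.30, "frequency": 180.0}, stored))  # False
```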
  • The process discussed above with reference to FIGS. 3-4 may repeat itself after the user 150 gains access to the mobile computing device. For example, referring to FIG. 5, after gaining access to the mobile computing device 100, the user 150 may dictate (via a spoken phrase 260) a text message “I can't make it to dinner tonight, sorry guys, maybe next time” to a friend. The mobile computing device 100 may then record this spoken phrase 260 and select random segments of such phrase as a new voice password. For example, the words “I can't make it”, “dinner tonight”, or “next time” may each be selected as the new voice password. Again, the recording of the phrase 260 may be done with or without alerting the user 150.
  • Referring now to FIG. 6, at some time after the new voice password is selected and saved either in a local database on the mobile computing device 100 or in a remote electronic database, the user 150 is attempting to gain access to the mobile computing device 100 again. The mobile computing device 100 again prompts the user 150 to speak one of the following new voice passwords “I can't make it”, “dinner tonight”, or “next time.” In the illustrated embodiment, one or more of the previous voice passwords (e.g., “set a reminder”) may also be displayed (and they still function as effective passwords). In other embodiments, once the new voice passwords are generated, the previous voice passwords are erased and are no longer effective. In those cases, only the newly-generated voice password will be displayed.
  • If the user 150 is the authorized user whose voice was recorded for the correct password, then he/she can speak the password and have the mobile computing device 100 verify that he/she is indeed the authorized user, because the voice will match the stored password. Otherwise, the user will be denied access if he/she is an unauthorized user, as the voice will fail to match that of the stored password.
  • Based on the above discussions, it can be seen that the voice password discussed herein is a dynamic password, because it can change constantly or periodically. The changing of the voice password may occur without the user requesting to change it. Rather, a new voice password may be dynamically generated based on the voice-based interactions between the mobile computing device 100 and the user, for example through voice commands, dictations, or telephone calls. Since the voice password is generated periodically and randomly, the user himself/herself may not even know what the current voice password is until he/she is being prompted to enter it during a voice unlock process. Nevertheless, the authorized user can still gain access to the mobile computing device 100 without encountering problems, since the authorized user can readily reproduce the correct voice associated with the updated password. On the other hand, even if a voice password has been illegally obtained by an unauthorized person (e.g., by a secret recording), that person may still not be able to gain access to the mobile computing device, because such voice password may have been changed already by the time the unauthorized person attempts to use the recorded voice password to unlock the mobile computing device 100. Thus, the constantly-changing nature of the voice password means that it is harder to fake or steal, thereby increasing its effectiveness and security.
  • Based on user preferences, the mobile computing device 100 may also decide how often to update or change the voice password. For example, in a settings menu, the user may choose from options to change the voice password every day, every week, every month, or every specified number of days, etc. Based on the user selection, the mobile computing device 100 can implement the updating of the voice password accordingly. In addition, the mobile computing device 100 may implement another unlock mechanism as a back-up unlock method for the voice-unlock mechanism discussed above. For example, the face-unlock may be used as a back-up mechanism for the voice-unlock, or vice versa. Other unlock mechanisms such as text passwords, drawing predefined patterns, etc., may also be used as back-up unlock mechanisms for unlocking the mobile computing device 100 in case the voice-unlock fails. Furthermore, although the dynamic voice-unlock process is discussed above using the mobile computing device 100 (e.g., smartphones, tablet computers, laptop computers, wearable electronic devices) as an example, other computer systems may also benefit from the dynamic voice-unlock process disclosed herein. For example, a desktop workstation or a server may also be “unlocked” via the dynamic voice unlock process discussed herein. The authorized user's voice may be recorded from online chatting sessions, for example.
  • FIG. 7 is a simplified flowchart illustrating a method 300 for performing the dynamic voice-unlock process discussed above. One or more steps of the method 300 are performed by a mobile computing device of the user. In some embodiments, the mobile computing device includes a mobile telephone, a tablet computer, or a laptop computer.
  • The method 300 includes a step 305, in which the mobile computing device is engaged in a voice-based interaction with a user of the mobile computing device. The method 300 includes a step 310, in which a spoken phrase from the user is recorded during the voice-based interaction. The method 300 includes a step 315, in which a segment of the spoken phrase is selected as a password for authenticating the user. The method 300 includes a step 320, in which a recording of the segment of the spoken phrase is saved as a recorded password in a database. The method 300 includes a step 325, in which a request to gain access to the mobile computing device is received from the user. The method 300 includes a step 330, in which the user is prompted to speak the password to the mobile computing device. The method 300 includes a step 335, in which one or more words spoken by the user in response to the prompting are recorded. The method 300 includes a step 340, in which the one or more words spoken by the user are compared with the recorded password. The method 300 includes a step 345, in which the user is authenticated in response to the comparing.
  • In some embodiments, the comparing in step 340 includes matching a voice associated with the one or more words spoken by the user with a voice from the recorded password.
  • In some embodiments, the spoken phrase in step 310 contains one or more identifiable words. In some embodiments, the selecting in step 315 is performed so that at least one of the identifiable words is randomly selected as the segment of the spoken phrase.
  • In some embodiments, the engaging in step 305 includes conducting a telephone conversation. In some other embodiments, the engaging in step 305 includes performing a voice dictation. In yet other embodiments, the engaging in step 305 includes receiving a voice command from the user and initiating a task using the mobile computing device in response to the voice command.
  • In some embodiments, the recording of step 310 is performed without alerting the user.
  • It is understood that, unless otherwise specified, the steps 305-345 of the method 300 are not necessarily performed in numerical order. It is also understood that additional process steps may be performed before, during, or after the steps 305-345 in FIG. 7. For example, the spoken phrase may be a first spoken phrase, the password may be a first password, and the request may be a first request. The method 300 may further include the following steps: after the authenticating, recording, via the mobile computing device, a second spoken phrase from the user, the second spoken phrase being different from the first spoken phrase; selecting a segment of the second spoken phrase as a second password for authenticating the user; saving a recording of the segment of the second spoken phrase as a recorded second password in the database; receiving, from the user, a second request to gain access to the mobile computing device; prompting, in response to the receiving the second request, the user to speak the second password to the mobile computing device; thereafter recording one or more words spoken by the user in response to the prompting; comparing the one or more words spoken by the user with the second recorded password; and thereafter authenticating the user in response to the comparing the one or more words spoken by the user with the second recorded password.
  • In some embodiments, the method 300 may further include the following steps: receiving, from the user, a third request to gain access to the mobile computing device; prompting, in response to the receiving the third request, the user to speak one of: the first password or the second password to the mobile computing device; thereafter recording one or more words spoken by the user in response to the prompting; comparing the one or more words spoken by the user with the first recorded password or the second recorded password; and thereafter authenticating the user in response to the comparing the one or more words spoken by the user with the first recorded password or the second recorded password.
  • These additional steps, and other suitable steps of the method 300 are not specifically illustrated in FIG. 7 for reasons of simplicity.
  • Another drawback of the existing mobile computing devices relates to screen time-out. To conserve battery life, a mobile computing device may be programmed to dim its screen (i.e., display) after a period of inactivity from the user. The screen will then be turned off shortly thereafter if the user does not attempt to keep the screen on, for example by touching it. Most modern-day mobile computing devices offer the user an option to set the screen time-out period, for example anywhere from 30 seconds to a few minutes. However, if the screen time-out period is set too low, the user may be constantly interrupted by the dimming of the screen while still viewing content on the mobile computing device, which requires the user to touch the screen (or engage the mobile computing device in some other suitable manner) to indicate that he/she is still actively using it. These constant interruptions may annoy or frustrate the user. On the other hand, if the screen time-out period is set too high, the mobile computing device may be accidentally left on by the user from time to time, and the unnecessarily long screen time-out period may waste the battery of the mobile computing device.
  • The present disclosure offers an adaptive screen time-out to address the problems facing existing mobile computing devices. The adaptive screen time-out is discussed in more detail below with reference to FIGS. 8-11. Similar elements appearing in FIGS. 1 and 8-9 are labeled the same for reasons of clarity and consistency.
  • Referring to FIG. 8, the display 110 of the mobile computing device 100 has detected a period of inactivity from the user (e.g., the user 150 shown in FIG. 6). Inactivity from the user may correspond to a lack of user input, such as an absence of a touch input to the display 110. The mobile computing device 100 may have a preset default screen time-out period, which may be set by the manufacturer of the mobile computing device 100. In some embodiments, the screen time-out period may also be set by the user. For example, the user may go into a settings menu of the mobile computing device 100 and choose the default screen time-out period from a list of screen time-out periods. Alternatively, the user may also specify a screen dimming period, which occurs X seconds (e.g., 5 seconds) before the screen time-out period elapses.
  • To facilitate the following discussion, the default screen time-out period in the example below is one minute, which means the display will be turned off if no touch input is received from the user for one continuous minute. The mobile computing device 100 dims the display 110 before the display 110 is turned off, so as to give the user an opportunity to “revive” the display 110 (e.g., un-dim it) or otherwise indicate that he/she is still using the mobile computing device 100. For example, when the default screen time-out period has been set to one minute, the display 110 may be dimmed after 55 seconds of inactivity from the user, and if the user does not engage the mobile computing device 100 in some manner during the next 5 seconds (e.g., by touching the display 110), the display 110 will be turned off.
  • Suppose the user is still actively using the mobile computing device 100 (e.g., viewing a web page on the display 110). The user may then engage the mobile computing device 100. In the illustrated embodiment, the user engages the mobile computing device 100 by touching the display 110 with his/her hand 360 while the display 110 is dimmed but not turned off completely. In other embodiments, the user may engage the mobile computing device 100 by another suitable mechanism, for example by touching the display 110 with a stylus, or by voice command, or by shaking/tilting/moving the mobile computing device 100, etc.
  • Referring now to FIG. 9, when the mobile computing device 100 detects the engagement of the mobile computing device 100 from the user (e.g., the user's hand touching the display 110), the mobile computing device 100 undims the display 110. The mobile computing device 100 also increases the default screen time-out period. For example, the screen time-out period is increased from 1 minute to 2 minutes. In other words, the display 110 will be turned off after two minutes of continuous user inactivity, which also means the display 110 will be dimmed after 1 minute and 55 seconds of continuous user inactivity.
• Suppose now that 1 minute and 55 seconds has gone by without any user engagement of the display 110. The display 110 will dim again, similar to what is shown in FIG. 8. If the user is still actively using the mobile computing device 100, he/she may again engage the mobile computing device 100 in a manner similar to those discussed above. The mobile computing device 100 will undim the display 110 in response to detecting the user's engagement, and increase the screen time-out period again, for example from 2 minutes to 4 minutes. In some embodiments, the increase in the screen time-out period is linear (e.g., increasing by a predetermined amount each time). In other embodiments, the increase in the screen time-out period may be non-linear (e.g., geometric). As an example, the first increase in the screen time-out period may be from 1 minute to 2 minutes, the second increase in the screen time-out period may be from 2 minutes to 4 minutes, the third increase in the screen time-out period may be from 4 minutes to 8 minutes, etc. In alternative embodiments, the increase in the screen time-out period may be governed by a predefined algorithm.
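• As an illustration of the adaptive increase described above, the following sketch models a doubling (geometric) policy with a cap and a reset on time-out. The class and method names are hypothetical, chosen only for this example; an actual device may instead use any of the linear, geometric, or algorithmic policies mentioned above.

```python
class AdaptiveScreenTimeout:
    """Illustrative geometric back-off for the screen time-out period."""

    def __init__(self, default_s=60, dim_lead_s=5, max_s=1200):
        self.default_s = default_s    # default time-out (1 minute in the example)
        self.dim_lead_s = dim_lead_s  # screen dims X seconds before time-out
        self.max_s = max_s            # cap, e.g., 20 minutes
        self.timeout_s = default_s

    @property
    def dim_at_s(self):
        # e.g., dim at 55 seconds of inactivity for a 60-second time-out
        return self.timeout_s - self.dim_lead_s

    def on_engagement_while_dimmed(self):
        # User touched the dimmed screen: un-dim and lengthen the time-out.
        self.timeout_s = min(self.timeout_s * 2, self.max_s)

    def on_screen_timed_out(self):
        # A full period of inactivity elapsed: screen off, revert to default.
        self.timeout_s = self.default_s


t = AdaptiveScreenTimeout()
t.on_engagement_while_dimmed()  # 60 s -> 120 s
t.on_engagement_while_dimmed()  # 120 s -> 240 s
t.on_screen_timed_out()         # screen turned off: back to 60 s
print(t.timeout_s)              # 60
```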
• This process may be repeated as long as the user still engages the mobile computing device 100 in some manner to prevent the display 110 from being turned off. Consequently, the screen time-out period keeps increasing. As the screen time-out period grows, the user is interrupted less frequently by the screen dimming. Therefore, it can be seen that the screen time-out period according to the present disclosure becomes adaptive to the user's behavior. The user's engagement of the display 110 likely means that the user is still using the mobile computing device 100, and therefore the mobile computing device 100 “learns” not to time out the display 110 too soon. The increasingly longer intervals before the user is disrupted by the dimming of the display 110 result in less user frustration and annoyance. For example, within a 20-minute span, a user may have been interrupted 20 times by the screen dimming on an existing mobile computing device (i.e., dimming about every minute). In comparison, the user may be interrupted only 4 times when using the mobile computing device 100 with the adaptive screen time-out (e.g., interruptions at 1 minute, 3 minutes, 7 minutes, and 15 minutes). It can be seen that according to the present disclosure, the disruptions caused by the screen time-out occur far less frequently, thereby enhancing user satisfaction. In addition, the battery is not being wasted because the user is still using the mobile computing device 100.
• If at some point, for example when the screen time-out period has been increased to 8 minutes, user inactivity has been detected for 8 continuous minutes without the user's engagement of the mobile computing device 100, the display 110 will be turned off. In the present embodiment, the screen time-out period is then reset to the default screen time-out period, i.e., 1 minute in the example discussed above. When the user turns on the display 110 the next time, the display 110 will be timed out again after 1 minute (and be dimmed after 55 seconds). The subsequent user engagement of the mobile computing device 100 (or the lack thereof) will determine whether the screen time-out period will be adjusted.
• The longer screen time-out period hardly results in wasted battery resources, since the only scenario in which battery waste occurs is when the screen time-out period has been increased to a fairly long period AND the user has forgotten to manually turn off the display 110 after he/she finishes using it. This is unlikely to occur, but even if it does occur, the battery waste happens only once, as the screen time-out period will revert back to the (relatively) short default screen time-out period when the display 110 times itself out after user inactivity. In addition, the screen time-out period according to the present disclosure may still be capped at some maximum amount, for example ten or twenty minutes. This ensures that an inadvertent user error (i.e., forgetting to turn the display 110 off) will not lead to a catastrophic failure (e.g., the display remaining on for too long and draining most, if not all, of the battery).
• It is also understood that although the adaptive screen time-out of the present disclosure has been described using a touch-sensitive screen of a mobile computing device 100 as an example, the concept may be applied to other computing devices that rely on turning off the screen to conserve power. For example, most traditional laptops use a touchpad and/or a keyboard as inputs. The screen of the laptop may dim and then turn itself off if user inactivity has been detected for a predetermined period of time, for example if the user has not typed anything on the keyboard or touched the touchpad for X seconds or minutes. According to the adaptive screen time-out discussed above, the laptop will also increase its screen time-out period if the user engages the laptop (e.g., touching the touchpad or typing on the keyboard, or even through a speaker/microphone). Such a laptop may achieve substantially the same benefits from utilizing the adaptive screen time-out discussed above.
• Another aspect of the adaptive screen time-out of the present disclosure involves monitoring a frequency of the user engagement with the mobile computing device 100, and adjusting the screen time-out period in response to the monitored frequency. For example, while content is being displayed on the display 110, as shown in FIG. 9, the mobile computing device 100 monitors how frequently the user's hand (or stylus) touches the display 110. If a user normally touches the display 110 frequently, for example at least once every few seconds, then a prolonged inactivity for that user (for example 30 seconds or a minute) is an indication that the user is no longer using the mobile computing device 100. This is because the relatively prolonged inactivity (not touching the display for 1 minute or so) is a deviation from the user's normal behavior (e.g., touching the display 110 every few seconds), which probably indicates that the user is no longer viewing content on the display 110. On the other hand, if the user rarely touches the display 110, then even a relatively prolonged inactivity from the user does not necessarily mean that he/she is no longer using the mobile computing device 100, because such inactivity could be well within the normal behavior of such a user.
  • Thus, based on the monitored frequency of user engagement with the mobile computing device 100, the screen time-out period may be adjusted accordingly to reflect the likelihood of whether the user is using the mobile computing device 100 or not. If the user is frequently engaged with the mobile computing device 100, then the screen time-out period may be shortened, since an absence of user input even within a short time span likely means the user is no longer using the mobile computing device 100. On the other hand, if the user is infrequently engaged with the mobile computing device 100, then the screen time-out period may be lengthened, since an absence of user input even within a relatively prolonged time span does not necessarily mean that the user is no longer using the mobile computing device 100.
• In some embodiments, the mobile computing device 100 provides a plurality of predefined ranges of frequency of engagement with the mobile computing device 100. As an example, these ranges may include: a first range where the user engages with the mobile computing device 100 (e.g., by touching the display 110) every few seconds, a second range where the user engages with the mobile computing device 100 every few tens of seconds, and a third range where the user engages with the mobile computing device 100 every minute or so. It is understood, of course, that the above example does not require the user's engagements with the mobile computing device 100 to be evenly or uniformly spaced apart. For example, they may just indicate that the user engagement of the mobile computing device 100 occurs with the above-listed frequency on average (e.g., the user touches the display 110 every few seconds on average), or alternatively that a period of inactivity does not exceed the above-listed interval (e.g., the user does not go more than a few seconds without touching the display 110 while still using the mobile computing device 100).
  • The mobile computing device 100 may then associate a plurality of predefined screen time-out periods with the predefined ranges of frequency of user engagements, respectively. As an example, the user may associate a first screen time-out period of 30 seconds with the first range of frequency of user engagement with the mobile computing device 100, a second screen time-out period of 1 minute with the second range of frequency of user engagement with the mobile computing device 100, and a third screen time-out period of 5 minutes with the third range of frequency of user engagement with the mobile computing device 100 discussed above.
• The mobile computing device 100 then dynamically adjusts the screen time-out period for the display 110 based on the monitored user engagement frequency. For example, if the user has been observed to touch the display every 5 seconds or so, then the observed user engagement frequency falls within the first predefined range, and correspondingly the mobile computing device 100 sets the screen time-out period to 30 seconds. Because the user touches the display 110 so frequently, a mere 30 seconds of the display 110 not being touched is a good indication that the user is no longer using the display 110. As another example, if the user has been observed to touch the display every minute or so, then the observed user engagement frequency falls within the third predefined range, and correspondingly the mobile computing device 100 sets the screen time-out period to 5 minutes. Because the user rarely touches the display 110, a minute (or even a few minutes) of the display 110 not being touched is not necessarily a good indication that the user is no longer using the display 110. Consequently, the display 110 should be programmed to remain turned on for a little bit longer.
• In the manner discussed above, the screen time-out period is dynamically and adaptively adjusted as an inverse (though not necessarily linear) function of the frequency of user engagement with the mobile computing device 100. In other words, the more frequently the user engages the mobile computing device 100 (e.g., touching the display 110), the shorter the screen time-out period is set. The less frequently the user engages the mobile computing device 100, the longer the screen time-out period is set.
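• A minimal sketch of this inverse, step-wise mapping is shown below, using the three example ranges and time-out periods discussed above. The function name and the exact thresholds are assumptions for illustration only.

```python
def timeout_for_engagement(avg_interval_s: float) -> int:
    """Return a screen time-out period (in seconds) as an inverse, step-wise
    function of the average interval between the user's touch inputs."""
    if avg_interval_s <= 10:     # first range: touches every few seconds
        return 30                # 30-second time-out
    elif avg_interval_s <= 45:   # second range: every few tens of seconds
        return 60                # 1-minute time-out
    else:                        # third range: every minute or so
        return 300               # 5-minute time-out

print(timeout_for_engagement(5))   # 30  (frequent toucher)
print(timeout_for_engagement(60))  # 300 (infrequent toucher)
```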
  • In some embodiments, the monitoring of the user frequency of engagement with the mobile computing device 100 is done consistently, for example every time the user uses the mobile computing device 100. In other embodiments, however, the monitoring of the user frequency of engagement with the mobile computing device 100 is done at predetermined time intervals, for example every few hours, days, or even weeks.
• It is understood that while user engagement with the mobile computing device 100 is described herein using touches of the display 110 as an example, the user engagement of the mobile computing device 100 may take other forms in alternative embodiments, such as moving the mobile computing device 100, tilting the mobile computing device 100, talking to the mobile computing device 100, etc.
• In some embodiments, the mobile computing device 100 may be compatible with multiple user profiles, where multiple authorized users may each be authorized to use the mobile computing device 100. In these cases, the present disclosure may track each user's frequency of engagement with the mobile computing device 100 and associate that engagement frequency with the user's profile. For example, a husband and a wife may both be authorized users of the mobile computing device 100, so they each have an account with the mobile computing device 100. Through monitoring their frequency of engagement with the mobile computing device 100, the mobile computing device 100 determines that the husband user tends to engage the mobile computing device 100 frequently, while the wife user tends to engage the mobile computing device 100 infrequently. These engagement tendencies may be electronically stored with their respective user profiles.
  • Thereafter, when the husband user is using the mobile computing device 100, the screen time-out period is dynamically adjusted to be relatively short. On the other hand, when the wife user is using the mobile computing device 100, the screen time-out period is dynamically adjusted to be relatively long. These adjustments are done automatically without necessarily requiring the user's request to do so. Of course, the user may still actively override these dynamically adjusted screen time-out periods via the settings. In addition, the frequency of user engagement with the mobile computing device 100 associated with each user profile may also be updated at predetermined time intervals, for example every few hours, days, or weeks.
• In some embodiments, the mobile computing device 100 also takes into account the particular application the user is running while the engagement frequency is monitored. For example, the user may have different engagement frequencies with the mobile computing device 100 with respect to a web page application and an email application. Thus, the predefined screen time-out periods discussed above may be associated with a particular application as well. For example, if user inactivity has been observed for 20 seconds while the user is browsing a web site, the screen time-out period may be adjusted to a particular predefined screen time-out period. But if user inactivity (for the same user) has been observed for 20 seconds while the user is using an email client, the screen time-out period may be adjusted to a different predefined screen time-out period. Again, this is due to the fact that even the same user may have different tendencies of engaging the mobile computing device 100 while using different applications (or programs) on the mobile computing device 100.
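• The per-profile and per-application behavior described in the preceding paragraphs amounts to a lookup keyed on both the user profile and the running application. The sketch below illustrates one way this could be structured; all names and values are hypothetical.

```python
# Time-out lookup keyed on (user profile, application); values in seconds.
TIMEOUTS_S = {
    ("husband", "web"):   30,  # frequent engager browsing: short time-out
    ("husband", "email"): 60,
    ("wife", "web"):     300,  # infrequent engager: longer time-out
    ("wife", "email"):   180,
}

def screen_timeout(profile: str, app: str, default_s: int = 60) -> int:
    # Fall back to the device default when no tendency has been learned yet.
    return TIMEOUTS_S.get((profile, app), default_s)

print(screen_timeout("wife", "web"))   # 300
print(screen_timeout("guest", "web"))  # 60 (fallback to default)
```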
  • It can also be appreciated that all the various aspects of the present disclosure with respect to dynamic screen time-out can be combined in certain embodiments. For instance, the mobile computing device 100 may derive an initial (or default) screen time-out period based on monitoring the frequency of user engagement with the mobile computing device 100. This initial screen time-out period may then be dynamically adjusted subsequently by the user's engagement with the mobile computing device 100 when the display 110 is dimmed.
• FIG. 10 is a simplified flowchart illustrating a method 400 for performing the dynamic screen time-out discussed above. One or more steps of the method 400 are performed by a mobile computing device of the user. In some embodiments, the mobile computing device includes a mobile telephone, a tablet computer, a laptop computer, or a wearable electronic device such as a smart watch or a smart glass.
  • The method 400 includes a step 405, in which a default screen time-out period is received for a mobile computing device. The screen time-out period specifies an amount of time that passes without the mobile computing device receiving any touch input from a user before a screen of the mobile computing device turns off. The method 400 includes a step 410, in which content is displayed on the screen of the mobile computing device. The method 400 includes a step 415, in which the screen is dimmed after the content has been displayed, without receiving any touch input from the user, for an amount of time shorter than the default screen time-out period by X number of seconds. X may be a predefined integer, for example. The method 400 includes a step 420, in which a touch input is received from the user. The method 400 includes a step 425, in which the screen is undimmed in response to receiving the touch input. The method 400 includes a step 430, in which the default screen time-out period is increased in response to receiving the touch input. The default screen time-out period is increased to a first new screen time-out period, the first new screen time-out period being longer than the default screen time-out period. The method 400 includes a step 435, in which the screen is turned off after the first new screen time-out period has passed without receiving any touch input from the user. The method 400 includes a step 440, in which the first new screen time-out period is reset back to the default screen time-out period.
  • In some embodiments, the screen of the mobile computing device is a touch-sensitive screen. The touch input from the user is received through the touch-sensitive screen.
  • In some embodiments, the mobile computing device comprises a touch pad that is separate from the screen of the mobile computing device. The touch input from the user is received through the touch pad.
  • In some embodiments, the step 405 of receiving the default screen time-out period includes the following sub-steps: prompting the user to specify a value for the default screen time-out period; and designating a user-specified value as the default screen time-out period. In other embodiments, the step 405 of receiving the default screen time-out period includes the following sub-steps: prompting the user to enter a screen dimming period that specifies an amount of time that passes without the mobile computing device receiving any touch input from the user before the screen of the mobile computing device dims; and calculating the default screen time-out period based on a user-entered screen dimming period.
• It is understood that, unless otherwise specified, the steps 405-440 of the method 400 are not necessarily performed in numerical order. It is also understood that additional process steps may be performed before, during, or after the steps 405-440 in FIG. 10. For example, the method 400 may further include the following steps: continuing the displaying of the content on the screen of the mobile computing device; dimming the screen after the content has been displayed for a further amount of time shorter than the default screen time-out period by X number of seconds; thereafter receiving, from the user, a further touch input to the screen; un-dimming the screen in response to receiving the further touch input; and increasing, in response to receiving the further touch input, the first new screen time-out period to a second new screen time-out period.
• FIG. 11 is a simplified flowchart illustrating a method 500 for performing the dynamic screen time-out discussed above. One or more steps of the method 500 are performed by a mobile computing device of the user. In some embodiments, the mobile computing device includes a mobile telephone, a tablet computer, a laptop computer, or a wearable electronic device such as a smart watch or a smart glass.
  • The method 500 includes a step 505, in which a screen time-out period is set for a mobile computing device. The screen time-out period specifies an amount of time that passes without the mobile computing device receiving any touch input from a user before a screen of the mobile computing device turns off. The method 500 includes a step 510, in which content is displayed on the screen of the mobile computing device. The method 500 includes a step 515, in which a frequency of touch inputs from the user is monitored while the content is being displayed. The method 500 includes a step 520, in which a plurality of predefined ranges of frequency of touch inputs is provided. The method 500 includes a step 525, in which a plurality of different predefined screen time-out periods is associated with the predefined ranges of frequency of touch inputs, respectively. The method 500 includes a step 530, in which a first predefined range is identified. The first predefined range encompasses the monitored frequency of touch inputs. The method 500 includes a step 535, in which the screen time-out period is adjusted in response to the monitoring. The adjusting the screen time-out period includes setting the predefined screen time-out period associated with the first predefined range as a new screen time-out period. In some embodiments, the adjusting the screen time-out period includes setting a new screen time-out period as an inverse function of the frequency of touch inputs from the user.
• It is understood that, unless otherwise specified, the steps 505-535 of the method 500 are not necessarily performed in numerical order. It is also understood that additional process steps may be performed before, during, or after the steps 505-535 in FIG. 11. For example, the method 500 may further include a step of associating the monitored frequency of touch inputs with a user profile. As another example, the method 500 may further include a step of adjusting the screen time-out period as a function of an application running on the mobile computing device. For reasons of simplicity, additional steps are not discussed herein.
  • Another drawback of existing mobile computing devices relates to their lock screen versatility. Existing mobile computing devices may allow the user to perform an unlock function via the lock screen, or launch shortcuts to one or more applications of the mobile computing devices from the lock screen directly. However, the lock screens of existing mobile computing devices may not offer the customizability and security expected for an advanced mobile computing device.
  • To overcome the problems of the lock screens associated with existing mobile computing devices, the present disclosure offers a lock screen that is customizable and offers security control.
  • Referring to FIG. 12, an example lock screen 550 is illustrated on the display 110 of the mobile computing device 100. The lock screen 550 may contain information such as time of the day, the day of the week, and the date. The lock screen 550 may also contain a plurality of icons, some examples of which are illustrated in FIG. 12 as icons 560-563. These icons 560-563 each represent a different application that can be run on the mobile computing device 100. Stated differently, each application includes a task that is executable on the mobile computing device 100.
• In the embodiment illustrated, these icons 560-563 correspond to “Phone”, “Web”, “Email”, and “Gallery”, respectively. A user may perform a predefined engagement action with one of the icons 560-563, such as clicking on the icon or dragging it across a portion of the lock screen 550, to directly launch the application associated with the icon. If no additional security checks are implemented (e.g., prompting the user to enter a password), the launching of the application also unlocks the mobile computing device 100. However, the lack of additional security checks may not be desirable for some users, who prefer to restrict access to the mobile computing device 100 to authorized users only. On the other hand, if a security mechanism such as a password is required, the user essentially must go through two steps to unlock the mobile computing device 100: first, choose an application to launch by performing an action with the icon representing the application on the lock screen; and second, enter the necessary password (or pass an alternative security verification mechanism such as the face-unlock and/or voice-unlock discussed above) before the application is actually launched. If the user has to go through such unlocking procedures every time he/she wants to use the phone, it may prove to be cumbersome and annoying to the user.
  • According to the various aspects of the present disclosure, the mobile computing device 100 implements a lock screen from which the user can quickly launch applications by entering one or more symbols predefined by the user. In some embodiments, the mobile computing device 100 performs a handwriting analysis or image comparison analysis on the symbol entered by the user with a previous symbol defined by the user and stored in a digital memory of the mobile computing device 100. The handwriting analysis or the image comparison analysis each serves as a security check for user authentication. The details of such lock screen are discussed below with reference to FIGS. 13-26.
  • Referring to FIG. 13A, the mobile computing device 100 displays a list of names for applications that can be directly launched from the lock screen. Note that the list of applications shown in FIG. 13A is merely an example and does not necessarily include every single available application. In addition, the names of applications may be different from embodiment to embodiment. Furthermore, in some alternative embodiments, the icons representing the applications may be displayed instead of, or in addition to, the names of the applications.
• The list of applications shown in FIG. 13A is displayed when the user has gained access to the mobile computing device 100 at some point (via whatever suitable method, such as password-unlock, face-unlock, voice-unlock, etc.). The user has also decided to customize the applications that are launch-able from the lock screen. Therefore, the user may invoke the display shown in FIG. 13A through appropriate settings on the mobile computing device 100. The user may now select a particular application for which a customized launch shortcut is to be generated. In the example shown in FIG. 13A, the application selected is “Phone.”
  • Referring to FIG. 13B, the mobile computing device 100 prompts the user to draw a symbol for launching the application “Phone” from the lock screen. The mobile computing device 100 displays an empty box 570 inside which the user can draw the symbol. The user may draw the symbol either with his/her finger or with a stylus, or another suitable device for interacting with the touch-sensitive interface of the mobile computing device 100. In the example shown in FIG. 13B, the user draws a letter “p” to represent the application “Phone.” The mobile computing device 100 then associates the user-defined symbol (i.e., the letter “p” in this example) with the application “Phone” and saves the association into the digital or electronic memory of the mobile computing device 100. In some embodiments, the mobile computing device 100 also conducts an electronic handwriting analysis on the symbol drawn by the user. The characteristics of the symbol drawn by the user in accordance with the electronic handwriting analysis may also be saved into the memory of the mobile computing device 100. In other embodiments, the mobile computing device 100 may record an image of the symbol defined by the user.
  • FIGS. 14A-14B to FIGS. 16A-16B illustrate additional examples for selecting an application (to be launched from the lock screen) and defining symbols to be associated with the selected application. In FIGS. 14A-14B, the application selected is “Movies”, and the user draws a letter “m” to be associated with the application “Movies.” The letter chosen to represent the application need not be the initial letter of the name of the application though. For example, in FIGS. 15A-15B, the application “Music” is selected, and since its first letter is “m” just like the applications “Maps” and “Movies”, it may not make sense for the user to assign the same letter to all three applications. Thus, the user may assign the letter “u” to the application “Music.”
• In other embodiments, the user may assign a letter that does not even appear in the name of the application, as long as the user can remember such association. Moreover, the symbol defined by the user need not even be a letter in a recognized alphabet. For example, as shown in FIGS. 16A-16B, the user can draw a spiral-like symbol to represent and be associated with the selected application “Compass.” The mobile computing device may once again conduct a handwriting analysis on each user-defined symbol, or record an image of each user-defined symbol, and save them into an electronic memory.
• Using the example shown in FIGS. 13A-13B, where the user defined a hand-drawn letter “p” to be associated with the application “Phone”, the user may now be able to directly launch the application “Phone” from the lock screen 550, as shown in FIG. 17. For example, after turning on the display 110 and arriving at the lock screen 550, the user may use his/her hand 360 (or a stylus) to draw the letter “p” on the lock screen 550. The mobile computing device 100 detects this gesture-based input through the touch-sensitive display 110 and performs a comparison between the symbol that was just entered by the user and a list of symbols previously defined by the user. In some embodiments, the comparison involves conducting another handwriting analysis on the symbol just entered by the user in an attempt to launch the “Phone” application from the lock screen. Based on the results of the handwriting analysis, the mobile computing device 100 determines whether the symbol just entered by the user sufficiently matches any of the previously-defined symbols. If the answer is yes, then the corresponding application (in this case the “Phone” application) is launched directly from the lock screen, as shown in FIG. 18. An extra security verification step is bypassed, since the handwriting analysis can verify that it is indeed the authorized user who is trying to launch the application (and unlock the mobile computing device 100 in the process). Of course, if the handwriting analysis results indicate that the symbol just entered by the user does not match up with any of those symbols previously defined, the mobile computing device 100 may deny access to the user, or in the alternative ask the user to go through a back-up security verification process (e.g., password unlock, face-unlock, etc.).
• In some other embodiments, instead of performing a handwriting analysis on the symbol just entered, the mobile computing device 100 may simply record an image of the just-entered symbol and compare that image with the images of the list of previously-defined symbols. A machine/computer analysis may be done on these images to see if any of the images of the previously-defined symbols sufficiently matches the image of the symbol just captured. If the answer is yes, the mobile computing device 100 can then directly launch the application whose associated symbol image matches the image of the symbol just entered by the user. If the answer is no, then additional user authentication steps may be needed, or the user may be denied access to the mobile computing device 100. Once again, not only is the desired application directly launched from the lock screen for the user's convenience, but the extra step of security verification is also bypassed, as the symbol image comparison analysis serves the purpose of the security check.
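• The following sketch illustrates the overall match-and-launch flow described above. The match_score() function is a placeholder standing in for whichever handwriting analysis or image comparison the device actually performs; all names and the threshold value are assumptions.

```python
def match_score(entered: str, stored: str) -> float:
    """Placeholder similarity in [0, 1]; a real device would compare
    handwriting characteristics or symbol images here."""
    return 1.0 if entered == stored else 0.0

def handle_lock_screen_symbol(entered: str, defined: dict, threshold: float = 0.8) -> str:
    """defined: mapping of previously-defined symbol -> application name."""
    for stored, app in defined.items():
        if match_score(entered, stored) >= threshold:
            return f"launch {app}"        # unlock and launch directly
    return "fallback security check"      # e.g., password or face-unlock

symbols = {"p": "Phone", "u": "Music"}
print(handle_lock_screen_symbol("p", symbols))  # launch Phone
print(handle_lock_screen_symbol("z", symbols))  # fallback security check
```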
• In yet some other embodiments, where the user actually prefers to have an extra security verification step, the mobile computing device 100 need perform neither the handwriting analysis nor the symbol image comparison analysis. Instead, the mobile computing device 100 merely needs to determine which letter the symbol that was just entered by the user corresponds to. In other words, if the symbol previously defined by the user was deemed to constitute the letter “p”, then the symbol that was just entered by the user does not necessarily have to match up with the previously-entered “p” in terms of handwriting characteristics. As long as the mobile computing device 100 can determine that the symbol just entered corresponds to the letter “p”, the mobile computing device 100 can directly launch the “Phone” application from the lock screen. Even though these embodiments of the lock screen lack the security aspect, they still offer versatility compared to conventional lock screens, since the user may launch any number of applications (as long as he/she has already defined their corresponding symbols), rather than just the four or five applications whose icon representations are located on the conventional lock screen.
  • FIGS. 19-20 illustrate another example of directly launching an application from the lock screen 550 based on a user-defined symbol. In the example shown in FIG. 19, the symbol entered by the user corresponds with the spiral-like symbol previously defined by the user to be associated with the “Compass” application, as shown in FIGS. 16A-16B. After the user draws the spiral-like symbol on the display 110, the mobile computing device 100 may perform another handwriting analysis or symbol image comparison analysis to determine whether the symbol just entered by the user matches up with any of the previously-defined symbols.
• In some embodiments, if the user-defined symbol is not within any well-known alphabet, the mobile computing device 100 may employ a lower threshold for verifying the just-entered symbol. The rationale is that an unauthorized user who is trying to gain illegal access to the mobile computing device 100 is less likely to know about the existence of such an uncommon symbol. This means that if the uncommon symbol (or something similar thereto) was entered by a user, there is a higher likelihood that such symbol was actually entered by the authorized user who knew of the uncommon symbol's existence and previous association with one of the applications. Another rationale is that, unlike familiar letters in a well-known alphabet, the user may not have developed a patterned way of writing the uncommon symbol (such as the spiral-like symbol shown in FIGS. 16B and 19). Thus, there may be a greater degree of deviation in the symbol being drawn from time to time. The mobile computing device 100 may be programmed to “forgive” or “overlook” minor deviations between the previously-defined symbol and the one just entered, since these minor deviations do not necessarily indicate that the symbol was just drawn by an unauthorized user. In comparison, for the symbols that correspond to familiar letters in a well-known alphabet, the mobile computing device 100 may employ a stricter standard of verification to ensure that the symbol was indeed entered by the authorized user. In any case, once the spiral-like symbol entered by the user in FIG. 19 is deemed to match the spiral-like symbol previously stored in FIG. 16B, the mobile computing device 100 directly launches the “Compass” application, as shown in FIG. 20.
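• A small sketch of the variable verification standard described above, assuming (for illustration only) that “well-known alphabet” is approximated by single ASCII letters and that similarity is scored in [0, 1]:

```python
def verification_threshold(symbol_label: str) -> float:
    # Familiar alphabet letters get a stricter standard; free-form
    # symbols a more forgiving one, since they vary more between attempts.
    if len(symbol_label) == 1 and symbol_label.isalpha():
        return 0.90  # strict: the user writes familiar letters consistently
    return 0.70      # forgiving: uncommon symbols deviate more

print(verification_threshold("p"))       # 0.9
print(verification_threshold("spiral"))  # 0.7
```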
• It is also understood that the symbol defined by the user and the application to be associated need not necessarily have a one-to-one correspondence. In some embodiments, multiple applications may be associated with the same symbol. For example, referring to FIGS. 21A-21B, since the applications “Maps”, “Movies”, and “Music” each have the letter “m” in their names, the user may be allowed to define a hand-written letter “m” to be associated with all three of these applications. Thereafter, once the user enters the letter “m” (that matches up with the previously-defined letter “m”) in the lock screen 550, as shown in FIG. 22, the mobile computing device 100 may prompt the user to choose which one of the applications associated with the letter “m” should be launched, for example as shown in FIG. 23. The user may choose the “Music” app, for example, and the mobile computing device 100 will launch the “Music” app, which is shown in FIG. 24.
• Similarly, in some embodiments, multiple symbols may be associated with the same application. For example, after the user has already drawn the letter “p” as a symbol to be associated with the application “Phone” as shown in FIGS. 13A-13B, the user may additionally define another symbol 580 to also be associated with the application “Phone”, as shown in FIGS. 25A-25B. In the example illustrated in FIGS. 25A-25B, the symbol 580 is not a common letter from a well-known alphabet, but rather a user-customized symbol that somewhat resembles a virtual representation of a telephone. Of course, the user may define any arbitrary or random symbol to be associated with any of the applications, as long as the user can remember the symbols and their associations. In this example, the user may also launch the “Phone” application by drawing the symbol 580 on the lock screen.
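• The many-to-many association described in this and the preceding paragraph can be illustrated with a simple mapping from symbols to lists of applications, where a symbol mapping to several applications triggers the disambiguation prompt of FIG. 23. The names and entries below are hypothetical.

```python
symbol_to_apps = {
    "m": ["Maps", "Movies", "Music"],  # one symbol, three applications
    "p": ["Phone"],
    "phone-glyph": ["Phone"],          # a second symbol for the same application
}

def apps_for_symbol(symbol: str) -> str:
    apps = symbol_to_apps.get(symbol, [])
    if not apps:
        return "no match"
    if len(apps) > 1:
        return f"prompt user to choose among {apps}"  # cf. FIG. 23
    return f"launch {apps[0]}"

print(apps_for_symbol("m"))  # prompt user to choose among [...]
print(apps_for_symbol("p"))  # launch Phone
```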
• FIG. 26 is a simplified flowchart illustrating a method 600 for launching applications from the lock screen based on entering a user-defined symbol as discussed above. One or more steps of the method 600 are performed by a mobile computing device of the user. In some embodiments, the mobile computing device includes a mobile telephone, a tablet computer, a laptop computer, or a wearable electronic device such as a smart watch or a smart glass.
  • The method 600 includes a step 605 of associating a plurality of tasks executable by a mobile computing device with a plurality of predefined symbols, respectively. In some embodiments, one of the tasks is an unlocking of the mobile computing device. The method 600 includes a step 610 of turning on a touch-sensitive display of the mobile computing device in response to a request from a user, the display showing a lock screen. The method 600 includes a step 615 of detecting, through the touch-sensitive display, a gesture input from the user. The gesture input represents one of the predefined symbols that has been associated with one of the tasks. In some embodiments, the gesture input is made by a finger of the user. In other embodiments, the gesture input is made by a stylus of the user. The method 600 includes a step 620 of executing the task associated with the detected symbol from the gesture input.
  • In some embodiments, the predefined symbols include letters of an alphabet. In these embodiments, the associating in step 605 is performed so that a name of each task contains a respective one of the letters that has been associated with the task.
• In some embodiments, the associating in step 605 includes the following sub-steps: displaying the tasks to the user; prompting the user to associate the predefined symbols to the tasks, respectively; and receiving, from the user, associations between the tasks and the predefined symbols. In these embodiments, the step of displaying the tasks may include a step of listing names of the tasks or a step of displaying virtual representations of the tasks. In some embodiments, the step of prompting includes prompting the user to define one of the symbols by drawing the symbol on the touch-sensitive display. In some embodiments, the method 600 may further include the following steps: storing the symbol drawn by the user, wherein the storing comprises storing handwriting characteristics of the user; conducting, after the detecting of the gesture input from the user, a handwriting analysis with respect to the detected symbol; comparing the handwriting characteristics of the stored symbol with handwriting characteristics of the detected symbol; granting the user access to the mobile computing device if the handwriting characteristics of the stored symbol match the handwriting characteristics of the detected symbol; and denying the user access to the mobile computing device if the handwriting characteristics of the stored symbol fail to match the handwriting characteristics of the detected symbol. In some embodiments, the step of comparing includes the following sub-steps: determining whether the detected symbol belongs to a well-known alphabet; employing a higher verification standard if the detected symbol belongs to a well-known alphabet; and employing a lower verification standard if the detected symbol does not belong to a well-known alphabet.
  • In some embodiments, the associating in step 605 includes associating multiple symbols to at least one of the tasks.
  • In some embodiments, the associating in step 605 includes associating multiple tasks to at least one of the symbols.
• It is understood that, unless otherwise specified, the steps 605-620 of the method 600 are not necessarily performed in numerical order. It is also understood that additional process steps may be performed before, during, or after the steps 605-620 in FIG. 26. For example, the method 600 may further include a step of presenting the user with an additional security verification mechanism after the detecting in step 615 but before the executing of the task in step 620. The additional security verification mechanism may include a password-unlock, a face-unlock, or a voice-unlock. For reasons of simplicity, other additional steps are not specifically discussed in FIG. 26 herein.
• Another drawback of existing mobile computing devices relates to their audio volume control. These devices allow their users to adjust the volume, for example through a volume slider bar or via a mute/unmute button. However, existing mobile computing devices are not “intelligent” enough to automatically adjust the volume settings based on factors such as environment, context, or location. As an example, suppose a user prefers to put his/her phone (an example mobile computing device) in a “vibrate” or “mute” mode so that incoming calls or emails do not disturb him/her, or at least not in a loud audible manner. However, that user may also need to use the phone as a navigation device from time to time, in which case the user may need audio navigation instructions from the phone. Therefore, the user would have to manually increase the volume of the phone before or during navigation. This may be difficult if the user is driving a car, as the user needs to fiddle with the volume settings of the phone while he/she is supposed to be paying full attention to the road. This is not only frustrating to the user, but also dangerous. Even if the user is not using navigation that requires a loud audio output, the user may still prefer to have the phone not on “vibrate” or “mute”, because incoming phone calls and/or messages may be missed due to the loud noise produced by the car during driving. Again, fiddling with the audio controls of the phone while driving is not desirable.
  • Even if the user has managed to safely increase the audio output volume to a desirable level while the user is in the car, another potential complication may arise if the user forgets to put the phone back into the “vibrate” or “mute” mode after reaching the target destination. Thereafter, the user may be surprised by a loud phone call or incoming message at an inopportune time, since the phone's audio output volume was increased while the user was in the car. This may be disruptive if the user is in a place where mobile devices are supposed to be turned off or at least remain quiet, such as in a meeting, a church service, a movie theatre, or even if the user is merely attempting to sleep. The user may forget to change the volume settings of the phone to comply with these situations because he/she may still be operating under the assumption that the phone was already in a “vibrate” or “mute” mode, since that is the standard default setting for the phone. Again, this may lead to user frustration with the phone.
  • To overcome these issues discussed above with the volume controls (or the lack thereof) with existing phones, the present disclosure offers a contextually-aware mobile computing device. In the example discussed below, the contextually-aware mobile computing device is a smartphone, but it is understood that the contextually-aware mobile computing device may be a tablet computer, a laptop computer, or a wear-able electronic device such as a smart watch or glass in other embodiments.
  • FIG. 27 illustrates a simplified environment in which the contextually-aware smartphone of the present disclosure operates. Referring to FIG. 27, a vehicle 650 is illustrated. The vehicle 650 may be an automobile as illustrated, but may be another type of transportation device in other embodiments. A mount 660 is disposed within the vehicle 650. The mount 660 is configured to hold and interface with a mobile computing device, such as the mobile computing device 100 discussed above. In some embodiments, the mount 660 is pre-installed in the vehicle 650. In other embodiments, the mount 660 may be an aftermarket part that a user of the vehicle 650 installs in the vehicle. A smartphone 670 is also located in the vehicle 650. In the illustrated embodiment, the smartphone 670 is mounted on the mount 660, but this may not be necessary in alternative embodiments.
• The smartphone 670 detects that it has been placed in the vehicle 650. In some embodiments, the detection is made in response to the smartphone 670 communicating with an electronic communications interface of the mount 660. The electronic communications interface may include one or more ports that mate with those on the smartphone 670, such as USB ports. Since the mount is located in the vehicle 650, the smartphone 670 “knows” that it must be in the vehicle 650 now. In other embodiments, the smartphone 670 is equipped with sensors such as accelerometers. These sensors can be used to determine whether the smartphone 670 is moving at a fast speed, for example faster than the average running speed of a human. The detection of the smartphone 670 traveling at such a high speed is another indication that the smartphone 670 is now located in a vehicle. In some embodiments, the smartphone 670 is determined to be in a vehicle only if the speed at which the smartphone 670 is traveling is faster than the maximum speed of a human but still within a normal range of speeds for an automobile, for example in a range from about 15 miles per hour to about 120 miles per hour. If the smartphone 670 is traveling at a speed faster than this range, then that might be an indication that the smartphone 670 (and its user) is actually in an aircraft, rather than an automobile, in which case the smartphone 670 should remain silent.
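• A sketch of the two detection signals described above (the mount's electronic interface and the sensor-measured speed) might combine them as follows. The 15-120 mph window comes from the example in the text; the function name and everything else are assumptions.

```python
def is_in_vehicle(docked: bool, speed_mph: float) -> bool:
    """Deem the device to be in a vehicle if it is on the in-car mount,
    or if it moves within a plausible automobile speed range."""
    in_auto_speed_range = 15.0 <= speed_mph <= 120.0
    # Some embodiments may instead require both signals, i.e., "and".
    return docked or in_auto_speed_range

print(is_in_vehicle(docked=True, speed_mph=0.0))     # True  (on the mount)
print(is_in_vehicle(docked=False, speed_mph=65.0))   # True  (highway speed)
print(is_in_vehicle(docked=False, speed_mph=500.0))  # False (likely an aircraft)
```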
  • After the smartphone 670 detects that it is inside of the vehicle 650, it increases the audio output volume on the smartphone 670. In some embodiments, the audio output volume is automatically set to a maximum volume. In other embodiments, the audio output volume is automatically increased to a volume greater than the volume before it is placed within the vehicle 650. This volume need not necessarily be the maximum volume, for example it can be a volume preset by the user, such as 80% or 75% of the maximum volume. By doing so, the user need not fiddle with the volume controls on the smartphone 670 while trying to obtain navigation instructions. The user also need not worry about missing a potential phone call, email, or text message while driving, as the loud audio output volume of the smartphone 670 is likely to alert the user of the new call/email/text message (though for safety reasons, the user need not answer these calls/emails/text messages). In some embodiments, the automatic increase of the audio output volume temporarily overrides a “vibrate” or “mute” setting (i.e., the previous audio setting) for the smartphone 670.
  • In some embodiments, the automatic increase of the audio output volume of the smartphone 670 can be made more restrictive. For example, if the user wants the smartphone 670 to be loud only for receiving navigation directions, but does not mind potentially missing phone calls or other incoming alerts, then the audio output volume can be programmed to automatically increase only if the user has requested navigation instructions. Therefore, in such embodiments, the mere fact that the smartphone 670 has detected its placement within the vehicle 650 will not necessarily trigger the automatic increase of its audio output volume. The smartphone 670 must also detect a request to receive navigation instructions from a user before the audio output volume is automatically increased.
  • In some embodiments, once the smartphone 670 determines that it has been placed inside of the vehicle 650, it first detects and records its current volume setting before increasing the volume. For example, if the smartphone 670 was on “vibrate” right before it was placed inside of the vehicle 650, this “vibrate” setting is recorded by the smartphone 670 before the volume is increased (e.g., increased to a maximum volume).
  • Referring to FIG. 28, after the user 150 finishes the driving session and takes the smartphone 670 outside of the vehicle 650, the smartphone 670 will detect this event. In some embodiments, the smartphone 670 detects it being taken outside of the vehicle 650 by detecting a break with the electronic communication interface of the mount 660. In some other embodiments, the smartphone 670 detects it being taken outside of the vehicle 650 by detecting that the speed of the smartphone is no longer within the range of a normal operating speed of the vehicle 650. In yet some other alternative embodiments, the smartphone 670 may require both a break with the electronic communication interface of the mount 660 AND a detection of a speed outside of the normal operating speed of the vehicle to deem that the smartphone 670 has been taken outside of the vehicle 650.
• In any case, after the smartphone 670 deems that it is no longer inside the vehicle 650, it will reset the audio output volume to the volume setting recorded right before the smartphone 670 was placed inside the vehicle 650 (and thus before the audio output volume of the smartphone 670 was increased). This ensures that the user need not worry about resetting the volume control settings manually after reaching the target destination. In other words, by being contextually-aware (i.e., detecting that it is no longer inside the vehicle 650), the smartphone 670 is automatically programmed to perform a task (i.e., restoring the volume to its previous setting) that the user would otherwise have had to do once the target destination is reached. Therefore, in the example above, the smartphone 670 restores its audio output volume back to a “vibrate” mode after detecting that it is no longer inside the vehicle 650, since the “vibrate” setting was the previous audio setting used for the smartphone 670 prior to it being placed inside the vehicle 650. Similarly, had the previous audio setting been 50% of the maximum audio output volume, the smartphone 670 would restore the volume to 50% after the smartphone 670 is taken outside of the vehicle 650. By resetting the audio output volume to its previous setting, the user will not be potentially disrupted by loud phone calls, emails, or messages in an inappropriate environment.
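• The record-increase-restore behavior described above can be summarized in a short sketch. The class below is illustrative only; the in-vehicle volume level and all names are assumptions.

```python
class VolumeController:
    def __init__(self, volume="vibrate"):
        self.volume = volume       # e.g., "vibrate", "mute", or 0-100
        self._saved = None

    def on_entered_vehicle(self, in_vehicle_volume=100):
        self._saved = self.volume  # record the previous audio setting first
        self.volume = in_vehicle_volume

    def on_exited_vehicle(self):
        if self._saved is not None:
            self.volume = self._saved  # restore, e.g., back to "vibrate"
            self._saved = None

vc = VolumeController(volume="vibrate")
vc.on_entered_vehicle()
print(vc.volume)  # 100
vc.on_exited_vehicle()
print(vc.volume)  # vibrate
```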
• It is understood that the contextually-aware audio output volume adjustment mode discussed above may be entirely disabled by the user (e.g., through the necessary settings) if the user so desires. However, if such mode is enabled, the smartphone 670 can automatically output a loud volume in a vehicle when it is desired, and automatically restore the smartphone 670 to its original volume once it is outside the vehicle, without requiring user intervention. As such, user satisfaction with respect to using the smartphone 670 should be increased.
• FIG. 29 is a simplified flowchart illustrating a method 700 for operating the contextually-aware mobile computing device discussed above. In some embodiments, the mobile computing device includes a mobile telephone, a tablet computer, a laptop computer, or a wearable electronic device such as a smart watch or a smart glass.
• The method 700 includes a step 705, in which a determination is made that a mobile computing device has been placed in a vehicle. The mobile computing device has a programmable audio output volume. In some embodiments, the vehicle is an automobile. The method 700 includes a step 710, in which an audio output volume setting of the mobile computing device is recorded after the determination is made that the mobile computing device is placed in the vehicle. The method 700 includes a step 715, in which, in response to the determination made in step 705, the audio output volume of the mobile computing device is automatically increased to a predefined audio output volume. The method 700 includes a step 720, in which a request to provide navigational instructions is received from a user. The method 700 includes a step 725, in which the navigational instructions are provided to the user. The navigational instructions include audio navigational instructions at the predefined audio output volume. The method 700 includes a step 730, in which, after the audio output volume is automatically increased, a detection is made that the mobile computing device has been taken out of the vehicle. The method 700 includes a step 735, in which the mobile computing device is automatically restored to the recorded audio output volume setting.
  • In some embodiments, the predefined output volume is a maximum output volume of the mobile computing device.
  • In some embodiments, the determining step in 705 includes a step of detecting that the mobile computing device has been plugged into a dock inside the vehicle. In some embodiments, the step of detecting includes detecting that one or more ports of the mobile computing device have been connected to an electronic interface of the dock.
  • In some embodiments, the mobile computing device includes one or more sensors that measure a movement speed of the mobile computing device, in which case the step of determining in step 705 includes: a step of measuring, via the one or more sensors, the movement speed of the mobile computing device; and a step of determining that the mobile computing device has been placed in the vehicle in response to the measured movement speed of the mobile computing device being within a predefined speed range. In some embodiments, the predefined speed range is above a maximum speed of a human and within a normal speed range of the vehicle.
• It is understood that, unless otherwise specified, the steps 705-735 of the method 700 are not necessarily performed in numerical order. It is also understood that additional process steps may be performed before, during, or after the steps 705-735 in FIG. 29. For example, the method 700 may further include a step of decreasing, before the determining, the audio output volume of the mobile computing device in response to a user request. In some embodiments, the step of decreasing the audio output volume of the mobile computing device comprises muting the mobile computing device. In some embodiments, the step of automatically increasing the audio output volume comprises overriding the muting of the mobile computing device.
  • FIG. 30 illustrates another scenario in which the contextually-aware mobile computing device 100 performs automatic volume control. Referring now to FIG. 30, the user 150 is at a facility 800. In the embodiment illustrated in FIG. 30, the facility 800 is a church (and it is hereinafter referred to as the church 800), but it is understood that the facility 800 could be any other type of business, establishment, building, etc., in alternative embodiments. Suppose that the user 150 normally keeps the smartphone 670 (an example embodiment of the mobile computing device 100) on a relatively loud volume, but he/she would prefer to keep the smartphone 670 in a silent type mode when he/she is inside the church 800. The silent mode may be either a “vibrate” mode where the smartphone 670 vibrates instead of rings for incoming calls/messages/alerts, or a “mute” mode where the smartphone 670 mutes all sounds and does not vibrate either. Previously, the user 150 would have to remember to actively set the smartphone 670 in the silent mode when he/she is inside the church 800, and then remember to take the smartphone 670 out of the silent mode after leaving the church 800. Problems often arise when the user 150 either forgets to put the smartphone 670 in the silent mode while in the church 800 (e.g., a loud phone call during a quiet church service), or forgets to take the smartphone 670 out of the silent mode after leaving the church 800 (e.g., missing potential phone calls/emails/alerts/messages due to the smartphone 670 being silent).
  • To address these issues, the contextually-aware smartphone 670 performs automatic volume control based on the detected location of the smartphone 670. For example, in some embodiments, the smartphone 670 detects, via Global Positioning System (GPS) sensors on the smartphone 670, that the user 150 always puts the smartphone 670 in a silent mode at the GPS coordinates corresponding to the church 800. If this pattern is repeated frequently, the smartphone 670 may determine that the user 150 always intends to put the smartphone 670 in the silent mode while he/she is at these GPS coordinates (i.e., while the user 150 is at the church 800). The next time the smartphone 670 detects that it is at, or very close to, these GPS coordinates corresponding to the church 800, it will record the audio output volume settings of the smartphone 670 (e.g., at 80% of the maximum volume) and then automatically set the smartphone 670 in the silent mode. When the smartphone 670 detects that it is no longer at those GPS coordinates (i.e., the user 150 has left the church 800), the smartphone 670 will take itself out of the silent mode and restore the volume to the recorded audio output volume settings (e.g., back to 80% of the maximum volume). Therefore, the user 150 need not remember to adjust the smartphone 670's volume settings each time he/she goes to, and leaves, the church 800.
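  • One possible reading of this pattern-detection embodiment is sketched below: the device counts manual silencing events near rounded GPS coordinates and flags a spot for automatic silencing after repeated occurrences. The ~100 m rounding granularity and the three-repetition threshold are illustrative assumptions.

```kotlin
// Illustrative sketch of the pattern-detection embodiment: count manual
// silencing events near rounded GPS coordinates. The rounding granularity
// (~100 m) and the repeat threshold of 3 are assumptions.
class SilencePatternLearner(private val repeatThreshold: Int = 3) {
    private val counts = mutableMapOf<Pair<Double, Double>, Int>()

    // Round coordinates so nearby fixes fall into the same bucket.
    private fun key(lat: Double, lon: Double): Pair<Double, Double> =
        Pair(Math.round(lat * 1000.0) / 1000.0, Math.round(lon * 1000.0) / 1000.0)

    // Called whenever the user manually puts the phone in silent mode.
    fun recordManualSilence(lat: Double, lon: Double) {
        val k = key(lat, lon)
        counts[k] = (counts[k] ?: 0) + 1
    }

    // True once the pattern has repeated often enough at this spot.
    fun shouldAutoSilence(lat: Double, lon: Double): Boolean =
        (counts[key(lat, lon)] ?: 0) >= repeatThreshold
}
```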
  • In some other embodiments, instead of relying on the user 150's behavioral patterns to infer what the user 150 intends to do, the smartphone 670 is configured to receive a request from the user 150 to “remember” the present location (i.e., the GPS coordinates corresponding to the location of the church 800). The request also specifies that the smartphone 670 should be set in a silent mode when the smartphone 670 is at this location. In other words, the user 150 lets the smartphone 670 “know” at which location it should be silent. In response to this request, the smartphone 670 records the location of the church 800, for example through its GPS coordinates, and saves it into an electronic database, which could be maintained either locally on the smartphone 670 itself or in a remote database. Again, the smartphone 670 will record the audio output volume settings of the smartphone 670 before it is put in the silent mode. Thereafter, the next time the user 150 visits the church 800, the smartphone 670 will automatically put itself in the silent mode as long as the user 150 is in the church 800, and it will take itself out of the silent mode and restore the original audio output volume when the user 150 leaves the church 800.
  • In some embodiments, the smartphone 670 will also gather time-related information when the smartphone 670 is put into the silent mode by the user 150. The time-related information may include the time of the day, the day of the week, or the day or week of the month, etc. By gathering time-related information, the smartphone 670 may be able to determine whether there is a pattern associated with when the user 150 wants to put the smartphone 670 into the silent mode. For example, if the user 150 has consistently put the smartphone 670 into the silent mode every Sunday morning from 9 AM to 11 AM, then this information may be used later to assist with the automatic volume control. In some embodiments, even if the GPS sensors are normally inactive or disabled, the smartphone 670 may activate the GPS sensors between about 9 AM and 11 AM on Sundays to detect the location of the smartphone 670 (and thus the user 150). If the GPS sensors detect that the smartphone 670 is in the church 800 during this time frame, then the smartphone 670 may perform the automatic volume adjustment discussed above (i.e., silencing the smartphone 670 while at the church 800 and restoring the original volume after leaving the church 800).
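  • A minimal sketch of this time-pattern refinement follows, assuming a simple (day-of-week, hour) bucketing and an illustrative three-occurrence threshold; a real implementation would likely use more robust pattern mining.

```kotlin
// Illustrative sketch of the time-pattern refinement, bucketing manual
// silencing events by (day of week, hour) and waking the GPS sensor only
// inside a slot seen at least three times (threshold assumed).
import java.time.DayOfWeek
import java.time.LocalDateTime

class TimeSlotLearner(private val repeatThreshold: Int = 3) {
    private val events = mutableListOf<Pair<DayOfWeek, Int>>()

    // Called whenever the user manually silences the phone.
    fun recordManualSilence(t: LocalDateTime) {
        events += Pair(t.dayOfWeek, t.hour)
    }

    // E.g., true on Sundays between 9 AM and 11 AM if the user has
    // repeatedly silenced the phone in those hours.
    fun shouldActivateGps(now: LocalDateTime): Boolean =
        events.count { it == Pair(now.dayOfWeek, now.hour) } >= repeatThreshold
}
```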
  • The user 150 also does not necessarily have to be physically at the church 800 to accomplish the automatic volume control discussed above. For example, referring now to FIG. 31, the user 150 may launch a map application 810 on the smartphone 670, or on another suitable computing device capable of launching the map application 810, with which the user 150 has an account. The user 150 may use the map application 810 to locate the church 800 and select it as a location where the user 150 would want the smartphone 670 to be put in the silent mode. The map application 810 can obtain the GPS coordinates of the church 800 and save them in a database of predetermined or predefined locations where the smartphone 670 should be silenced.
  • Thereafter, if the user 150 goes to the church 800, the smartphone 670 will detect (e.g., via the GPS sensors) that it has arrived at one of the locations where the smartphone 670 should be put into the silent mode. The smartphone 670 will thus put itself in the silent mode without requiring user intervention and remain silent as long as the user 150 (and thus the smartphone 670 itself) is still at the church 800. When the user 150 leaves the church 800, the smartphone 670 will detect the departure from the church 800 and will then restore the audio output volume to its original setting (which was saved before the smartphone 670 went into the silent mode). In this manner, the user 150 need not actually be at a particular location in order to instruct the smartphone 670 that it should be in the silent mode while at that location, and not in the silent mode when it is no longer at that location.
  • FIG. 32 illustrates another embodiment of choosing one or more predetermined locations where the smartphone 670 should be silenced. In the example in FIG. 32, the smartphone 670 displays a list of different types or categories of businesses or establishments. The user 150 is prompted to choose one or more of these types of businesses or establishments where he/she would want the smartphone 670 to be in the silent mode. In the present example, the user 150 has selected “Churches”, “Doctor's Offices”, and “Movie Theaters” as places where the smartphone 670 should be kept silent. Thereafter, using the map application 810 in FIG. 31 or another suitable application, the smartphone 670 can obtain a list of churches, doctor's offices, and movie theaters in the user's city (or a plurality of cities that the user 150 frequently visits). The smartphone 670 then retrieves the GPS coordinates (or other suitable positional information) for each church, doctor's office, and movie theater in the obtained list. These GPS coordinates are then saved in a database similar to the ones discussed above. Again, when the user 150 visits any of these places in the future, the smartphone 670 will detect the location and put itself in the silent mode. When the user 150 leaves these places, the smartphone 670 will restore its original volume settings.
  • In some alternative embodiments, the smartphone 670 need not save the GPS coordinates of the churches, doctor's offices, or movie theaters into a database in advance. Instead, when the user 150 visits a place, the smartphone 670 will look up the place being visited to see if it belongs to one of the categories selected by the user 150. The look-up may be done by retrieving the GPS coordinates of the place being visited, by telecommunications with other devices, or by cell-tower triangulation, etc. Again, once a match is found, the automatic volume adjustment discussed above may be performed, as sketched below.
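  • A minimal sketch of this look-up variant follows. The PlaceResolver interface is a hypothetical stand-in for whatever map or places service performs the category look-up; no particular API is implied.

```kotlin
// Illustrative sketch of the look-up variant. PlaceResolver is a
// hypothetical stand-in for a map/places service; no real API is implied.
interface PlaceResolver {
    // Returns the category of the establishment at the given coordinates,
    // e.g. "church", or null if unknown.
    fun categoryAt(lat: Double, lon: Double): String?
}

class CategorySilencer(
    private val resolver: PlaceResolver,
    // Categories the user selected in FIG. 32, e.g.
    // setOf("church", "doctor's office", "movie theater").
    private val silencedCategories: Set<String>
) {
    // True if the place being visited belongs to a selected category.
    fun shouldSilenceAt(lat: Double, lon: Double): Boolean =
        resolver.categoryAt(lat, lon)?.let { it in silencedCategories } ?: false
}
```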
  • FIG. 33 is a simplified flowchart illustrating a method 900 for operating the contextually-aware mobile computing device discussed above. In some embodiments, the mobile computing device includes a mobile telephone, a tablet computer, a laptop computer, a smart watch, or a smart glass.
  • The method 900 includes a step 905 of receiving a request from a user of the mobile computing device. The request specifies that the mobile computing device should be set in a mute mode or in a vibrate mode when the mobile computing device is at a predetermined location. The method includes a step 910 of detecting an arrival of the mobile computing device at the predetermined location after the step 905. The method includes a step 915 of recording, in response to the step of detecting the arrival in step 910, an audio output volume setting of the mobile computing device. The method includes a step 920 of setting, in response to the detecting the arrival and after the recording, the mobile computing device in the mute mode or the vibrate mode. The method includes a step 925 of detecting a departure of the mobile computing device from the predetermined location after the step 920. The method includes a step 930 of restoring, in response to the step of detecting the departure in step 925, the mobile computing device to the recorded audio output volume setting.
  • In some embodiments, the step 905 of receiving the request is performed while the mobile computing device is at the predetermined location.
  • In some embodiments, the step 905 of receiving the request is performed while the mobile computing device is located remotely from the predetermined location. In these embodiments, the method 900 may further include a step of identifying, in response to user input and before the receiving the request, the predetermined location from an electronic map.
  • In some embodiments, the step 905 of receiving the request is performed such that a facility at the predetermined location belongs to one of the following categories: a church, a movie theatre, and a doctor's office.
  • In some embodiments, the steps 910 and 925 of detecting the arrival and the detecting the departure each include a step of determining a location of the mobile computing device via a Global Positioning System sensor.
  • It is understood that, unless otherwise specified, the steps 905-930 of the method 900 are not necessarily performed in numerical order. It is also understood that additional process steps may be performed before, during, or after the steps 905-930 in FIG. 33. For reasons of simplicity, these additional process steps are not discussed in detail herein. A minimal illustrative sketch of the steps 905-930 follows.
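  • The sketch below models the steps 905-930 as a small arrival/departure state machine. The RingerMode enum, the 100 m arrival radius, and the distance input are illustrative assumptions; on an actual device the distance could come from successive GPS fixes (e.g., Android's Location.distanceTo), and the mode changes would go through the platform's audio settings.

```kotlin
// Illustrative sketch of method 900 (steps 905-930) as a state machine.
// Constants and the distance input are assumptions, not disclosure values.
enum class RingerMode { NORMAL, VIBRATE, MUTE }

class LocationMuteController(
    private val requestedMode: RingerMode,    // MUTE or VIBRATE, per step 905
    private val radiusMeters: Double = 100.0  // arrival radius (illustrative)
) {
    var mode: RingerMode = RingerMode.NORMAL
    var volume: Int = 8                 // stand-in for the device volume setting
    private var savedVolume: Int? = null
    private var inside = false

    // Feed each new GPS fix as a distance to the predetermined location.
    fun onLocationUpdate(distanceMeters: Double) {
        val nowInside = distanceMeters <= radiusMeters
        if (nowInside && !inside) {          // step 910: arrival detected
            savedVolume = volume             // step 915: record the setting
            mode = requestedMode             // step 920: mute or vibrate
        } else if (!nowInside && inside) {   // step 925: departure detected
            mode = RingerMode.NORMAL         // step 930: restore
            savedVolume?.let { volume = it }
            savedVolume = null
        }
        inside = nowInside
    }
}
```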
  • FIG. 34 is a simplified block diagram of an electronic device 1300 according to the various aspects of the present disclosure. The electronic device 1300 may be implemented as an embodiment of the mobile computing device 100 discussed above.
  • The electronic device 1300 includes a telecommunications module 1310. The telecommunications module 1310 contains various electronic circuitry components configured to conduct telecommunications with one or more external devices. The electronic circuitry components allow the telecommunications module 1310 to conduct telecommunications using one or more wired or wireless telecommunications protocols, such as IEEE 802.11 (WiFi), IEEE 802.15 (Bluetooth), GSM, CDMA, LTE, WIMAX, DLNA, HDMI, etc. In some embodiments, the telecommunications module 1310 includes antennas, filters, low-noise amplifiers, digital-to-analog converters (DACs), analog-to-digital converters (ADCs), and transceivers. The transceivers may further include circuitry components such as mixers, amplifiers, oscillators, phase-locked loops (PLLs), and/or filters. Some of these electronic circuitry components may be integrated into a single discrete device or an integrated circuit (IC) chip.
  • The electronic device 1300 may include a computer memory storage module 1320. The memory storage module 1320 may contain various forms of digital memory, such as hard disks, FLASH, SRAM, DRAM, ROM, EPROM, memory chips or cartridges, etc. Computer programming code may be permanently or temporarily stored in the memory storage module 1320, for example. In some embodiments, the computer memory storage module 1320 may include a cache memory where files can be temporarily stored.
  • The electronic device 1300 may also include a computer processing module 1330. The computer processing module 1330 may contain one or more central processing units (CPUs), graphics processing units (GPUs), or digital signal processors (DSPs), which may each be implemented using various digital circuit blocks (including logic gates such as AND, OR, NAND, NOR, and XOR gates, etc.) along with certain software code. The computer processing module 1330 may be used to execute the computer programming code stored in the memory storage module 1320.
  • The electronic device 1300 may also include an input/output module 1340, which may serve as a communications interface for the electronic device 1300. In some embodiments, the input/output module 1340 may include one or more touch-sensitive screens, physical and/or virtual buttons (such as power and volume buttons) on or off the touch-sensitive screen, physical and/or virtual keyboards, mice, trackballs, speakers, microphones, light sensors, light-emitting diodes (LEDs), communications ports (such as USB or HDMI ports), joysticks, image-capture devices (for example, cameras), etc. In some embodiments, the touch-sensitive screen may be used to display the visual objects discussed above. The various features of the present disclosure discussed above may also be accomplished at least in part using the touch-sensitive screen and/or other components of the input/output module 1340. In alternative embodiments, a non-touch screen display may be implemented as a part of the input/output module 1340.
  • FIG. 35 is a simplified diagrammatic view of a system 1400 that may be used to perform certain aspects of the present disclosure discussed above. In some embodiments, the system 1400 may include an electronic device 1410. The electronic device 1410 may be implemented as an embodiment of the electronic device 1300 of FIG. 34. In some embodiments, the electronic device 1410 includes a tablet computer, a mobile telephone, a laptop, a smart watch, or a smart glass.
  • The system 1400 also includes a remote server 1420. The remote server 1420 may be implemented in a “cloud” computing environment and may include one or more databases that store files, for example the various files that can also be stored locally in the electronic device 1410 as discussed above.
  • The electronic device 1410 and the remote server 1420 may be communicatively coupled together through a network 1430. The network 1430 may include cellular towers, routers, switches, hubs, repeaters, storage units, cabling (such as fiber-optic cabling or telephone cabling), and other suitable devices. The network 1430 may be implemented using any of the suitable wired or wireless networking protocols. The electronic device 1410 and the remote server 1420 may also be able to communicate with other devices on the network 1430 and either carry out instructions received from the network, or send instructions through the network to these external devices to be carried out.
  • To facilitate user interaction with its offered services, a service provider (that hosts or operates the remote server 1420) may provide a user interface module 1440. The user interface module 1440 may include software programming code and may be installed on the electronic device 1410 (for example in a memory storage module). In some embodiments, the user interface module 1440 may include a downloadable “app”, for example an app that is downloadable through a suitable service such as APPLE's® ITUNES®, THE APP STORE® from APPLE®, ANDROID's® PLAY STORE®, AMAZON's® INSTANT VIDEO®, MICROSOFT's® WINDOWS STORE®, RESEARCH IN MOTION's® BLACKBERRY APP WORLD®, etc. In the embodiment shown, the user interface module 1440 includes an instance of the “app” that has been downloaded and installed on the electronic device 1410. The app may also be used to perform the various aspects of the present disclosure discussed above, such as with respect to face-unlock, voice-unlock, dynamically adjustable screen time-out, customizable lock screen, and/or the contextually-aware volume controls discussed above.
  • A user 1450 may interact with the system 1400 by sending instructions to the electronic device 1410 through the user interface module 1440. For example, the user 1450 may be a subscriber of the services offered by the service provider running/hosting/operating the remote server 1420. The user 1450 may attempt to log in to the remote server 1420 by launching the “app” of the user interface module 1440. The user's login credentials are electronically sent to the remote server 1420 through the network 1430. After verifying the user login credentials, the remote server 1420 may instruct the user interface module 1440 to display a suitable interface to interact with the user in a suitable manner.
  • One aspect of the present disclosure involves a mobile computing device. The mobile computing device includes: a computer memory storage module configured to store executable computer programming code; and a computer processor module operatively coupled to the computer memory storage module. The computer processor module is configured to execute the computer programming code to perform the following operations: receiving, from a user, a request to gain access to a mobile computing device; detecting, via the mobile computing device, an ambient lighting condition; comparing the detected ambient lighting condition with a predefined threshold; determining that the detected ambient lighting condition is below the predefined threshold; performing at least one of the following tasks in response to the determining: activating a front-facing light-emitting diode (LED) mechanism of the mobile computing device, or illuminating at least a portion of a screen of the mobile computing device; and performing, while the LED mechanism is activated or while the portion of the screen of the mobile computing device is illuminated, a face-unlock action on the mobile computing device to authenticate the user.
  • Another aspect of the present disclosure involves a method. The method includes: receiving, from a user, a request to gain access to a mobile computing device; detecting, via the mobile computing device, an ambient lighting condition; comparing the detected ambient lighting condition with a predefined threshold; determining that the detected ambient lighting condition is below the predefined threshold; performing at least one of the following tasks in response to the determining: activating a front-facing light-emitting diode (LED) mechanism of the mobile computing device, or illuminating at least a portion of a screen of the mobile computing device; and performing, while the LED mechanism is activated or while the portion of the screen of the mobile computing device is illuminated, a face-unlock action on the mobile computing device to authenticate the user.
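  • A minimal sketch of the low-light face-unlock flow recited in the two aspects above follows. The LightSensor, Screen, and FaceUnlock interfaces and the 10-lux threshold are hypothetical; the disclosure does not fix a particular threshold value.

```kotlin
// Illustrative sketch of the low-light face-unlock flow. The interfaces and
// the 10-lux threshold are assumptions, not values from the disclosure.
interface LightSensor { fun ambientLux(): Double }
interface Screen { fun illuminate(brightnessPercent: Int) }
interface FaceUnlock { fun authenticate(): Boolean }

fun tryFaceUnlock(
    sensor: LightSensor,
    screen: Screen,
    face: FaceUnlock,
    thresholdLux: Double = 10.0
): Boolean {
    // Detect the ambient lighting condition and compare with the threshold.
    if (sensor.ambientLux() < thresholdLux) {
        // Too dark for the camera: illuminate at least a portion of the
        // screen (activating a front-facing LED is the other recited option).
        screen.illuminate(100)
    }
    // Perform the face-unlock action while the user's face is lit.
    return face.authenticate()
}
```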
  • Yet another aspect of the present disclosure involves a mobile computing device. The mobile computing device includes: a computer memory storage module configured to store executable computer programming code; and a computer processor module operatively coupled to the computer memory storage module. The computer processor module is configured to execute the computer programming code to perform the following operations: engaging in a voice-based interaction with a user of the mobile computing device; recording a spoken phrase from the user during the voice-based interaction; selecting a segment of the spoken phrase as a password for authenticating the user; saving a recording of the segment of the spoken phrase as a recorded password in a database; receiving, from the user, a request to gain access to the mobile computing device; prompting, in response to the receiving the request, the user to speak the password to the mobile computing device; thereafter recording one or more words spoken by the user in response to the prompting; comparing the one or more words spoken by the user with the recorded password; and authenticating the user in response to the comparing.
  • One more aspect of the present disclosure involves a method. The method includes: engaging in a voice-based interaction with a user of a mobile computing device; recording a spoken phrase from the user during the voice-based interaction; selecting a segment of the spoken phrase as a password for authenticating the user; saving a recording of the segment of the spoken phrase as a recorded password in a database; receiving, from the user, a request to gain access to the mobile computing device; prompting, in response to the receiving the request, the user to speak the password to the mobile computing device; thereafter recording one or more words spoken by the user in response to the prompting; comparing the one or more words spoken by the user with the recorded password; and authenticating the user in response to the comparing.
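  • A minimal sketch of the voice-password aspects above follows, assuming the recognizer has already converted speech to words; the exact-match comparison is a placeholder for real speaker/phrase matching, which would be far more tolerant.

```kotlin
// Illustrative sketch of voice-password enrollment and checking. The word
// lists and exact comparison are simplifying assumptions.
class VoicePassword {
    private var recordedPassword: String? = null

    // Enrollment: pick a segment of a phrase spoken during a normal
    // voice-based interaction and save it as the password.
    fun enroll(spokenPhrase: List<String>, segmentStart: Int, segmentLength: Int) {
        recordedPassword = spokenPhrase
            .drop(segmentStart)
            .take(segmentLength)
            .joinToString(" ")
    }

    // Authentication: compare the words spoken at the prompt with the
    // recorded password.
    fun authenticate(spokenAtPrompt: List<String>): Boolean =
        recordedPassword != null &&
            spokenAtPrompt.joinToString(" ").equals(recordedPassword, ignoreCase = true)
}
```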
  • Yet one more aspect of the present disclosure involves a mobile computing device. The mobile computing device includes: a computer memory storage module configured to store executable computer programming code; and a computer processor module operatively coupled to the computer memory storage module. The computer processor module is configured to execute the computer programming code to perform the following operations: receiving a default screen time-out period for a mobile computing device, the screen time-out period specifying an amount of time that passes without the mobile computing device receiving any touch input from a user before a screen of the mobile computing device turns off; displaying content on the screen of the mobile computing device; dimming the screen after the content has been displayed, without receiving any touch input from the user, for an amount of time shorter than the default screen time-out period by X number of seconds; thereafter receiving a touch input from the user; un-dimming the screen in response to receiving the touch input; and increasing, in response to receiving the touch input, the default screen time-out period to a first new screen time-out period, the first new screen time-out period being longer than the default screen time-out period.
  • One more aspect of the present disclosure involves a method. The method includes: receiving a default screen time-out period for a mobile computing device, the screen time-out period specifying an amount of time that passes without the mobile computing device receiving any touch input from a user before a screen of the mobile computing device turns off; displaying content on the screen of the mobile computing device; dimming the screen after the content has been displayed, without receiving any touch input from the user, for an amount of time shorter than the default screen time-out period by X number of seconds; thereafter receiving a touch input from the user; un-dimming the screen in response to receiving the touch input; and increasing, in response to receiving the touch input, the default screen time-out period to a first new screen time-out period, the first new screen time-out period being longer than the default screen time-out period.
  • Yet one more aspect of the present disclosure involves a mobile computing device. The mobile computing device includes: a computer memory storage module configured to store executable computer programming code; and a computer processor module operatively coupled to the computer memory storage module. The computer processor module is configured to execute the computer programming code to perform the following operations: setting a screen time-out period for a mobile computing device, the screen time-out period specifying an amount of time that passes without the mobile computing device receiving any touch input from a user before a screen of the mobile computing device turns off; displaying content on the screen of the mobile computing device; monitoring a frequency of touch inputs from the user while the content is being displayed; and adjusting the screen time-out period in response to the monitoring.
  • One more aspect of the present disclosure involves a method. The method includes: setting a screen time-out period for a mobile computing device, the screen time-out period specifying an amount of time that passes without the mobile computing device receiving any touch input from a user before a screen of the mobile computing device turns off; displaying content on the screen of the mobile computing device; monitoring a frequency of touch inputs from the user while the content is being displayed; and adjusting the screen time-out period in response to the monitoring.
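  • The sketch below combines the two screen time-out aspects above: lengthening the time-out when the user touches a dimmed screen, and tuning it from the observed touch frequency. All constants (30 s default, 5 s dim lead, 15 s increment, 1 touch/min cutoff) are illustrative assumptions.

```kotlin
// Illustrative sketch of the adaptive screen time-out aspects. All numeric
// constants are assumptions, not values taken from the disclosure.
class AdaptiveScreenTimeout(private var timeoutSeconds: Int = 30) {
    private val dimLeadSeconds = 5 // dim X seconds before the time-out

    // Seconds of inactivity after which the screen dims (but stays on).
    fun dimDeadlineSeconds(): Int = timeoutSeconds - dimLeadSeconds

    // If the user touches the screen while it is dimmed, the current
    // time-out was evidently too short: un-dim and lengthen it.
    fun onTouchWhileDimmed() {
        timeoutSeconds += 15
    }

    // Frequency-based variant: frequent touches suggest active use, so a
    // short time-out suffices; rare touches suggest passive reading.
    fun adjustForTouchFrequency(touchesPerMinute: Double) {
        timeoutSeconds = if (touchesPerMinute < 1.0) 60 else 30
    }
}
```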
  • Yet one more aspect of the present disclosure involves a mobile computing device. The mobile computing device includes: a computer memory storage module configured to store executable computer programming code; and a computer processor module operatively coupled to the computer memory storage module. The computer processor module is configured to execute the computer programming code to perform the following operations: associating a plurality of tasks executable by a mobile computing device with a plurality of predefined symbols, respectively; turning on a touch-sensitive display of the mobile computing device in response to a request from a user, the display showing a lock screen; detecting, through the touch-sensitive display, a gesture input from the user, the gesture input representing one of the predefined symbols that has been associated with one of the tasks; and executing the task associated with the detected symbol from the gesture input.
  • One more aspect of the present disclosure involves a method. The method includes: associating a plurality of tasks executable by a mobile computing device with a plurality of predefined symbols, respectively; turning on a touch-sensitive display of the mobile computing device in response to a request from a user, the display showing a lock screen; detecting, through the touch-sensitive display, a gesture input from the user, the gesture input representing one of the predefined symbols that has been associated with one of the tasks; and executing the task associated with the detected symbol from the gesture input.
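  • A minimal sketch of the lock-screen symbol aspects above: predefined symbols map to executable tasks, and a recognized gesture runs the matching task. Gesture recognition itself is out of scope here and is represented only by the symbol string a recognizer would output.

```kotlin
// Illustrative sketch of the lock-screen symbol feature. Symbol strings
// stand in for whatever a real gesture recognizer would produce.
class LockScreenSymbols {
    private val tasks = mutableMapOf<String, () -> Unit>()

    // Associate a predefined symbol with an executable task.
    fun associate(symbol: String, task: () -> Unit) {
        tasks[symbol] = task
    }

    // Called when the touch-sensitive display recognizes a gesture as one
    // of the predefined symbols; executes the associated task, if any.
    fun onGesture(symbol: String) {
        tasks[symbol]?.invoke()
    }
}

// Usage: drawing a "C" on the lock screen could launch the camera, e.g.
// LockScreenSymbols().apply { associate("C") { println("launch camera") } }
```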
  • Yet one more aspect of the present disclosure involves a mobile computing device. The mobile computing device includes: a computer memory storage module configured to store executable computer programming code; and a computer processor module operatively coupled to the computer memory storage module. The computer processor module is configured to execute the computer programming code to perform the following operations: determining that a mobile computing device has been placed in a vehicle, the mobile computing device having a programmable audio output volume; and in response to the determining, automatically increasing the audio output volume of the mobile computing device to a predefined audio output volume.
  • One more aspect of the present disclosure involves a method. The method includes: determining that a mobile computing device has been placed in a vehicle, the mobile computing device having a programmable audio output volume; and in response to the determining, automatically increasing the audio output volume of the mobile computing device to a predefined audio output volume.
  • Yet one more aspect of the present disclosure involves a mobile computing device. The mobile computing device includes: a computer memory storage module configured to store executable computer programming code; and a computer processor module operatively coupled to the computer memory storage module. The computer processor module is configured to execute the computer programming code to perform the following operations: receiving a request from a user of the mobile computing device, the request specifying that the mobile computing device should be set in a mute mode or in a vibrate mode when the mobile computing device is at a predetermined location; thereafter detecting an arrival of the mobile computing device at the predetermined location; recording, in response to the detecting the arrival, an audio output volume setting of the mobile computing device; setting, in response to the detecting the arrival and after the recording, the mobile computing device in the mute mode or the vibrate mode; thereafter detecting a departure of the mobile computing device from the predetermined location; and restoring, in response to the detecting the departure, the mobile computing device to the recorded audio output volume setting.
  • One more aspect of the present disclosure involves a method. The method includes: receiving a request from a user of a mobile computing device, the request specifying that the mobile computing device should be set in a mute mode or in a vibrate mode when the mobile computing device is at a predetermined location; thereafter detecting an arrival of the mobile computing device at the predetermined location; recording, in response to the detecting the arrival, an audio output volume setting of the mobile computing device; setting, in response to the detecting the arrival and after the recording, the mobile computing device in the mute mode or the vibrate mode; thereafter detecting a departure of the mobile computing device from the predetermined location; and restoring, in response to the detecting the departure, the mobile computing device to the recorded audio output volume setting.
  • It should be appreciated that like reference numerals in the present disclosure are used to identify like elements illustrated in one or more of the figures, wherein these labeled figures are for purposes of illustrating embodiments of the present disclosure and not for purposes of limiting the same.
  • The foregoing disclosure is not intended to limit the present disclosure to the precise forms or particular fields of use disclosed. As such, it is contemplated that various alternate embodiments and/or modifications to the present disclosure, whether explicitly described or implied herein, are possible in light of the disclosure. Having thus described embodiments of the present disclosure, persons of ordinary skill in the art will recognize that changes may be made in form and detail without departing from the scope of the present disclosure. Thus, the present disclosure is limited only by the claims.

Claims (20)

What is claimed is:
1. A mobile computing device, comprising:
an electronic memory storing executable instructions; and
one or more electronic processors configured to execute the instructions stored in the electronic memory to perform the following steps:
receiving a request from a user of the mobile computing device, the request specifying that the mobile computing device should be set in a mute mode or in a vibrate mode when the mobile computing device is at a predetermined location;
thereafter detecting an arrival of the mobile computing device at the predetermined location;
recording, in response to the detecting the arrival, an audio output volume setting of the mobile computing device;
setting, in response to the detecting the arrival and after the recording, the mobile computing device in the mute mode or the vibrate mode;
thereafter detecting a departure of the mobile computing device from the predetermined location; and
restoring, in response to the detecting the departure, the mobile computing device to the recorded audio output volume setting.
2. The mobile computing device of claim 1, wherein the receiving is performed while the mobile computing device is at the predetermined location.
3. The mobile computing device of claim 1, wherein the receiving is performed while the mobile computing device is located remotely from the predetermined location.
4. The mobile computing device of claim 3, wherein the steps further comprise: identifying, in response to user input and before the receiving the request, the predetermined location from an electronic map.
5. The mobile computing device of claim 1, wherein the receiving is performed such that a facility at the predetermined location belongs to one of the following categories: a church, a movie theatre, and a doctor's office.
6. The mobile computing device of claim 1, wherein the mobile computing device further comprises a Global Positioning System (GPS) sensor, and wherein the detecting the arrival and the detecting the departure each comprise determining a location of the mobile computing device via the GPS sensor.
7. The mobile computing device of claim 6, wherein the steps further comprise:
analyzing a behavior pattern of the user to determine at least one time slot when the user sets the mobile computing device in the mute mode or the vibrate mode; and
thereafter activating the GPS sensor during the at least one time slot.
8. A non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising:
receiving a request from a user of a mobile computing device, the request specifying that the mobile computing device should be set in a mute mode or in a vibrate mode when the mobile computing device is at a predetermined location;
thereafter detecting an arrival of the mobile computing device at the predetermined location;
recording, in response to the detecting the arrival, an audio output volume setting of the mobile computing device;
setting, in response to the detecting the arrival and after the recording, the mobile computing device in the mute mode or the vibrate mode;
thereafter detecting a departure of the mobile computing device from the predetermined location; and
restoring, in response to the detecting the departure, the mobile computing device to the recorded audio output volume setting.
9. The non-transitory machine-readable medium of claim 8, wherein the receiving is performed while the mobile computing device is at the predetermined location.
10. The non-transitory machine-readable medium of claim 8, wherein the receiving is performed while the mobile computing device is located remotely from the predetermined location.
11. The non-transitory machine-readable medium of claim 10, wherein the steps further comprise: identifying, in response to user input and before the receiving the request, the predetermined location from an electronic map.
12. The non-transitory machine-readable medium of claim 8, wherein the receiving is performed such that a facility at the predetermined location belongs to one of the following categories: a church, a movie theatre, and a doctor's office.
13. The non-transitory machine-readable medium of claim 8, wherein the mobile computing device further comprises a Global Positioning System (GPS) sensor, and wherein the detecting the arrival and the detecting the departure each comprise determining a location of the mobile computing device via the GPS sensor.
14. The non-transitory machine-readable medium of claim 13, wherein the steps further comprise:
analyzing a behavior pattern of the user to determine at least one time slot when the user sets the mobile computing device in the mute mode or the vibrate mode; and
thereafter activating the GPS sensor during the at least one time slot.
15. A method, comprising:
receiving a request from a user of a mobile computing device, the request specifying that the mobile computing device should be set in a mute mode or in a vibrate mode when the mobile computing device is at a predetermined location;
thereafter detecting an arrival of the mobile computing device at the predetermined location;
recording, in response to the detecting the arrival, an audio output volume setting of the mobile computing device;
setting, in response to the detecting the arrival and after the recording, the mobile computing device in the mute mode or the vibrate mode;
thereafter detecting a departure of the mobile computing device from the predetermined location; and
restoring, in response to the detecting the departure, the mobile computing device to the recorded audio output volume setting.
16. The method of claim 15, wherein the receiving is performed while the mobile computing device is at the predetermined location.
17. The method of claim 15, wherein the receiving is performed while the mobile computing device is located remotely from the predetermined location.
18. The method of claim 17, further comprising: identifying, in response to user input and before the receiving the request, the predetermined location from an electronic map.
19. The method of claim 15, wherein the receiving is performed such that a facility at the predetermined location belongs to one of the following categories: a church, a movie theatre, and a doctor's office.
20. The method of claim 15, wherein the detecting the arrival and the detecting the departure each comprise determining a location of the mobile computing device via a Global Positioning System sensor.
US14/886,044 2013-07-03 2015-10-17 Automatic Volume Control Based on Context and Location Abandoned US20170187866A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/886,044 US20170187866A1 (en) 2015-10-17 2015-10-17 Automatic Volume Control Based on Context and Location
US15/809,637 US10237396B2 (en) 2013-07-03 2017-11-10 Launching applications from a lock screen of a mobile computing device via user-defined symbols

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/886,044 US20170187866A1 (en) 2015-10-17 2015-10-17 Automatic Volume Control Based on Context and Location

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/935,034 Division US20150011195A1 (en) 2013-07-03 2013-07-03 Automatic volume control based on context and location

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/809,637 Continuation US10237396B2 (en) 2013-07-03 2017-11-10 Launching applications from a lock screen of a mobile computing device via user-defined symbols

Publications (1)

Publication Number Publication Date
US20170187866A1 true US20170187866A1 (en) 2017-06-29

Family

ID=59086940

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/886,044 Abandoned US20170187866A1 (en) 2013-07-03 2015-10-17 Automatic Volume Control Based on Context and Location
US15/809,637 Active US10237396B2 (en) 2013-07-03 2017-11-10 Launching applications from a lock screen of a mobile computing device via user-defined symbols

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/809,637 Active US10237396B2 (en) 2013-07-03 2017-11-10 Launching applications from a lock screen of a mobile computing device via user-defined symbols

Country Status (1)

Country Link
US (2) US20170187866A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108521521B (en) * 2018-04-19 2021-04-02 Oppo广东移动通信有限公司 Volume adjusting method, mobile terminal and computer readable storage medium
CN110392298B (en) * 2018-04-23 2021-09-28 腾讯科技(深圳)有限公司 Volume adjusting method, device, equipment and medium
US11907342B2 (en) * 2020-11-20 2024-02-20 Qualcomm Incorporated Selection of authentication function according to environment of user device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7365737B2 (en) * 2004-03-23 2008-04-29 Fujitsu Limited Non-uniform gesture precision
KR20120024247A (en) * 2010-09-06 2012-03-14 삼성전자주식회사 Method for operating a mobile device by recognizing a user gesture and the mobile device thereof
KR101873741B1 (en) * 2011-10-26 2018-07-03 엘지전자 주식회사 Mobile terminal and method for controlling the same
US20130283199A1 (en) * 2012-04-24 2013-10-24 Microsoft Corporation Access to an Application Directly from a Lock Screen
US20140215496A1 (en) * 2013-01-29 2014-07-31 Nexovation, Inc. Device including a plurality of functionalities, and method of operating the device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060063563A1 (en) * 2004-09-18 2006-03-23 Kaufman Richard D Cell phone system with automatic ringer/vibrate/silent/operating mode settings based on entering/exiting public areas and theaters
US20100234038A1 (en) * 2004-11-08 2010-09-16 Thandu Balasubramaniam K Intelligent Utilization of Resources in Mobile Devices
US20090036100A1 (en) * 2007-08-01 2009-02-05 Samsung Electronics Co., Ltd. Mobile communication terminal having touch screen and method for locking and inlocking the terminal
US20120172027A1 (en) * 2011-01-03 2012-07-05 Mani Partheesh Use of geofences for location-based activation and control of services
US20160150096A1 (en) * 2014-05-19 2016-05-26 Mediatek Inc. Method and mobile communication device for changing setting of mobile communication device

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170180558A1 (en) * 2015-12-22 2017-06-22 Hong Li Technologies for dynamic audio communication adjustment
US10142483B2 (en) * 2015-12-22 2018-11-27 Intel Corporation Technologies for dynamic audio communication adjustment
US10635800B2 (en) * 2016-06-07 2020-04-28 Vocalzoom Systems Ltd. System, device, and method of voice-based user authentication utilizing a challenge
US20180232511A1 (en) * 2016-06-07 2018-08-16 Vocalzoom Systems Ltd. System, device, and method of voice-based user authentication utilizing a challenge
US10810287B2 (en) * 2016-07-19 2020-10-20 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for unlocking terminal screen
US20190050544A1 (en) * 2016-07-19 2019-02-14 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for unlocking terminal screen
US20180290591A1 (en) * 2017-04-05 2018-10-11 Truemotion, Inc. Device-based systems and methods for detecting device usage
US10214144B2 (en) * 2017-04-05 2019-02-26 Truemotion, Inc. Device-based systems and methods for detecting device usage
US11338733B2 (en) 2017-04-05 2022-05-24 Cambridge Mobile Telematics Inc. Device-based systems and methods for detecting screen state and device movement
US11833963B2 (en) 2017-04-05 2023-12-05 Cambridge Mobile Telematics Inc. Device-based systems and methods for detecting screen state and device movement
GB2567959B (en) * 2017-09-14 2020-08-12 Lenovo Singapore Pte Ltd Dynamically changing sound settings of a device
GB2567959A (en) * 2017-09-14 2019-05-01 Lenovo Singapore Pte Ltd Dynamically changing sound settings of a device
CN107679473A (en) * 2017-09-22 2018-02-09 广东欧珀移动通信有限公司 Solve lock control method and Related product
US20210182017A1 (en) * 2017-12-05 2021-06-17 Samsung Electronics Co., Ltd. Display apparatus and audio outputting method
US11494162B2 (en) * 2017-12-05 2022-11-08 Samsung Electronics Co., Ltd. Display apparatus and audio outputting method
US11869504B2 (en) * 2019-07-17 2024-01-09 Google Llc Systems and methods to verify trigger keywords in acoustic-based digital assistant applications
US20220122608A1 (en) * 2019-07-17 2022-04-21 Google Llc Systems and methods to verify trigger keywords in acoustic-based digital assistant applications
CN115552875A (en) * 2020-05-28 2022-12-30 美光科技公司 Managing user interface parameters for a mobile device
WO2021242495A1 (en) * 2020-05-28 2021-12-02 Micron Technology, Inc. Managing user-interface parameter for mobile device
US11178267B1 (en) 2020-06-03 2021-11-16 Micron Technology, Inc. Managing accessibility features for mobile device
WO2021247226A1 (en) * 2020-06-03 2021-12-09 Micron Technology, Inc. Managing accessibility features for mobile device
CN114765644A (en) * 2021-01-12 2022-07-19 Oppo广东移动通信有限公司 Volume control method and device, electronic equipment and storage medium
US20220253262A1 (en) * 2021-02-11 2022-08-11 Nokia Technologies Oy Apparatus, a method and a computer program for rotating displayed visual information
US20220408268A1 (en) * 2021-06-18 2022-12-22 Google Llc Resource connectivity for multiple devices
US20230045909A1 (en) * 2021-08-12 2023-02-16 Google Llc Personalized Application Configuration As A Service
US11726765B2 (en) * 2021-08-12 2023-08-15 Google Llc Personalized application configuration as a service
WO2023024565A1 (en) * 2021-08-27 2023-03-02 荣耀终端有限公司 Method for switching sound mode, and electronic device

Also Published As

Publication number Publication date
US10237396B2 (en) 2019-03-19
US20180091644A1 (en) 2018-03-29

Similar Documents

Publication Publication Date Title
US10237396B2 (en) Launching applications from a lock screen of a mobile computing device via user-defined symbols
US20150011195A1 (en) Automatic volume control based on context and location
US11727093B2 (en) Setting and terminating restricted mode operation on electronic devices
US11093659B2 (en) Controlling content visibility on a computing device based on wearable device proximity
US11683408B2 (en) Methods and interfaces for home media control
JP6711916B2 (en) How to limit the use of applications and terminals
US8909297B2 (en) Access management
US9916431B2 (en) Context-based access verification
US9867035B2 (en) System and method for determining compromised driving
US20130104187A1 (en) Context-dependent authentication
US11562051B2 (en) Varying computing device behavior for different authenticators
US11455411B2 (en) Controlling content visibility on a computing device based on computing device location
US10979896B2 (en) Managing dynamic lockouts on mobile computing devices
US20180225456A1 (en) Enhancing security of a mobile device based on location or proximity to another device
US9672337B2 (en) Dynamic authentication
WO2019196655A1 (en) Mode switching method and apparatus, and computer-readable storage medium, and terminal
US20180225457A1 (en) Enhancing security of a mobile device based on location or proximity to another device
US9858409B2 (en) Enhancing security of a mobile device using pre-authentication sequences
JP6900552B2 (en) How to limit the use of the application, and the terminal
US9430988B1 (en) Mobile device with low-emission mode
US10013537B1 (en) Varying the amount of time that a mobile device must be inactive before the mobile device re-locks access to a computerized resource
CN107180174B (en) Passcode for computing device
CN113692555B (en) Electronically controlled light transmission of a lens of a camera in variable illumination

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION