WO2024049509A1 - Generating mobile electronic device user-defined operation - Google Patents

Generating mobile electronic device user-defined operation

Info

Publication number
WO2024049509A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
electronic device
mobile electronic
action
performed operations
Prior art date
Application number
PCT/US2023/018815
Other languages
French (fr)
Inventor
Jiangsheng Yu
Xun Hu
Zonghuan Wu
Original Assignee
Futurewei Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Futurewei Technologies, Inc. filed Critical Futurewei Technologies, Inc.
Priority to PCT/US2023/018815 priority Critical patent/WO2024049509A1/en
Publication of WO2024049509A1 publication Critical patent/WO2024049509A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/72418 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting emergency services
    • H04M 1/72424 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting emergency services with manual activation of emergency-service functions
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/12 Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion

Definitions

  • the present disclosure is related to techniques associated with electronic device operations, and in particular, to techniques for instructing electronic device operations.
  • a user may interact with a mobile computing device (e.g., intelligent terminal, smart phone, tablet computer, or smart home terminal, for example) to cause an action.
  • a user may interact with a smartphone that is ringing with an incoming call by turning the smartphone from face-up to face-down. The user may expect this action to silence the ringing or send the incoming call to voicemail.
  • Each user of a mobile computing device may have different usage habits and expectations when interacting with the mobile computing device. For example, a user may prefer that turning over a ringing phone will result in silencing the ringing, but not rejecting the call. Enabling a user of a mobile computing device to select an action and linking that action to a device operation may improve efficiency. It may also encourage a user to engage further with the mobile computing device.
  • a method for generating a user-defined operation in a mobile electronic device including: receiving, by the mobile electronic device, a user selection of a device action to be performed by the mobile electronic device; receiving, by the mobile electronic device, a user specification of one or more user-performed operations to be performed in sequence; associating, by the mobile electronic device, the device action with the one or more user-performed operations; and initiating, by the mobile electronic device, a learning mode and learning the one or more user-performed operations.
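For illustration, the following Python sketch shows one way the four claimed steps (receive the action selection, receive the operation specification, associate them, and start a learning mode) could be organized on a device. The class and method names (SetupSession, UserDefinedOperation) are hypothetical and the learning mode is reduced to a flag; this is not the disclosed implementation.

```python
# Illustrative sketch only: class and method names are hypothetical, not from the disclosure.
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserDefinedOperation:
    device_action: str                                          # e.g., "send_sos_text"
    user_operations: List[str] = field(default_factory=list)    # e.g., ["tap", "tap", "tap", "tap"]
    learned: bool = False

class SetupSession:
    """Walks through the four claimed steps in order."""
    def __init__(self):
        self.operation = None

    def receive_action_selection(self, device_action: str) -> None:
        # Step 1: receive the user selection of a device action.
        self.operation = UserDefinedOperation(device_action)

    def receive_operation_specification(self, user_operations: List[str]) -> None:
        # Step 2: receive the user specification of operations to be performed in sequence.
        self.operation.user_operations = list(user_operations)

    def associate_and_learn(self) -> UserDefinedOperation:
        # Steps 3 and 4: associate the action with the operations and enter a
        # learning mode (modeled here as a simple flag).
        self.operation.learned = True
        return self.operation

session = SetupSession()
session.receive_action_selection("send_sos_text")
session.receive_operation_specification(["tap", "tap", "tap", "tap"])
print(session.associate_and_learn())
```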
  • the device action including a sequence of at least one device action.
  • the method further includes receiving an input action at the mobile electronic device; determining the input action matches the one or more user-performed operations; and initiating the device action responsive to the input action matching the one or more user-performed operations.
  • the initiating the learning mode and learning the one or more user-performed actions includes providing the one or more user-performed operations to a pre-trained classifier of an initial sensor perception model, generating a first identified action at the initial sensor perception model based on the one or more user-performed operations; and matching the first identified action with a first stored action, the first stored action associated with the one or more user-performed operations.
  • the initial sensor perception model is based on device-specific capabilities of a set of environmental sensors of the mobile electronic device.
  • the method further includes generating a personalized sensor perception model based on the initial sensor perception model, the one or more user-performed operations, and the first identified action.
  • the initiating the learning mode and learning the one or more user-performed operations further includes displaying a training request for the user to repeat the one or more user-performed operations, receiving one or more training inputs from the user; and updating the personalized sensor perception model based on the one or more training inputs, the one or more user-performed operations, the first identified action, and the initial sensor perception model.
  • the initiating the learning mode and learning the one or more user-performed operations further includes providing the one or more user-performed operations to an adaptive classifier of the personalized sensor perception model; generating a first identified input action sequence at the personalized sensor perception model; and matching the first identified input action sequence with a first stored action sequence, the first stored action sequence associated with the one or more user-performed operations.
  • the method further includes updating the personalized perception model based on the first identified input action sequence.
  • a user-performed operation of the one or more user-performed operations includes an action characteristic, the action characteristic including at least one of a repetition count, a repetition frequency, a motion acceleration, or a final orientation.
  • the device action includes at least one of sending an emergency request data communication, sending a notification communication to a set of user contacts, initiating an emergency services phone call, playing a prerecorded emergency request message, or generating and playing an audio location message based on a current location of the mobile electronic device.
  • the method further includes the mobile electronic device generating a transferrable operation model, the transferrable operation model including the association of the device action with the one or more user-performed operations; the mobile electronic device transmitting the transferrable operation model; a second mobile electronic device receiving the transferrable operation model; the second mobile electronic device determining a sensor difference between a first environmental sensor of the mobile electronic device and a second environmental sensor of the second mobile electronic device; the second mobile electronic device identifying a transferred input action based on the second environmental sensor of the second mobile electronic device and the sensor difference; and the second mobile electronic device associating the transferred input action with the device action.
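As a rough illustration of the transfer described above, the sketch below assumes the transferrable operation model carries a learned tap threshold and a source-sensor sensitivity, and that the second device compensates for the sensor difference by rescaling the threshold. All field names and the rescaling rule are assumptions for illustration only.

```python
# Illustrative sketch only: the model fields and the sensitivity-based
# compensation are assumptions, not the disclosed method.
from dataclasses import dataclass

@dataclass
class TransferrableOperationModel:
    device_action: str          # the associated device action
    tap_threshold_g: float      # acceleration spike treated as a tap on the source device
    sensor_sensitivity: float   # source accelerometer sensitivity (illustrative units)

def adapt_to_target(model: TransferrableOperationModel,
                    target_sensitivity: float) -> TransferrableOperationModel:
    # The second device compares its own sensor with the source sensor and
    # rescales the learned threshold so the transferred input action still triggers.
    scale = model.sensor_sensitivity / target_sensitivity
    return TransferrableOperationModel(model.device_action,
                                       model.tap_threshold_g * scale,
                                       target_sensitivity)

source_model = TransferrableOperationModel("send_sos_text", tap_threshold_g=1.8,
                                           sensor_sensitivity=1.0)
print(adapt_to_target(source_model, target_sensitivity=0.8))
```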
  • the method further includes displaying on a display of the mobile electronic device and responsive to receiving the one or more user-performed operations, a first action image associated with the one or more user-performed operations; and receiving a first action confirmation input from the user; wherein the associating the device action with the one or more user-performed operations is responsive to the first action confirmation input.
  • the method further includes displaying a sequence of operations on the display of the mobile electronic device; receiving at least a first operation selection and a second operation selection on the mobile electronic device; and generating the device action based on the first operation selection and the second operation selection, the device action including the sequence of operations.
  • the one or more user-performed operations are performed by the user.
  • the one or more user-performed operations are performed by the user, each operation of the one or more user-performed operations being performed in a pre-defined order.
  • the one or more user-performed operations are performed by the user, each operation of the one or more user-performed operations being performed in a pre-defined order and for a pre-defined time duration.
  • the one or more user-performed operations including mobile electronic device physical motions.
  • the one or more user-performed operations including mobile electronic device physical motions performed by the user.
  • the one or more user-performed operations including physical motions performed on an input device of the mobile electronic device by the user.
  • the one or more user-performed operations including physical motions performed on a touchscreen device of the mobile electronic device by the user.
  • the one or more user-performed operations including audio inputs inputted to the mobile electronic device.
  • the one or more user-performed operations including one or more audio inputs inputted to the mobile electronic device, the one or more audio inputs including one or more audio inputs generated by the user, one or more audio command inputs, or one or more non-verbal audio inputs.
  • the one or more user-performed operations including one or more video inputs inputted to the mobile electronic device.
  • the one or more user-performed operations including one or more video inputs inputted to the mobile electronic device, the one or more video inputs including a human face, or a human hand or hands.
  • the one or more user-performed operations including one or more images inputted to the mobile electronic device.
  • the one or more user-performed operations including one or more images inputted to the mobile electronic device, the one or more images including a human face or a human hand or hands.
  • the one or more user-performed operations including user movement of the mobile electronic device performed according to a movement sequence.
  • the one or more user-performed operations including user movement of the mobile electronic device performed according to a movement sequence and according to a movement timing sequence.
  • the one or more user-performed operations including user contacts with the mobile electronic device performed according to a contact sequence.
  • the one or more user-performed operations including user contacts with the mobile electronic device according to a contact sequence and according to a contact timing sequence.
  • the learning mode storing user-performed operations data obtained during the learning mode, the user-performed operations data being used for training the one or more user-performed operations associated with the device action.
  • the learning mode storing user-performed operations data obtained during the learning mode, the user-performed operations data being used for subsequently recognizing the one or more user-performed operations associated with the device action.
  • the user-performed operations data including device velocity information of movement of the mobile electronic device.
  • the user-performed operations data including device velocity direction information of movement of the mobile electronic device.
  • the user-performed operations data including device motion acceleration information of movement of the mobile electronic device.
  • the user-performed operations data including device acceleration direction information of movement of the mobile electronic device.
  • the initiating the learning mode for learning the one or more user-performed operations includes prompting the user to perform at least one iteration of the one or more user-performed operations, monitoring the at least one iteration of the one or more user-performed operations using at least one device sensor of the mobile electronic device, and learning the one or more user-performed operations as performed by the user.
  • the method further includes a preliminary step of receiving, by the mobile electronic device, an initiation input from the user, the initiation input commencing the generating of the user-defined operation.
  • a mobile electronic device including: a memory storing instructions; at least one environmental sensor associated with the mobile electronic device; and at least one processor in communication with the memory and the at least one environmental sensor, the at least one processor configured, upon execution of the instructions, to perform the following steps: receive a user selection of a device action to be performed by the mobile electronic device; receive a user specification of one or more user-performed operations to be performed in sequence; associate the device action with the one or more user-performed operations; and initiate a learning mode and learn the one or more user-performed operations.
  • the device action including a sequence of at least one device action.
  • the mobile electronic device further including the at least one processor executing the instructions to perform the following steps: receiving an input action at the mobile electronic device; determining the input action matches the one or more user-performed operations; and initiating the device action responsive to the input action matching the one or more user-performed operations.
  • the initiating the learning mode and learning the one or more user-performed actions include providing the one or more user-performed operations to a pre-trained classifier of an initial sensor perception model; generating a first identified action at the initial sensor perception model based on the one or more user-performed operations; and matching the first identified action with a first stored action, the first stored action associated with the one or more user-performed operations.
  • the initial sensor perception model is based on device-specific capabilities of a set of environmental sensors of the mobile electronic device.
  • the mobile electronic device further including the at least one processor executing the instructions to generate a personalized sensor perception model based on the initial sensor perception model, the one or more user-performed operations, and the first identified action.
  • the initiating the learning mode and learning the one or more user-performed operations further including: displaying a training request for the user to repeat the one or more user-performed operations; receiving one or more training inputs from the user; and updating the personalized sensor perception model based on the one or more training inputs, the one or more user-performed operations, the first identified action, and the initial sensor perception model.
  • the initiating the learning mode and learning the one or more user-performed operations further including: providing the one or more user-performed operations to an adaptive classifier of the personalized sensor perception model; generating a first identified input action sequence at the personalized sensor perception model; and matching the first identified input action sequence with a first stored action sequence, the first stored action sequence associated with the first sequence of operations.
  • the at least one processor executing the instructions to update the personalized perception model based on the first identified input action sequence.
  • a user-performed operation of the one or more user-performed operations includes an action characteristic, the action characteristic including at least one of a repetition count, a repetition frequency, a motion acceleration, or a final orientation.
  • the device action includes at least one of sending an emergency request data communication, sending a notification communication to a set of user contacts, initiating an emergency services phone call, playing a prerecorded emergency request message, or generating and playing an audio location message based on a current location of the mobile electronic device.
  • the mobile electronic device further including the at least one processor executing the instructions to: generate a transferrable operation model, the transferrable operation model including the association of the device action with the one or more user-performed operations; transmit the transferrable operation model; a second mobile electronic device receiving the transferrable operation model; determine a sensor difference between the at least one environmental sensor of the mobile electronic device and a second environmental sensor of the second mobile electronic device; identify a transferred input action based on the second environmental sensor of the second mobile electronic device and the sensor difference; and associate the transferred input action with the device action.
  • the mobile electronic device further including the at least one processor executing the instructions to: display on a display of the mobile electronic device and responsive to receiving the one or more user- performed operations, a first action image associated with the one or more user- performed operations; and receive a first action confirmation input from the user; wherein the associating the device action with the one or more user-performed operations is responsive to the first action confirmation input.
  • the mobile electronic device further including the at least one processor executing the instructions to: display a sequence of operations on the display of the mobile electronic device; receive at least a first operation selection and a second operation selection on the mobile electronic device; and generate the device action based on the first operation selection and the second operation selection, the device action including the sequence of operations.
  • the one or more user-performed operations are performed by the user.
  • the one or more user-performed operations are performed by the user, each operation of the one or more user-performed operations being performed in a pre-defined order.
  • the one or more user-performed operations are performed by the user, each operation of the one or more user-performed operations being performed in a pre-defined order and for a pre-defined time duration.
  • the one or more user-performed operations including mobile electronic device physical motions.
  • the one or more user-performed operations including mobile electronic device physical motions performed by the user.
  • the one or more user-performed operations including physical motions performed on an input device of the mobile electronic device by the user.
  • the one or more user-performed operations including physical motions performed on a touchscreen device of the mobile electronic device by the user.
  • the one or more user-performed operations including audio inputs inputted to the mobile electronic device.
  • the one or more user-performed operations including one or more audio inputs inputted to the mobile electronic device, the one or more audio inputs including one or more audio inputs generated by the user, one or more audio command inputs, or one or more non-verbal audio inputs.
  • the one or more user-performed operations including one or more video inputs inputted to the mobile electronic device.
  • the one or more user-performed operations including one or more video inputs inputted to the mobile electronic device, the one or more video inputs including a human face, or a human hand or hands.
  • the one or more user-performed operations including one or more images inputted to the mobile electronic device.
  • the one or more user-performed operations including one or more images inputted to the mobile electronic device, the one or more images including a human face or a human hand or hands.
  • the one or more user-performed operations including user movement of the mobile electronic device performed according to a movement sequence.
  • the one or more user-performed operations including user movement of the mobile electronic device performed according to a movement sequence and according to a movement timing sequence.
  • the one or more user-performed operations including user contacts with the mobile electronic device performed according to a contact sequence.
  • the one or more user-performed operations including user contacts with the mobile electronic device according to a contact sequence and according to a contact timing sequence.
  • the learning mode storing user-performed operations data obtained during the learning mode, the user-performed operations data being used for training the one or more user-performed operations associated with the device action.
  • the learning mode storing user-performed operations data obtained during the learning mode, the user-performed operations data being used for subsequently recognizing the one or more user-performed operations associated with the device action.
  • the user-performed operations data including device velocity information of movement of the mobile electronic device.
  • the user-performed operations data including device velocity direction information of movement of the mobile electronic device.
  • the user-performed operations data including device motion acceleration information of movement of the mobile electronic device.
  • the user-performed operations data including device acceleration direction information of movement of the mobile electronic device.
  • the initiating the learning mode for learning the one or more user-performed operations including: prompting the user to perform at least one iteration of the one or more user-performed operations; monitoring the at least one iteration of the one or more user-performed operations using at least one device sensor of the mobile electronic device; and learning the one or more user-performed operations as performed by the user.
  • the mobile electronic device further including the at least one processor executing the instructions to receive an initiation input from the user, the initiation input causing a prompting of the user selection of the device action to be performed by the mobile electronic device and the user specification of the one or more user-performed operations to be performed in sequence.
  • a non-transitory computer-readable medium storing computer instructions that configure at least one processor, upon execution of the instructions, to perform the following steps: receive a user selection of a device action to be performed by a mobile electronic device; receive a user specification of one or more user-performed operations to be performed in sequence; associate the device action with the one or more user-performed operations; and initiate a learning mode and learn the one or more user-performed operations.
  • a system for user-defined mobile electronic device operation, including: an input reception means configured to receive an initiation input from a user, the initiation input commencing a generating of a user-defined operation; a device action selection means configured to receive a user selection of a device action to be performed by the mobile electronic device; a user specification means configured to receive a user specification of one or more user-performed operations to be performed in sequence; an association means configured to associate the device action with the one or more user-performed operations; and a learning means configured to initiate a learning mode and learn the one or more user-performed operations.
  • FIG. 1 is a diagram illustrating user-defined device operations, according to an example embodiment.
  • FIG. 2 is a diagram illustrating user-defined device operations, according to another example embodiment.
  • FIG. 3 is a flowchart of a method for generating a user-defined operation in a mobile electronics device, according to an example embodiment.
  • FIG. 4 is a sequence diagram illustrating a first user-defined operation, according to an example embodiment.
  • FIG. 5 is a sequence diagram illustrating a second user-defined operation, according to an example embodiment.
  • FIG. 6 is a flowchart illustrating an adaptive user-defined operation method, according to an example embodiment.
  • FIG. 7 is a flowchart illustrating a user-initiated operation method, according to an example embodiment.
  • FIG. 8 is a flowchart illustrating an input training method, according to an example embodiment.
  • FIG. 9 is a flowchart illustrating a model transfer method, according to an example embodiment.
  • FIG. 10 illustrates the structure of a neural network, according to some example embodiments.
  • FIG. 11 is a diagram illustrating a representative software architecture, according to some example embodiments.
  • FIG. 12 is a diagram of a computing device that implements algorithms and performs methods described herein, according to some example embodiments.
  • the functions or algorithms described herein may be implemented in software in one embodiment.
  • the software may consist of computer-executable instructions stored on computer-readable media or a computer-readable storage device such as one or more non-transitory memories or other types of hardware-based storage devices, either local or networked.
  • modules which may be software, hardware, firmware, or any combination thereof. Multiple functions may be performed in one or more modules as desired, and the embodiments described are merely examples.
  • the software may be executed on a digital signal processor, ASIC, microprocessor, or another type of processor operating on a computer system, such as a personal computer, server, or another computer system, turning such computer system into a specifically programmed machine.
  • FIG. 1 is a diagram illustrating user-defined device operations 100, according to an example embodiment.
  • a user of a mobile electronic device 105 may customize the operations of that device 105 to respond to a user-defined input.
  • the device may be configured to enable a user to identify one or more user-defined operations that may be used to define a shortcut to initiate a device action or sequence of device actions.
  • the user-defined customization of user inputs and device operations may improve the user’s efficiency in using the device, such as by improving accuracy and reliability of identifying a user’s device inputs and resulting device operations.
  • the one or more user-defined operations may include a single input operation or a sequence (e.g., a permutation or an ordered combination) of input operations.
  • the input operation may include a repeated tapping 110 on the device 105.
  • the repeated tapping 110 or a similar simple action may improve the user’s ability to remember the action or to invoke the action.
  • a repeated tapping 110 may enable the user to request assistance without having to unlock a smartphone and dial an emergency contact.
  • the device 105 may provide improved detection of a user’s intended action.
  • the one or more user-defined operations may include a single operation or multiple sequential operations.
  • a repeated tapping 110 may cause the device 105 to call emergency services 120, play a prerecorded message 130 (e.g., requesting assistance or identifying the user), report a location 140 (e.g., sending global positioning system (GPS) coordinates or generating and playing a text-to-speech message indicating a street address), and send an SOS text message 150 to one or more user-defined emergency contacts.
  • these and other operations may improve user safety and improve emergency response time.
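The following sketch illustrates how such a user-defined emergency sequence might be dispatched once the trigger input is recognized. The function names and print statements are placeholders for real telephony, audio, location, and messaging calls; nothing here is taken from the disclosure itself.

```python
# Hypothetical sketch of running the sequence of operations triggered by the
# learned input (FIG. 1); each step is a stub standing in for a real device call.
def call_emergency_services():
    print("dialing emergency services (operation 120 in FIG. 1)")

def play_prerecorded_message():
    print("playing prerecorded request-for-help message (130)")

def report_location():
    print("reporting GPS coordinates or street address (140)")

def send_sos_text():
    print("texting SOS to emergency contacts (150)")

EMERGENCY_SEQUENCE = [call_emergency_services, play_prerecorded_message,
                      report_location, send_sos_text]

def on_trigger_detected():
    # Runs when the device recognizes the user-defined input (e.g., repeated tapping 110).
    for operation in EMERGENCY_SEQUENCE:
        operation()

on_trigger_detected()
```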
  • FIG. 2 is a diagram illustrating user-defined device operations 200, according to an example embodiment.
  • the user-defined device operations 200 show an embodiment in which the user initiates a device action or actions 210 without directly interacting with the device.
  • the mobile electronic device 205 may include a smartphone within a vehicle, an interactive terminal within the vehicle, the computing system of the vehicle itself, or another device traveling with the vehicle.
  • the mobile electronic device 205 may determine when a user is traveling toward a home 225, such as by determining the device has disconnected from a work Wi-Fi network, by determining the vehicle has accessed an electronic toll collection system associated with a commuting route, or by determining the device is within a geofence threshold of the home 225.
  • the mobile electronic device 205 may communicate through a wireless network 215 to one or more devices within the home 225.
  • the communication may cause one or more operations at the home 225, such as activation of a smart garage door 220, activation of smart lock 230, playing a predetermined announcement message on a smart speaker 240, activation of a smart light 250, adjusting a smart thermostat 260 to a predetermined temperature, and activating or adjusting other smart devices 270.
  • the communication may include separate commands from the mobile electronic device 205 to each of the smart devices within the home 225.
  • the communication may include a single command from the mobile electronic device 205 (e.g., “run/execute home arrival operations”) to the wireless network 215 or to a smart hub within the home 225, which may interpret the message and initiate various operations.
  • Various smart devices may provide information from the home 225 through the wireless network 215 back to the mobile electronic device 205.
  • the information provided back to the mobile electronic device 205 may include an acknowledgement of the instruction, a confirmation of the operations to be initiated, an indication that one or more operations could not be instructed or completed, a confirmation that the operations have been completed, a confirmation that the operations will be completed by the time the user arrives at the home 225, or other smart device information.
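A minimal sketch of the single-command variant: the mobile device sends one hypothetical "run_home_arrival_operations" message and a hub fans it out to individual smart devices, returning acknowledgements. The command string, device names, and actions are illustrative assumptions.

```python
# Hypothetical sketch of the single-command variant (FIG. 2): the mobile device
# sends one "home arrival" command and a hub fans it out to individual smart devices.
HOME_ARRIVAL_OPERATIONS = {
    "garage_door": "open",          # smart garage door 220
    "smart_lock": "unlock",         # smart lock 230
    "smart_speaker": "announce",    # smart speaker 240
    "smart_light": "on",            # smart light 250
    "thermostat": "set 21C",        # smart thermostat 260
}

def hub_execute(command: str) -> dict:
    # The hub interprets the single command and returns per-device acknowledgements.
    if command != "run_home_arrival_operations":
        return {}
    return {device: f"ack: {action}" for device, action in HOME_ARRIVAL_OPERATIONS.items()}

# Mobile device side: one message instead of one command per device.
acks = hub_execute("run_home_arrival_operations")
print(acks)
```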
  • FIG. 3 is a flowchart of a method 300 for generating a user-defined operation in a mobile electronic device, according to an example embodiment.
  • the mobile electronic device receives an initiation input from the user.
  • the initiation input commences the generating of a user-defined operation in the mobile electronic device.
  • the user then can generate a user-defined operation that performs one or more device actions.
  • the user can subsequently associate one or more user-performed operations with a device action to be performed on (or to) the mobile electronic device.
  • the mobile electronic device receives a user selection of a device action to be performed by the mobile electronic device.
  • the device action comprises the action that the user wants the mobile electronic device to perform.
  • the device action can include a sequence of actions. For simplicity, this disclosure will refer to a device action, although it should be understood that a device action or sequence of device actions are within the scope of this disclosure.
  • the device action can comprise one or more actions (i.e., a sequence of device actions) that the mobile electronic device can perform, including sending a message, sending a sequence of messages, performing one or more actions, et cetera.
  • the device action can comprise the mobile electronic device performing one or more of: sending an emergency request data communication, sending a notification communication to a set of user contacts (e.g., sending multiple e-mail messages), initiating an emergency services phone call, playing a pre-recorded emergency request message, or generating and playing an audio location message based on a current location of the mobile electronic device.
  • Other device actions are contemplated and are within the scope of the description and claims.
  • in step 308, the mobile electronic device receives a user specification of one or more user-performed operations the user intends to associate with the device action.
  • the one or more user-performed operations are performed by the user.
  • each operation of the one or more user-performed operations is performed in a pre-defined order.
  • each operation of the one or more user-performed operations is performed in a pre-defined order and for a pre-defined time duration.
  • a user-performed operation of the one or more user-performed operations includes an action characteristic, the action characteristic including at least one of a repetition count, a repetition frequency, a motion acceleration, or a final orientation.
  • the one or more user-performed operations can include user movement of the mobile electronic device performed according to a movement sequence, for example.
  • the one or more user-performed operations can include user movement of the mobile electronic device performed according to a movement sequence and according to a movement timing sequence.
  • the one or more user-performed operations can include user contacts with the mobile electronic device performed according to a contact sequence.
  • the one or more user-performed operations can include user contacts with the mobile electronic device according to a contact sequence and according to a contact timing sequence.
  • the user-performed operations data includes device velocity information of movement of the mobile electronic device.
  • the user-performed operations data includes device velocity and direction information of movement of the mobile electronic device.
  • the user-performed operations data includes device motion acceleration information of movement of the mobile electronic device.
  • the user-performed operations data includes device acceleration and direction information of movement of the mobile electronic device.
  • the one or more user-performed operations can include mobile electronic device physical motions.
  • the one or more user-performed operations include mobile electronic device physical motions performed by the user. In some embodiments, this includes user contact with the exterior of the mobile electronic device.
  • the one or more user-performed operations include physical motions performed on an input device of the mobile electronic device by the user.
  • the one or more user-performed operations include physical motions performed on a touchscreen device of the mobile electronic device by the user.
  • the one or more user-performed operations can include a sequence of tapping operations the user will perform on the mobile electronic device.
  • the user can configure the sequence of tapping actions to initiate the device action.
  • the user can specify tapping four times on the mobile electronic device to perform a particular device action, including tapping anywhere on the mobile electronic device, tapping on a particular input button or soft key, or tapping on a touch-sensitive display screen, for example.
  • the user specification can also specify a timing of the four taps, wherein the timing can be unique, such as the user tapping four times in the approximate timing of a series of notes of a song or according to a rhythm.
  • the user can specify a sequence of other similar or dissimilar movements or actions, including specifying movements of the mobile electronic device or an orientation of the mobile electronic device, for example.
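As an illustration of rhythm-based tap matching, the sketch below compares the inter-tap intervals of an input against a stored sequence, within a tolerance. The tolerance value and the interval-based comparison are assumptions, not parameters from the disclosure.

```python
# Hypothetical sketch of matching a four-tap input against a stored rhythm;
# the tolerance value is illustrative, not taken from the disclosure.
def intervals(timestamps):
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def matches_rhythm(tap_times, stored_times, tolerance=0.25):
    # Compare inter-tap intervals so the absolute start time does not matter.
    if len(tap_times) != len(stored_times):
        return False
    return all(abs(x - y) <= tolerance
               for x, y in zip(intervals(tap_times), intervals(stored_times)))

stored = [0.0, 0.4, 0.8, 1.6]                                # rhythm learned during setup (seconds)
print(matches_rhythm([10.0, 10.45, 10.85, 11.6], stored))    # True: close enough to the stored rhythm
print(matches_rhythm([10.0, 10.2, 10.4, 10.6], stored))      # False: wrong rhythm
```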
  • the one or more user-performed operations can include user inputs other than movement-based inputs, including audio inputs, image inputs, or video inputs.
  • the one or more user-performed operations can include audio inputs inputted to the mobile electronic device.
  • the one or more audio inputs can include one or more audio inputs generated by the user, one or more audio command inputs, or one or more non-verbal audio inputs, for example. Other audio inputs are contemplated and are within the scope of the description and claims.
  • the one or more user-performed operations can include images inputted to the mobile electronic device.
  • the one or more images can include a human face or a human hand or hands, for example. Other images are contemplated and are within the scope of the description and claims.
  • the one or more user-performed operations can include video inputs inputted to the mobile electronic device.
  • the one or more video inputs can include a human face, or a human hand or hands, for example.
  • Other videos are contemplated and are within the scope of the description and claims.
  • the one or more user-performed operations can further comprise a mix of such user-performed operations.
  • the sequence could comprise three quick taps, a whistle generated by the user for at least a predefined time period, followed by the user waving the mobile electronic device from side to side.
  • Other such mixes of sequences are contemplated and are within the scope of the description and claims.
  • the mobile electronic device associates the device action with the one or more user-performed operations.
  • the associating enables the user to define operations of the mobile electronic device that will be performed when the mobile electronic device recognizes the one or more user-performed operations as performed by the user.
  • the associating can be performed within the mobile electronic device and stored within the mobile electronic device in some embodiments.
  • in step 315, the user initiates a learning mode as a final step in the generating of the user-defined operation.
  • the learning mode enables the mobile electronic device to recognize the one or more user-performed operations.
  • the learning mode enables the mobile electronic device to recognize the one or more user-performed operations even though the one or more user-performed operations will not be identically performed every time by the user.
  • a tapping sequence can be learned during the learning mode, wherein the user may not tap the proper sequence with a uniform impact force or uniform timing every time, and the learning phase enables the mobile electronic device to properly interpret the user’s tapping actions, even where some variability exists in the sequence.
  • the learning mode allows for some variation in how the user performs the one or more user-performed operations, including over time.
  • the learning mode stores user-performed operations data obtained during the learning mode.
  • the stored user-performed operations data is used for training the one or more user-performed operations associated with the device action.
  • the user-performed operations data is used for subsequently recognizing the one or more user-performed operations associated with the device action.
  • the initiating the learning mode and learning the one or more user-performed operations further comprises prompting the user to perform at least one iteration of the one or more user-performed operations, monitoring the at least one iteration of the one or more user-performed operations using at least one device sensor of the mobile electronic device, and learning the one or more user-performed operations as performed by the user.
  • the initiating the learning mode and learning the one or more user-performed operations further comprises displaying a training request for the user to repeat the one or more user-performed operations, receiving one or more training inputs from the user, and updating the personalized sensor perception model based on the one or more training inputs, the one or more user-performed operations, the first identified action, and the initial sensor perception model.
  • the initiating the learning mode and learning the one or more user-performed actions comprises providing the one or more user-performed operations to a pre-trained classifier of an initial sensor perception model, generating a first identified action at the initial sensor perception model based on the one or more user-performed operations, and matching the first identified action with a first stored action, the first stored action associated with the one or more user-performed operations.
  • the term “classifier” indicates a classification algorithm used by a trained machine learning (ML) model.
  • a classification algorithm can be used when the outputs are restricted to a limited set of values (e.g., text categorization, image classification, etc.).
  • a “classifier” can also be referred to as an “identifier.”
  • the initiating the learning mode and learning the one or more user-performed operations further comprises providing the one or more user-performed operations to an adaptive classifier of the personalized sensor perception model, generating a first identified input action sequence at the personalized sensor perception model, and matching the first identified input action sequence with a first stored action sequence, the first stored action sequence associated with the first sequence of operations.
  • the initial sensor perception model is based on device-specific capabilities of a set of environmental sensors of the first mobile electronic device.
  • the method further comprises generating a personalized sensor perception model based on the initial sensor perception model, the one or more user-performed operations, and the first identified action.
  • the method further comprises updating the personalized perception model based on the first identified input action sequence.
  • the user can control actions of the mobile electronic device according to the user-defined actions.
  • the user can perform the one or more user-defined actions.
  • the mobile electronic device will receive a user input action, determine that the input action matches the one or more user-performed operations, and initiate the device action responsive to the input action matching the one or more user-performed operations.
  • FIG. 4 is a sequence diagram 400 illustrating a first user-defined operation method, according to an example embodiment.
  • the sequence diagram 400 may be used by a mobile electronic device to correctly and efficiently identify the intention behind the user’s command.
  • the sequence diagram 400 may be used to receive and interpret a set of basic actions 410 from a user at a mobile electronic device, and allow the user to designate the basic actions 410 as a trigger to initiate (e.g., run or execute a shortcut to) a sequence of operations 420.
  • basic actions 410 may include a series of continuous taps on a device screen, which may trigger the sequence of operations 420, with the sequence of operations 420 including sending a user location and request for help to a previously designated user emergency contact.
  • the sequence diagram 400 may be initiated by a user interacting with a mobile electronic device configured to detect the set of basic actions 410.
  • the sequence diagram 400 may be initiated by a user tapping continuously on the mobile electronic device, and the device may detect the set of basic actions 410 and prompt the user to associate the set of basic actions 410 with a sequence of operations 420.
  • the tapping may be performed by the user on buttons or input devices of the mobile electronic device.
  • the tapping may be performed on a display screen or touch-sensitive display screen of the mobile electronic device.
  • the tapping may be performed on the exterior of the mobile electronic device.
  • the sequence diagram 400 may also be initiated by a user interacting with an application on the mobile electronic device, or by initiating a mapping setting interface (e.g., shortcut settings) within the operating system of the mobile electronic device.
  • the detection of the set of basic actions 410 may be determined by user input, such as the user indicating when to start recording actions and when to stop recording actions.
  • the detection of the set of basic actions 410 may be determined by device detection, such as by a device identifying a group of movements within a predetermined time interval, or such as by a device identifying an end of a group of movements, with the end movement being detected before the expiration of a defined maximum period of time elapsed from a first detected action or from a last detected action.
  • the detection of the set of basic actions 410 may include training the mobile electronic device to recognize the set of basic actions 410. Training may include repeating the set of basic actions 410 two or more times, storing variations between each of the repetitions, and determining a set of tolerance thresholds associated with the set of basic actions 410.
  • a user may repeat a triple tapping action, the mobile electronic device may determine that the repetitions were all completed between two seconds and four seconds, and the device may set a threshold for the completing of the set of basic actions 410 between two seconds and four seconds.
  • the user may initiate a training of the set of basic actions 410, or the device may initiate the training of the set of basic actions 410.
  • the device may initiate further training based on a detected variability.
  • the device may detect a training action completed in two seconds and a repeated action completed after six seconds, and may prompt the user to repeat the training input with greater consistency.
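The training behavior described above (repeat the action, derive a tolerance window, and re-prompt when repetitions are too inconsistent) could be sketched as follows; the three-second spread limit is purely illustrative.

```python
# Hypothetical sketch of the training step described above: the device records
# how long each repetition of the action took, derives a tolerance window, and
# asks for more consistent input when the variability is too large.
def derive_tolerance(durations, max_spread=3.0):
    lo, hi = min(durations), max(durations)
    if hi - lo > max_spread:
        raise ValueError("repetitions too inconsistent; please repeat the training input")
    return lo, hi   # completing the action within [lo, hi] seconds will be accepted

print(derive_tolerance([2.1, 3.4, 2.8]))   # -> (2.1, 3.4)
try:
    derive_tolerance([2.0, 6.0])           # 4 s spread exceeds the illustrative limit
except ValueError as err:
    print(err)
```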
  • the association of the set of basic actions 410 with the sequence of operations 420 may be stored on the device, such as in a memory of the mobile electronic device, in a memory associated with a sensor on the mobile electronic device, or a combination thereof.
  • where the set of basic actions 410 includes a tapping gesture stored in software, firmware, or hardware associated with an accelerometer sensor, the accelerometer sensor sends accelerometer data associated with a gesture indication to a central processing unit (CPU) of the mobile electronic device, and the CPU determines that the accelerometer data sufficiently matches the gesture indication and triggers the sequence of operations 420.
  • the basic actions 410 may include actions that may be detected based on sensors within the mobile electronic device (i.e., recognized actions that can be translated into the sequence of operations 420). These actions may include actions such as the user turning over the device, rotating the device (e.g., 90°, 180°, 270°, or 360°), shaking the device, tapping the device a predetermined number of times (e.g., two taps, three taps, or continual tapping), moving the device to a different location, the user speaking a command to the device, or other user input actions.
  • the basic actions 410 may be detected based on a data set generated by environmental sensors within the device.
  • environmental sensors include sensors associated with the device that produce an output sensor data set in response to sensing an event or change in the environment.
  • the environmental sensors may be within the device, attached to the device, external to and in communication with the device, or otherwise associated with the device.
  • the environmental sensors may include an audio sensor, a gyroscopic sensor to measure a rotational rate, an accelerometer sensor to measure an acceleration rate, a magnetometer to measure a magnetic field, a proximity sensor to detect a presence or distance of an object, a location sensor to determine the location of the device, or other environmental sensor.
  • the proximity sensor may include a sensor configured to detect the presence of one or more nearby objects without requiring physical contact.
  • the proximity sensor may include an inductive proximity sensor, a capacitive proximity sensor, a photoelectric proximity sensor, a wireless radio proximity sensor (e.g., a received signal strength indication (RSSI) sensor), or other proximity sensor, for example.
  • the location sensor may include a sensor configured to detect a relative or absolute position of the device.
  • the location sensor may include a near-field beacon sensor, a Wi-Fi beacon sensor, a cellular triangulation sensor, a Global Positioning System (GPS) sensor, or other location sensor.
  • the sensor data set may include an action characteristic, such as a repetition count, a repetition frequency, a motion acceleration, a final orientation, or other action characteristics.
  • the input sensor data set may be mapped to another sensor data set.
  • for example, where the input sensor data set includes an accelerometer sensor data set associated with a tapping gesture, the accelerometer-based input sensor data set may be associated with an audio tapping data set with characteristics (e.g., tapping count or tapping frequency) that correspond to the tapping gesture.
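As a sketch of mapping one sensor data set to another, the function below reduces tap timestamps, whether taken from an accelerometer or recovered from a microphone, to the shared action characteristics (repetition count and repetition frequency). The feature choice is an assumption for illustration.

```python
# Hypothetical sketch: reduce accelerometer tap events to action characteristics
# (repetition count, repetition frequency) so they can be compared with an
# audio-derived tapping data set.
def tap_characteristics(event_times):
    count = len(event_times)
    span = event_times[-1] - event_times[0] if count > 1 else 0.0
    frequency = (count - 1) / span if span > 0 else 0.0   # taps per second
    return {"count": count, "frequency_hz": round(frequency, 2)}

accel_taps = [0.0, 0.5, 1.0, 1.5]        # tap timestamps from the accelerometer (seconds)
audio_taps = [0.02, 0.51, 1.03, 1.49]    # tap timestamps recovered from the microphone (seconds)

print(tap_characteristics(accel_taps))   # {'count': 4, 'frequency_hz': 2.0}
print(tap_characteristics(audio_taps))   # comparable characteristics from a second sensor
```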
  • these environmental sensors include sensors that are distinguished from user input devices such as a physical keyboard, a virtual keyboard (e.g., keyboard displayed on a touch-sensitive screen), touch inputs to an application running on the device, and other graphical user interface (GUI) inputs to an application or operating system running on the device.
  • GUI graphical user interface
  • a repeated tapping input may cause one or more actions by the application.
  • a repeated tapping input on a device screen wallpaper may be interpreted as one of the basic actions 410, including where execution of the application is suspended or is not being indicated to the user.
  • the sequence of operations 420 may include sending a communication from a network radio circuit of the mobile electronic device to a target electronic device via a telecommunication network.
  • the sequence of operations 420 may include sending an emergency request data communication, sending a notification communication to a set of user contacts, initiating an emergency services phone call, playing a prerecorded emergency request message, generating an audio location message based on a current location of the mobile electronic device and playing that audio location message during an audio call, or other operations.
  • the detection of the user input may be improved using an initial perception model 430.
  • the initial perception model 430 may receive sensor data related to basic actions 410, and may apply a pre-trained classifier to identify one or more basic actions 410.
  • an input sensor data set may be provided to the initial perception model 430, and the initial perception model 430 may identify an action based on the input sensor data set.
  • for example, where accelerometer data indicates a shaking motion, the initial perception model 430 may identify a shaking action with an associated shaking duration.
  • the initial perception model 430 may match an identified action with a stored action, where the stored action is associated with a particular sequence of operations 420.
  • the initial perception model 430 may analyze the identified shaking action and associated shaking duration, determine the identified shaking action is an intentional user input (e.g., not caused by movement in a vehicle), and match the shaking action with a stored shaking action associated with an emergency message operation.
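A threshold-based sketch of the intentionality check described above: a shake is matched to the stored emergency action only if it is both strong and sustained enough to be distinguished from ordinary vehicle motion. The threshold values are illustrative assumptions.

```python
# Hypothetical threshold-based sketch of the intentionality check: a shake is
# only matched to the stored emergency action if it is both vigorous and
# sustained, to filter out ordinary vehicle motion.
def classify_shake(peak_accel_g, duration_s, min_peak=1.5, min_duration=1.0):
    if peak_accel_g >= min_peak and duration_s >= min_duration:
        return "emergency_message"   # matched stored shaking action
    return None                      # treated as incidental motion

print(classify_shake(peak_accel_g=2.3, duration_s=1.8))   # 'emergency_message'
print(classify_shake(peak_accel_g=0.6, duration_s=3.0))   # None: vehicle-like motion
```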
  • the initial sensor perception model 430 may be based on device-specific capabilities or device-specific sensor characteristics of a set of environmental sensors within the mobile electronic device.
  • a device may include an accelerometer sensor but not include a gyroscopic sensor, and the device-specific capabilities may indicate that a rotation action is detected using only accelerometer data.
  • a device may include a gyroscopic sensor and accelerometer sensor positioned close to an edge or corner of the device, and the device-specific sensor characteristics may indicate a lever-arm compensation for interpreting the sensor data relative to the center of mass of the device.
  • the device-specific capabilities or device-specific sensor characteristics may be provided by a manufacturer of the mobile electronic device.
  • the accuracy of the device- specific capabilities or device-specific sensor characteristics may be improved by automatic calibration or manual calibration (e.g., guided user-driven calibration) of the device.
  • the initial sensor perception model 430 may be combined with samples of user actions 440 to improve action detection and classification. This is denoted by the symbol in FIGS. 4 and 5. Samples of user actions 440 may be generated by sampling sensor data from the device, which may be used to improve action detection and improve rejection of non-action sensor data. For example, acceleration and gyroscopic sensor data received throughout the day may be sampled to differentiate typical daily device motion from actions that indicate the user is intending to recreate basic actions 410 and cause the sequence of operations 420.
  • FIG. 5 is a second sequence diagram 500 illustrating a second user-defined operation method, according to an example embodiment.
  • the second sequence diagram 500 may be used to identify multiple basic actions 510.
  • the ordered combination (e.g., permutation or action sequence) may be identified as a permutation of actions 515, which may initiate a sequence of operations 520.
  • the permutation of actions 515 may be based on multiple basic input actions 510.
  • the basic input actions 510 may be well-defined and reliably identified by a pre-trained initial perception model 530.
  • One or more of the permutation of actions 515 may be obtained by the user’s demonstration, such as by a user providing an example knocking action.
  • the permutation of actions 515 may initiate a sequence of operations 520 that may be represented by O = {o1, o2, . . ., om}.
  • One or more of the sequence of operations 520 may be determined by a user input, by a suggested action, or by a suggested sequence of actions.
  • the mapping of the permutation of actions 515 to the sequence of operations 520 may be represented as A → O.
  • the accuracy of identification of multiple basic actions 510 may be improved by a further training of the pre-trained initial perception model 530.
  • the pre-trained initial perception model 530 is modified based on samples of user actions 540 and samples of permutations of actions 515 to generate a personalized perception model 545.
  • the personalized perception model 545 may improve recognition of permutations of actions 515.
  • the personalized perception model 545 may include an adaptive classifier, which may be used to determine the permutation of actions 515.
  • the personalized perception model 545 may be improved further (e.g., further personalized for the user) by sampling from the permutation of actions 515.
  • the user may provide training samples to train the personalized perception model 545, which may improve the ability of the personalized perception model 545 to identify subsequent user inputs selected from among the training samples. For example, the user may provide two-input pairs of training samples {(ai1, aj1), (ai2, aj2), . . . , (ain, ajn)} that correspond with two actions ai and aj.
  • the personalized perception model 545 identifies the two-action input as {ai, aj} → O, as illustrated in the sketch below. Based on the further training, the personalized perception model 545 improves the ability of a device to understand a user's intention and initiate the desired sequence of operations 520.
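For illustration only, the following sketch shows one simple way an ordered permutation of identified basic actions A could be mapped to a user-defined sequence of operations O; the action and operation names are hypothetical.

```python
# Minimal sketch (hypothetical action and operation names): mapping an ordered
# permutation of identified basic actions A to a user-defined sequence of operations O.
ACTION_TO_OPERATIONS = {
    ("shake", "double_tap"): ["send_emergency_request", "notify_contacts"],
    ("knock", "knock", "flip_face_down"): ["mute_ringer", "send_busy_reply"],
}

def lookup_operations(identified_actions):
    """identified_actions: ordered sequence of identified basic actions (the permutation A)."""
    return ACTION_TO_OPERATIONS.get(tuple(identified_actions), [])

print(lookup_operations(["shake", "double_tap"]))  # ['send_emergency_request', 'notify_contacts']
```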
  • Various identification and matching methodologies may be used for the identification of basic actions 510 or a permutation of actions 515, or for the mapping of basic actions 510 or a permutation of actions 515 to operations 520.
  • the identification or mapping of actions may include deterministic analyses (e.g., threshold-based identification), probabilistic analyses (e.g., most likely action identification), neural network artificial intelligence inference analysis (e.g., machine learning action identification), or other identification or mapping.
  • supervised machine learning may be used to analyze labeled accelerometer training data sets (e.g., data labeled as tapping actions or data labeled as non-tapping accelerometer data) and generate an inferential tapping function for use in the initial perception model 530.
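For illustration only, the sketch below trains a tapping-versus-non-tapping classifier on labeled accelerometer windows, in the spirit of the supervised learning described above. The synthetic data, feature choices, and use of a logistic-regression classifier are assumptions introduced here, not the disclosed inferential function.

```python
# Minimal sketch (synthetic data, assumed features): supervised training of a
# tapping classifier from labeled accelerometer-magnitude windows.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def features(window):
    """Summarize a (N,) magnitude window as [peak, variance, mean-crossing count]."""
    return [window.max(), window.var(), np.sum(np.abs(np.diff(np.sign(window - window.mean()))) > 0)]

# Synthetic labels: tapping windows contain short spikes; non-tapping windows are smooth noise.
taps = [np.abs(rng.normal(0, 0.2, 100)) + (rng.random(100) < 0.05) * 8.0 for _ in range(50)]
noise = [np.abs(rng.normal(0, 0.5, 100)) for _ in range(50)]
X = np.array([features(w) for w in taps + noise])
y = np.array([1] * 50 + [0] * 50)  # 1 = tapping, 0 = non-tapping

clf = LogisticRegression().fit(X, y)
print(clf.predict([features(taps[0]), features(noise[0])]))  # typically [1 0]
```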
  • FIG. 6 is a flowchart illustrating an adaptive user-defined operation method 600, according to an example embodiment.
  • the adaptive user- defined operation method 600 shows an example method that may be used to associate a set of operations with a set of inputs defined by a user.
  • an input sensor data set is received at an environmental sensor of a mobile electronic device associated with a user.
  • the input sensor data set may include at least one of an audio sensor data set, a gyroscopic sensor data set, an accelerometer sensor data set, a magnetometer data set, a proximity sensor data set, or a location sensor data set.
  • the input sensor data set may include an action characteristic, such as a repetition count, a repetition frequency, a motion acceleration, a final orientation, or other action characteristics.
  • the input sensor data set may be mapped to another sensor data set.
  • when the input sensor data set includes an accelerometer sensor data set associated with a tapping gesture, the accelerometer-based input sensor data set may be associated with an audio tapping data set with characteristics (e.g., tapping count or tapping frequency) that correspond to the tapping gesture.
  • the input sensor data set is identified as an input action.
  • the identification at step 620 may include application of a pre-trained classifier, which may include providing the input sensor data set to a pre-trained classifier of an initial sensor perception model, generating an identified action at the initial sensor perception model based on the input sensor data set, and matching the identified action with a stored action associated with the sequence of operations.
  • the input data set includes accelerometer data indicative of a repeated tapping motion, the data set is matched with a previously stored tapping input, and the stored tapping input is associated with a sequence of emergency assistance operations.
  • the initial sensor perception model may be based on device-specific capabilities of a set of environmental sensors within the mobile electronic device, and may be provided by a manufacturer of the mobile electronic device.
  • a personalized perception model may be generated.
  • the personalized perception model may be generated based on the initial sensor perception model, the input sensor data set, and the identified action.
  • a sequence of input actions may be identified by the user.
  • the sequence may be identified by receiving a second input action at the mobile electronic device after receiving the initial input sensor data set, and identifying the input sensor data set and the second input action as a sequence of input actions.
  • the receiving and identifying of the second input action may include repeating steps 610 and 620 for the second input action.
  • the identification of additional input actions (e.g., repeating steps 610 and 620) may be repeated for additional input actions.
  • the identification of the sequence of input actions at step 630 may include providing the sequence of input actions to an adaptive classifier of the personalized perception model, generating an identified input action sequence at the personalized perception model based on the sequence of input actions, and matching the identified input action sequence with a stored action sequence.
  • the stored action sequence may be associated with the sequence of operations.
  • the personalized perception model may be updated based on the sequence of input actions and the identified input action sequence.
  • the input sensor data set is associated with a sequence of operations of the mobile electronic device.
  • the sequence of operations may include sending a communication from a network radio circuit of the mobile electronic device via a telecommunication network to a target electronic device.
  • the sequence of operations may include sending an emergency request data communication, sending a notification communication to a set of user contacts, initiating an emergency services phone call, playing a prerecorded emergency request message, generating an audio location message based on a current location of the mobile electronic device and playing that generated message during a phone call, and other operations.
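For illustration only, a sequence of operations like the one described above can be represented as an ordered list of callables executed in turn; the operation names below are hypothetical placeholders, not the disclosed operations.

```python
# Minimal sketch (hypothetical operation callables): running a user-defined
# sequence of emergency-assistance operations in the user-specified order.
def send_emergency_request():   print("sending emergency request data communication")
def notify_contacts():          print("sending notification to user contacts")
def call_emergency_services():  print("initiating emergency services phone call")
def play_location_message():    print("playing generated audio location message")

EMERGENCY_SEQUENCE = [send_emergency_request, notify_contacts,
                      call_emergency_services, play_location_message]

def run_operations(sequence):
    for operation in sequence:   # operations execute strictly in sequence
        operation()

run_operations(EMERGENCY_SEQUENCE)
```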
  • FIG. 7 is a flowchart illustrating a user-initiated operation method 700, according to an example embodiment.
  • the user-initiated operation method 700 shows an example method that may be used to initiate a set of operations by applying a set of inputs to a mobile electronic device.
  • the user- initiated operation method 700 may be based on an association between a set of operations and a set of inputs that was previously defined by a user, such as an association previously defined using adaptive user-defined operation method 600.
  • the input sensor data set is received at the mobile electronic device.
  • the received input sensor data set is matched to a previously identified input data set, such as the input data set identified in step 620 of adaptive user-defined operation method 600.
  • the mobile electronic device may initiate the sequence of operations. The determination of whether the subsequent input action matches the input sensor data set may be based on a matching tolerance threshold.
  • the input sensor data set may include a triple tapping action, and the matching tolerance threshold may be set such that any triple tapping action completed between two seconds and four seconds matches the input data set.
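For illustration only, the sketch below shows one way a matching tolerance threshold like the one just described could be applied; the stored values and function name are assumptions introduced here.

```python
# Minimal sketch (assumed tolerance values): matching a subsequent input against a
# stored input, e.g., any triple tap completed between two and four seconds matches.
STORED_INPUT = {"action": "tap", "count": 3, "min_duration_s": 2.0, "max_duration_s": 4.0}

def matches_stored_input(action, count, duration_s, stored=STORED_INPUT):
    return (action == stored["action"]
            and count == stored["count"]
            and stored["min_duration_s"] <= duration_s <= stored["max_duration_s"])

print(matches_stored_input("tap", 3, 3.1))   # True: triple tap within the tolerance window
print(matches_stored_input("tap", 3, 5.0))   # False: too slow to match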
  • the sequence of operations includes sending a communication from the mobile electronic device via a telecommunication network to a target electronic device.
  • FIG. 8 is a flowchart illustrating an input training method 800, according to an example embodiment.
  • the user may be prompted to train the input. This may include displaying a training request for the user to repeat the input sensor data set, receiving a training input from the user, and updating a personalized perception model, such as the personalized perception model generated following the identification of the input sensor data set as an input action at step 620.
  • the updating of the personalized perception model may be based on the training input, the input sensor data set, the identified action, and the initial sensor perception model.
  • This updated personalized perception model may provide improved detection of a user’s input.
  • the user may be prompted with an image of an input action for training the detection of a user input action.
  • an action image associated with the input sensor data set may be displayed on a display of the mobile electronic device.
  • the display may include a prompt for the user to confirm the action, and an action confirmation input may be received from the user.
  • the action confirmation input may be used to confirm actions that are consistent with the input sensor data set or reject actions that are inconsistent with the input data set.
  • a user may make a mistake when generating a sensor input, reject the mistaken input when prompted at step 820, and be prompted to reenter the input.
  • the user-confirmed input may be used to train (e.g., update) the personalized perception model.
  • the user-confirmed input may also be used to complete the identification of the input data in step 620.
  • FIG. 9 is a flowchart illustrating a model transfer method 900, according to an example embodiment.
  • a transferrable model may be generated by a first mobile electronic device.
  • the transferrable model may include an association of the input sensor data set with the sequence of operations and information about sensors on the first device that are associated with the input data sensor set.
  • the transferrable model may include an association between a shaking input and a request for emergency services, along with information about the accelerometer sensor and gyroscopic sensor positioned close to an edge or corner of the first device, such as a lever-arm compensation for interpreting the sensor data relative to the center of mass of the first device.
  • the transferrable operation model may be received at a second mobile electronic device.
  • the second device may determine a sensor difference between a sensor of the first mobile electronic device and a similar sensor of the second mobile electronic device.
  • the sensor differences may be based on device size, sensor placement, device sensor sensitivities, and other sensor differences.
  • the second device may generate a transferred model based on the detected sensor differences.
  • the transferred model may include a mapping of a first lever- arm compensation for a sensor on the first device to a second lever-arm compensation for a similar sensor on the second device. The transferred model may be used by the second device to detect the input action on the second device and trigger the sequence of operations previously defined by the user.
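For illustration only, the sketch below shows one way a transferrable model could be adapted to a second device by remapping the lever-arm compensation for a similarly placed sensor while keeping the user's input-to-operation association unchanged; the model fields and offsets are hypothetical.

```python
# Minimal sketch (hypothetical model fields): adapting a transferrable operation model
# from a first device to a second device with a different sensor placement.
import numpy as np

transferrable_model = {
    "association": {"input": "shake", "operations": ["send_emergency_request"]},
    "sensor_info": {"accel_offset_m": np.array([0.04, 0.02, 0.0])},  # lever arm on the first device
}

def transfer_model(model, second_device_offset_m):
    """Copy the user-defined association as-is and replace the lever-arm compensation
    with the second device's sensor offset."""
    return {
        "association": dict(model["association"]),
        "sensor_info": {"accel_offset_m": np.asarray(second_device_offset_m, dtype=float)},
    }

second_model = transfer_model(transferrable_model, [0.06, 0.01, 0.0])
print(second_model["association"], second_model["sensor_info"]["accel_offset_m"])
```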
  • FIG. 10 illustrates the structure 1000 of a neural network 1020, according to some example embodiments.
  • the neural network 1020 takes source domain data 1010 as input, processes the source domain data 1010 using the input layer 1030; the intermediate, hidden layers 1040A, 1040B, 1040C, 1040D, and 1040E; and the output layer 1050 to generate a result 1060.
  • the neural network 1020 may be used to identify one or more steps within the adaptive intelligent terminal self-defined operations.
  • the source domain data 1010 includes a set of sensor data, and the result 1060 identifies a set of user input actions associated with the set of sensor data.
  • the source domain data 1010 includes a set of sensor data or the set of user input actions, and the result 1060 identifies a sequence of operations.
  • a neural network is a computing system based on consideration of biological neural networks of animal brains. Such systems progressively improve performance, which is referred to as learning, to perform tasks, typically without task-specific programming. For example, in image recognition, a neural network may be taught to identify images that contain an object by analyzing example images that have been tagged with a name for the object, and having learned the object and name, may use the analytic results to identify the object in untagged images.
  • a neural network is based on a collection of connected units called neurons, where each connection, called a synapse, between neurons can transmit a unidirectional signal with an activating strength that varies with the strength of the connection. The receiving neuron can activate and propagate a signal to downstream neurons connected to it, typically based on whether the combined incoming signals, which are from potentially many transmitting neurons, are of sufficient strength, where strength is a parameter.
  • Each of the layers 1030-1050 comprises one or more nodes (or “neurons”).
  • the nodes of the neural network 1020 are shown as circles or ovals in FIG. 10. Each node takes one or more input values, processes the input values using zero or more internal variables, and generates one or more output values.
  • the inputs to the input layer 1030 are values from the source domain data 1010.
  • the output of the output layer 1050 is the result 1060.
  • the intermediate layers 1040A through 1040E are referred to as “hidden” because they do not interact directly with either the input or the output and are completely internal to the neural network 1020. Though five hidden layers are shown in FIG. 10, more or fewer hidden layers may be used.
  • a model may be run against a training dataset for several epochs, in which the training dataset is repeatedly fed into the model to refine its results.
  • the entire training dataset is used to train the model.
  • Multiple epochs (e.g., iterations over the entire training dataset) may be used to train the model.
  • the number of epochs is ten, one hundred, five hundred, nine hundred, or more.
  • one or more batches of the training dataset are used to train the model.
  • the batch size ranges between 1 and the size of the training dataset while the number of epochs is any positive integer value.
  • the model parameters are updated after each batch (e.g., using gradient descent).
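For illustration only, the sketch below organizes training into epochs and batches, with a gradient-descent parameter update after each batch, as described above. The synthetic linear-regression data, learning rate, and batch size are assumptions introduced here.

```python
# Minimal sketch (synthetic data): epochs over the full training set, mini-batches
# within each epoch, and a parameter update after every batch via gradient descent.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(3)
epochs, batch_size, lr = 100, 20, 0.1
for epoch in range(epochs):                      # each epoch passes over the entire training set
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):   # parameters are updated after every batch
        idx = order[start:start + batch_size]
        grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)   # gradient of mean squared error
        w -= lr * grad
print(np.round(w, 2))  # close to the generating weights [2.0, -1.0, 0.5]
```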
  • the training dataset comprises self-labeled input examples.
  • a set of color images could be automatically converted to black-and-white images.
  • Each color image may be used as a “label” for the corresponding black-and-white image and used to train a model that colorizes black-and-white images.
  • This process is self-supervised because no additional information, outside of the original images, is used to generate the training dataset.
  • a sensor input or user action is provided by a user, and a user action within a set of user actions may be masked and the network trained to predict the masked user action based on the remaining sensor inputs or user actions.
  • Each model develops a rule or algorithm over several epochs by varying the values of one or more variables affecting the inputs so that they map more closely to a desired result, but as the training dataset may be varied, and is preferably very large, perfect accuracy and precision may not be achievable.
  • Epochs that make up a learning phase, therefore, may be set as a given number of trials or a fixed time/computing budget, or may be terminated before that number/budget is reached when the accuracy of a given model is high enough or low enough, or an accuracy plateau has been reached.
  • the learning phase may end early and use the produced model satisfying the end-goal accuracy threshold.
  • the learning phase for that model may be terminated early, although other models in the learning phase may continue training.
  • the learning phase for the given model may terminate before the epoch number/computing budget is reached.
  • models that are finalized are evaluated against testing criteria.
  • a testing dataset that includes known outputs for its inputs is fed into the finalized models to determine an accuracy of the model in handling data that it has not been trained on.
  • a false positive rate or false negative rate may be used to evaluate the models after finalization.
  • a delineation between data clusterings is used to select a model that produces the clearest bounds for its clusters of data.
  • the neural network 1020 may be a deep learning neural network, a deep convolutional neural network, a recurrent neural network, or another type of neural network.
  • a neuron is an architectural element used in data processing and artificial intelligence, particularly machine learning.
  • a neuron implements a transfer function by which a number of inputs are used to generate an output.
  • the inputs are weighted and summed, with the result compared to a threshold to determine if the neuron should generate an output signal (e.g., a 1 output) or not (e.g., a 0 output).
  • the inputs of the component neurons are modified through the training of a neural network.
  • neurons and neural networks may be constructed programmatically (e.g., via software instructions) or via specialized hardware linking each neuron to form the neural network.
  • An example type of layer in the neural network 1020 is a Long Short-Term Memory (LSTM) layer.
  • An LSTM layer includes several gates to handle input vectors (e.g., time-series data), a memory cell, and an output vector. The input gate and output gate control the information flowing into and out of the memory cell, respectively, whereas forget gates optionally remove information from the memory cell based on the inputs from linked cells earlier in the neural network. Weights and bias vectors for the various gates are adjusted over the course of a training phase, and once the training phase is complete, those weights and biases are finalized for normal operation.
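For illustration only, the sketch below steps a single LSTM cell through a short time series, showing the input, forget, and output gates acting on the memory cell as described above; the random weights and dimensions are assumptions introduced here.

```python
# Minimal sketch (random weights, single cell): the gate structure of one LSTM step.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """x: input vector; h_prev/c_prev: previous hidden and cell state;
    W, U, b: stacked weights/biases for the input, forget, output, and candidate gates."""
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c_prev + i * g          # forget gate removes old memory; input gate admits new content
    h = o * np.tanh(c)              # output gate controls what leaves the memory cell
    return h, c

rng = np.random.default_rng(2)
n_in, n_hidden = 3, 4
W = rng.normal(size=(4 * n_hidden, n_in))
U = rng.normal(size=(4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)
h = c = np.zeros(n_hidden)
for x in rng.normal(size=(5, n_in)):   # a short time series, one step per sample
    h, c = lstm_step(x, h, c, W, U, b)
print(h)
```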
  • a deep neural network is a stacked neural network, which is composed of multiple layers. The layers are composed of nodes, which are locations where computation occurs, loosely patterned on a neuron in the human brain, which fires when it encounters sufficient stimuli.
  • a node combines input from the data with a set of coefficients, or weights, that either amplify or dampen that input. Thus, the coefficients assign significance to inputs for the task the algorithm is trying to learn. These input-weight products are summed, and the sum is passed through what is called a node’s activation function, to determine whether and to what extent that signal progresses further through the network to affect the ultimate outcome.
  • a DNN uses a cascade of many layers of non-linear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input. Higher-level features are derived from lower-level features to form a hierarchical representation.
  • the layers following the input layer may be convolution layers that produce feature maps that are filtering results of the inputs and are used by the next convolution layer.
  • a regression, which is structured as a set of statistical processes for estimating the relationships among variables, can include a minimization of a cost function.
  • the cost function may be implemented as a function to return a number representing how well the neural network performed in mapping training examples to correct output.
  • backpropagation is a common method of training artificial neural networks that are used with an optimization method such as a stochastic gradient descent (SGD) method.
  • When an input is presented to the neural network, it is propagated forward through the neural network, layer by layer, until it reaches the output layer.
  • the output of the neural network is then compared to the desired output, using the cost function, and an error value is calculated for each of the nodes in the output layer.
  • the error values are propagated backwards, starting from the output, until each node has an associated error value which roughly represents its contribution to the original output. Backpropagation can use these error values to calculate the gradient of the cost function with respect to the weights in the neural network.
  • the calculated gradient is fed to the selected optimization method to update the weights to attempt to minimize the cost function.
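For illustration only, the sketch below trains a tiny two-layer network with forward propagation, per-node output errors, backward propagation of those errors, and a stochastic-gradient-descent weight update, mirroring the steps described above. The synthetic target, layer sizes, and learning rate are assumptions introduced here, and the output-layer error uses the sigmoid/cross-entropy form, for which the gradient simplifies to (out - y).

```python
# Minimal sketch (tiny network, synthetic data): forward pass, output error,
# backpropagated error values, and SGD weight updates.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # simple nonlinear target

W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 0.5
for step in range(2000):
    i = rng.integers(len(X), size=10)                       # stochastic mini-batch
    h = np.tanh(X[i] @ W1 + b1)                             # forward pass, hidden layer
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))                  # forward pass, output layer
    err_out = out - y[i]                                    # error values at the output nodes
    err_hidden = (err_out @ W2.T) * (1 - h ** 2)            # error propagated back to hidden nodes
    W2 -= lr * h.T @ err_out / len(i); b2 -= lr * err_out.mean(axis=0)
    W1 -= lr * X[i].T @ err_hidden / len(i); b1 -= lr * err_hidden.mean(axis=0)

pred = 1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2))) > 0.5
print("training accuracy:", (pred == y).mean())
```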
  • each layer is predefined.
  • a convolution layer may contain small convolution kernels and their respective convolution parameters, and a summation layer may calculate the sum, or the weighted sum, of two or more values. Training assists in defining the weight coefficients for the summation.
  • One way to improve the performance of DNNs is to identify newer structures for the feature-extraction layers, and another way is by improving the way the parameters are identified at the different layers for accomplishing a desired task.
  • In a given neural network, there may be millions of parameters to be optimized. Trying to optimize all these parameters from scratch may take hours, days, or even weeks, depending on the amount of computing resources available and the amount of data in the training set.
  • FIG. 11 is a diagram 1100 illustrating a representative software architecture, which may be used in conjunction with various device hardware described herein, according to some example embodiments.
  • FIG. 11 is merely a non-limiting example of a software architecture 1102 and it will be appreciated that many other architectures may be implemented to facilitate the functionality described herein.
  • the software architecture 1102 may be executing on hardware such as computing device 1200 of FIG. 12 that includes, among other things, processor 1205, memory 1210, storage 1215 and 1220, and I/O interfaces 1225 and 1230.
  • a representative hardware layer 1104 is illustrated and can represent, for example, the computing device 1200 of FIG. 12.
  • the representative hardware layer 1104 comprises one or more processing units 1106 having associated executable instructions 1108.
  • Executable instructions 1108 represent the executable instructions of the software architecture 1102, including implementation of the methods, modules, and so forth of FIG. 1 through FIG.
  • Hardware layer 1104 also includes memory and/or storage modules 1110, which also have executable instructions 1108. Hardware layer 1104 may also comprise other hardware 1112, which represents any other hardware of the hardware layer 1104, such as the other hardware illustrated as part of computing device 1200.
  • the software architecture 1102 may be conceptualized as a stack of layers where each layer provides particular functionality.
  • the software architecture 1102 may include layers such as an operating system 1114, libraries 1116, frameworks/middleware 1118, applications 1120, and presentation layer 1144.
  • the applications 1120 and/or other components within the layers may invoke application programming interface (API) calls 1124 through the software stack and receive a response, returned values, and so forth illustrated as messages 1126 in response to the API call 1124.
  • the layers illustrated in FIG. 11 are representative in nature and not all software architectures 1102 have all layers. For example, some mobile or special purpose operating systems may not provide frameworks/middleware 1118, while others may provide such a layer. Other software architectures may include additional or different layers.
  • the operating system 1114 may manage hardware resources and provide common services.
  • the operating system 1114 may include, for example, a kernel 1128, services 1130, and drivers 1132.
  • the kernel 1128 may act as an abstraction layer between the hardware and the other software layers. For example, kernel 1128 may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on.
  • the services 1130 may provide other common services for the other software layers.
  • the drivers 1132 may be responsible for controlling or interfacing with the underlying hardware.
  • the drivers 1132 may include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth, depending on the hardware configuration.
  • the libraries 1116 may provide a common infrastructure that may be utilized by the applications 1120 and/or other components and/or layers.
  • the libraries 1116 typically provide functionality that allows other software modules to perform tasks more easily than interfacing directly with the underlying operating system 1114 functionality (e.g., kernel 1128, services 1130, and drivers 1132).
  • the libraries 1116 may include system libraries 1134 (e.g., C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like.
  • the libraries 1116 may include API libraries 1136 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG), graphics libraries (e.g., an OpenGL framework that may be used to render 2D and 3D graphic content on a display), database libraries (e.g., SQLite that may provide various relational database functions), web libraries (e.g., WebKit that may provide web browsing functionality), and the like.
  • the libraries 1116 may also include a wide variety of other libraries 1138 to provide many other APIs to the applications 1120 and other software components/modules.
  • the frameworks/middleware 1118 may provide a higher-level common infrastructure that may be utilized by the applications 1120 and/or other software components/modules.
  • the frameworks/middleware 1118 may provide various graphical user interface (GUI) functions, high-level resource management, high-level location services, and so forth.
  • the frameworks/middleware 1118 may provide a broad spectrum of other APIs that may be utilized by the applications 1120 and/or other software components/modules, some of which may be specific to a particular operating system 1114 or platform.
  • the applications 1120 include built-in applications 1140, third-party applications 1142, and container module 1160.
  • the container module 1160 can include an application. Examples of representative built-in applications 1140 may include but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, and/or a game application. Third-party applications 1142 may include any of the built-in applications 1140 as well as a broad assortment of other applications.
  • the third-party application 1142 may be mobile software running on a mobile operating system such as iOS™, Android™, Windows® Phone, or other mobile operating systems.
  • the third-party application 1142 may invoke the API calls 1124 provided by the mobile operating system such as operating system 1114 to facilitate the functionality described herein.
  • the applications 1120 may utilize built-in operating system functions (e.g., kernel 1128, services 1130, and drivers 1132), libraries (e.g., system libraries 1134, API libraries 1136, and other libraries 1138), and frameworks/middleware 1118 to create user interfaces to interact with users of the system. Alternatively, or additionally, in some systems, interactions with a user may occur through a presentation layer, such as presentation layer 1144. In these systems, the application/module “logic” can be separated from the aspects of the application/module that interact with a user. [00191] Some software architectures utilize virtual machines. In the example of FIG. 11, this is illustrated by virtual machine 1148.
  • a virtual machine creates a software environment, where applications/modules can execute as if they were executing on a hardware machine (such as the computing device 1200 of FIG. 12, for example).
  • a virtual machine 1148 is hosted by a host operating system (operating system 1114 in FIG. 11) and typically, although not always, has a virtual machine monitor 1146, which manages the operation of the virtual machine 1148 as well as the interface with the host operating system (i.e., operating system 1114).
  • a software architecture 1102 executes within the virtual machine 1148 such as an operating system 1150, libraries 1152, frameworks/middleware 1154, applications 1156, and/or presentation layer 1158. These layers of software architecture executing within the virtual machine 1148 can be the same as corresponding layers previously described or may be different.
  • FIG. 12 is a diagram of a computing device 1200 that implements algorithms and performs methods described herein, according to some example embodiments. All components need not be used in various embodiments. For example, clients, servers, and cloud-based network devices may each use a distinct set of components, or in the case of servers, larger storage devices.
  • One example computing device in the form of computing device 1200 may include a processor 1205, memory 1210, a removable storage 1215, a non-removable storage 1220, an input interface 1225, an output interface 1230, and a communication interface 1235, all connected by a bus 1240.
  • Although the example computing device is illustrated and described as the computer 1200, the computing device may be in different forms in different embodiments.
  • the memory 1210 may include volatile memory 1245 and nonvolatile memory 1250 and may store a program 1255.
  • the program 1255 may include instructions to implement the systems and methods for adaptive intelligent terminal self-defined operations 1260 described herein.
  • the computer 1200 may include or have access to a computing environment that includes a variety of computer-readable media, such as the volatile memory 1245, the nonvolatile memory 1250, the removable storage 1215, and the non-removable storage 1220.
  • the memory 1210 includes random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium capable of storing computer-readable instructions.
  • Computer-readable instructions stored on a computer-readable medium (e.g., the program 1255 stored in memory 1210) are executable by the processor 1205 of the computer 1200.
  • Program 1255 may utilize one or more modules discussed herein.
  • a hard drive, CD-ROM, and RAM are some examples of articles including a non-transitory computer-readable medium such as a storage device.
  • the terms “computer-readable medium” and “storage device” do not include carrier waves to the extent that carrier waves are deemed too transitory.
  • “Computer-readable non-transitory media” includes all types of computer-readable media, including magnetic storage media, optical storage media, flash media, and solid-state storage media.
  • software can be installed on and sold with a computer. Alternatively, the software can be obtained and loaded into the computer, including obtaining the software through a physical medium or distribution system, including, for example, from a server owned by the software creator or from a server not owned but used by the software creator. The software can be stored on a server for distribution over the Internet, for example.
  • the terms “computer-readable medium” and “machine-readable medium” are interchangeable.
  • the computer 1200 includes means for retrieving application data from a computing device of the plurality of computing devices, the application data including an application identification (ID), and a first application version number of an application executing on the computing device.
  • the computer 1200 further includes means for updating a first database table using object type information associated with the application ID and the first application version number, the object type information identifying a database table schema of a data object used by the application, and a plurality of data fields of the data object.
  • the computer 1200 further includes means for synchronizing the data object using synchronization data for the plurality of data fields received from the second computing device to generate a synchronized data object.
  • the computer 1200 further includes means for receiving a second application version number from a second computing device, the second application version number associated with the application executing on the second computing device, in response to a notification of the synchronized data object communicated to the second computing device.
  • the computer 1200 further includes means for selecting one or more of the plurality of data fields of the synchronized data object, based on the second application version number, and means for communicating data stored by the one or more of the plurality of data fields to the third computing device for synchronization.
  • the computer 1200 may include other or additional modules for performing any one or combination of steps described in the embodiments. Further, any of the additional or alternative embodiments or aspects of the method, as shown in any of the figures or recited in any of the claims, are also contemplated to include similar modules.
  • Any one or more of the modules described herein may be implemented using hardware (e.g., a processor of a machine, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or any suitable combination thereof). Moreover, any two or more of these modules may be combined into a single module, and the functions described herein for a single module may be subdivided among multiple modules. Furthermore, according to various example embodiments, modules described herein as being implemented within a single machine, database, or device may be distributed across multiple machines, databases, or devices.
  • the software can be obtained and loaded into one or more computing devices, including obtaining software through a physical medium or distribution system, including, for example, from a server owned by the software creator or from a server not owned but used by the software creator.
  • the software can be stored on a server for distribution over the Internet, for example.
  • the components of the illustrative devices, systems, and methods employed per the illustrated embodiments can be implemented, at least in part, in digital electronic circuitry, analog electronic circuitry, computer hardware, firmware, software, or combinations of them. These components can be implemented, for example, as a computer program product such as a computer program, program code, or computer instructions tangibly embodied in an information carrier, or a machine-readable storage device, for execution by, or to control the operation of, data processing apparatus such as a programmable processor, a computer, or multiple computers.
  • a computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or another unit suitable for use in a computing environment.
  • a computer program can be deployed to be executed on one computer or multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
  • functional programs, codes, and code segments for accomplishing the techniques described herein can be easily construed as within the scope of the claims by programmers skilled in the art to which the techniques described herein pertain.
  • Method steps associated with the illustrative embodiments can be performed by one or more programmable processors executing a computer program, code, or instructions to perform functions (e.g., by operating on input data and/or generating an output). Method steps can also be performed by special purpose logic circuitry, and apparatus for performing the methods can be implemented as special purpose logic circuitry, such as an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit).
  • a general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • a processor may also be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read-only memory or a random-access memory, or both.
  • the required elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, such as magnetic, magneto-optical disks, or optical disks.
  • Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile or non-transitory memory, including by way of example, semiconductor memory devices, such as electrically programmable read-only memory (EPROM), electrically erasable programmable ROM (EEPROM), flash memory devices, and data storage disks (e.g., magnetic disks, internal hard disks, removable disks, magneto-optical disks, CD-ROM disks, or DVD-ROM disks).
  • machine-readable medium indicates a device able to store instructions and data temporarily or permanently and may include, but is not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., Erasable Programmable Read-Only Memory (EEPROM)), and/or any suitable combination thereof.
  • machine-readable medium should be taken to include a single medium or multiple media (e.g., a centralized or distributed database or associated caches and servers) able to store processor instructions.
  • machine-readable medium shall also be taken to include any medium (or a combination of multiple media) that is capable of storing instructions for execution by one or more processors 1205, such that the instructions, when executed by one or more processors 1205, cause the one or more processors 1205 to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” as used herein excludes signals per se.

Abstract

A mobile electronic device and a method for generating a user-defined operation in a mobile electronic device are provided. The method for generating the user-defined operation in a mobile electronic device includes receiving a user selection of a device action to be performed by the mobile electronic device, receiving a user specification of one or more user-performed operations to be performed in sequence, associating the device action with the one or more user-performed operations, and initiating a learning mode and learning the one or more user-performed operations.

Description

GENERATING MOBILE ELECTRONIC DEVICE USER-DEFINED OPERATION
TECHNICAL FIELD
[001] The present disclosure is related to techniques associated with electronic device operations, and in particular, to techniques for instructing electronic device operations.
BACKGROUND
[002] A user may interact with a mobile computing device (e.g., intelligent terminal, smart phone, tablet computer, or smart home terminal, for example) to cause an action. In an example, a user may interact with a smartphone that is ringing with an incoming call by turning the smartphone from face-up to face-down. The user may expect this action to silence the ringing or send the incoming call to voicemail.
[003] Each user of a mobile computing device may have different usage habits and expectations when interacting with the mobile computing device. For example, a user may prefer that turning over a ringing phone will result in silencing the ringing, but not rejecting the call. Enabling a user of a mobile computing device to select an action and linking that action to a device operation may improve efficiency. It may also encourage a user to engage further with the mobile computing device.
SUMMARY
[004] Various examples are now described to introduce a selection of concepts in a simplified form that is further described below in the detailed description. The Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
[005] According to a first aspect of the present disclosure, there is provided a method for generating a user-defined operation in a mobile electronic device, the method including: receiving, by the mobile electronic device, a user selection of a device action to be performed by the mobile electronic device; receiving, by the mobile electronic device, a user specification of one or more user-performed operations to be performed in sequence; associating, by the mobile electronic device, the device action with the one or more user-performed operations; and initiating, by the mobile electronic device, a learning mode and learning the one or more user-performed operations.
[006] In some aspects of the method for user-defined mobile electronic device operations, the device action including a sequence of at least one device action.
[007] In some aspects of the method for user-defined mobile electronic device operations, the method further includes receiving an input action at the mobile electronic device; determining the input action matches the one or more user-performed operations; and initiating the device action responsive to the input action matching the one or more user-performed operations.
[008] In some aspects of the method for user-defined mobile electronic device operations, the initiating the learning mode and learning the one or more user-performed actions includes providing the one or more user-performed operations to a pre-trained classifier of an initial sensor perception model, generating a first identified action at the initial sensor perception model based on the one or more user-performed operations; and matching the first identified action with a first stored action, the first stored action associated with the one or more user-performed operations.
[009] In some aspects of the method for user-defined mobile electronic device operations, the initial sensor perception model is based on device-specific capabilities of a set of environmental sensors of the mobile electronic device. [0010] In some aspects of the method for user-defined mobile electronic device operations, the method further includes generating a personalized sensor perception model based on the initial sensor perception model, the one or more user-performed operations, and the first identified action.
[0011] In some aspects of the method for user-defined mobile electronic device operations, the initiating the learning mode and learning the one or more user-performed operations further includes displaying a training request for the user to repeat the one or more user-performed operations, receiving one or more training inputs from the user; and updating the personalized sensor perception model based on the one or more training inputs, the one or more user-performed operations, the first identified action, and the initial sensor perception model.
[0012] In some aspects of the method for user-defined mobile electronic device operations, the initiating the learning mode and learning the one or more user-performed operations further includes providing the one or more user- performed operations to an adaptive classifier of the personalized sensor perception model; generating a first identified input action sequence at the personalized sensor perception model; and matching the first identified input action sequence with a first stored action sequence, the first stored action sequence associated with the one or more user-performed operations.
[0013] In some aspects of the method for user-defined mobile electronic device operations, the method further includes updating the personalized perception model based on the first identified input action sequence.
[0014] In some aspects of the method for user-defined mobile electronic device operations, a user-performed operation of the one or more user-performed operations includes an action characteristic, the action characteristic including at least one of a repetition count, a repetition frequency, a motion acceleration, or a final orientation.
[0015] In some aspects of the method for user-defined mobile electronic device operations, the device action includes at least one of sending an emergency request data communication, sending a notification communication to a set of user contacts, initiating an emergency services phone call, playing a prerecorded emergency request message, or generating and playing an audio location message based on a current location of the mobile electronic device. [0016] In some aspects of the method for user-defined mobile electronic device operations, the method further includes the mobile electronic device generating a transferrable operation model, the transferrable operation model including the association of the device action with the one or more user- performed operations; the mobile electronic device transmitting the transferrable operation model, a second mobile electronic device receiving the transferrable operation model; the second mobile electronic device determining a sensor difference between a first environmental sensor of the mobile electronic device and a second environmental sensor of the second mobile electronic device; the second mobile electronic device identifying a transferred input action based on the second environmental sensor of the second mobile electronic device and the sensor difference; and the second mobile electronic device associating the transferred input action with the device action.
[0017] In some aspects of the method for user-defined mobile electronic device operations, the method further includes displaying on a display of the mobile electronic device and responsive to receiving the one or more user- performed operations, a first action image associated with the one or more user- performed operations; and receiving a first action confirmation input from the user; wherein the associating the device action with the one or more user- performed operations is responsive to the first action confirmation input.
[0018] In some aspects of the method for user-defined mobile electronic device operations, the method further includes displaying a sequence of operations on the display of the mobile electronic device; receiving at least a first operation selection and a second operation selection on the mobile electronic device; and generating the device action based on the first operation selection and the second operation selection, the device action including the sequence of operations.
[0019] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations are performed by the user.
[0020] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations are performed by the user, each operation of the one or more user-performed operations being performed in a pre-defined order.
[0021] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations are performed by the user, each operation of the one or more user-performed operations being performed in a pre-defined order and for a pre-defined time duration.
[0022] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations including mobile electronic device physical motions. [0023] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations including mobile electronic device physical motions performed by the user.
[0024] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations including physical motions performed on an input device of the mobile electronic device by the user.
[0025] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations including physical motions performed on a touchscreen device of the mobile electronic device by the user.
[0026] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations including audio inputs inputted to the mobile electronic device.
[0027] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations including one or more audio inputs inputted to the mobile electronic device, the one or more audio inputs including one or more audio inputs generated by the user, one or more audio command inputs, or one or more non-verbal audio inputs.
[0028] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations including one or more video inputs inputted to the mobile electronic device.
[0029] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations including one or more video inputs inputted to the mobile electronic device, the one or more video inputs including a human face, or a human hand or hands.
[0030] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations including one or more images inputted to the mobile electronic device.
[0031] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations including one or more images inputted to the mobile electronic device, the one or more images including a human face or a human hand or hands. [0032] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations including user movement of the mobile electronic device performed according to a movement sequence.
[0033] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations including user movement of the mobile electronic device performed according to a movement sequence and according to a movement timing sequence.
[0034] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations including user contacts with the mobile electronic device performed according to a contact sequence.
[0035] In some aspects of the method for user-defined mobile electronic device operations, the one or more user-performed operations including user contacts with the mobile electronic device according to a contact sequence and according to a contact timing sequence.
[0036] In some aspects of the method for user-defined mobile electronic device operations, the learning mode storing user-performed operations data obtained during the learning mode, the user-performed operations data being used for training the one or more user-performed operations associated with the device action.
[0037] In some aspects of the method for user-defined mobile electronic device operations, the learning mode storing user-performed operations data obtained during the learning mode, the user-performed operations data being used for subsequently recognizing the one or more user-performed operations associated with the device action.
[0038] In some aspects of the method for user-defined mobile electronic device operations, the user-performed operations data including device velocity information of movement of the mobile electronic device.
[0039] In some aspects of the method for user-defined mobile electronic device operations, the user-performed operations data including device velocity direction information of movement of the mobile electronic device. [0040] In some aspects of the method for user-defined mobile electronic device operations, the user-performed operations data including device motion acceleration information of movement of the mobile electronic device.
[0041] In some aspects of the method for user-defined mobile electronic device operations, the user-performed operations data including device acceleration direction information of movement of the mobile electronic device. [0042] In some aspects of the method for user-defined mobile electronic device operations, the initiating the learning mode for learning the one or more user-performed operations includes prompting the user to perform at least one iteration of the one or more user-performed operations, monitoring the at least one iteration of the one or more user-performed operations using at least one device sensor of the mobile electronic device, and learning the one or more user- performed operations as performed by the user.
[0043] In some aspects of the method for user-defined mobile electronic device operations, the method further includes a preliminary step of receiving, by the mobile electronic device, an initiation input from the user, the initiation input commencing the generating of the user-defined operation.
[0044] According to a second aspect of the present disclosure, there is provided a mobile electronic device, the mobile electronic device including: a memory storing instructions; at least one environmental sensor associated with the mobile electronic device; and at least one processor in communication with the memory and the at least one environmental sensor, the at least one processor configured, upon execution of the instructions, to perform the following steps: receive a user selection of a device action to be performed by the mobile electronic device; receive a user specification of one or more user-performed operations to be performed in sequence; associate the device action with the one or more user-performed operations, and initiate a learning mode and learning the one or more user-performed operations.
[0045] In some aspects of the mobile electronic device, the device action including a sequence of at least one device action.
[0046] In some aspects of the mobile electronic device, further including the at least one processor executing the instructions to perform the following steps: receiving an input action at the mobile electronic device; determining the input action matches the one or more user-performed operations, and initiating the device action responsive to the input action matching the one or more user-performed operations.
[0047] In some aspects of the mobile electronic device, the initiating the learning mode and learning the one or more user-performed actions include providing the one or more user-performed operations to a pre-trained classifier of an initial sensor perception model; generating a first identified action at the initial sensor perception model based on the one or more user-performed operations; and matching the first identified action with a first stored action, the first stored action associated with the one or more user-performed operations. [0048] In some aspects of the mobile electronic device, the initial sensor perception model is based on device-specific capabilities of a set of environmental sensors of the mobile electronic device.
[0049] In some aspects of the mobile electronic device, further including the at least one processor executing the instructions to generate a personalized sensor perception model based on the initial sensor perception model, the one or more user-performed operations, and the first identified action.
[0050] In some aspects of the mobile electronic device, the initiating the learning mode and learning the one or more user-performed operations further including: displaying a training request for the user to repeat the one or more user-performed operations; receiving one or more training inputs from the user; and updating the personalized sensor perception model based on the one or more training inputs, the one or more user-performed operations, the first identified action, and the initial sensor perception model.
[0051] In some aspects of the mobile electronic device, the initiating the learning mode and learning the one or more user-performed operations further including: providing the one or more user-performed operations to an adaptive classifier of the personalized sensor perception model; generating a first identified input action sequence at the personalized sensor perception model; and matching the first identified input action sequence with a first stored action sequence, the first stored action sequence associated with the first sequence of operations. [0052] In some aspects of the mobile electronic device, further including the at least one processor executing the instructions to update the personalized perception model based on the first identified input action sequence.
[0053] In some aspects of the mobile electronic device, a user-performed operation of the one or more user-performed operations includes an action characteristic, the action characteristic including at least one of a repetition count, a repetition frequency, a motion acceleration, or a final orientation.
[0054] In some aspects of the mobile electronic device, the device action includes at least one of sending an emergency request data communication, sending a notification communication to a set of user contacts, initiating an emergency services phone call, playing a prerecorded emergency request message, or generating and playing an audio location message based on a current location of the mobile electronic device.
[0055] In some aspects of the mobile electronic device, further including the at least one processor executing the instructions to: generate a transferrable operation model, the transferrable operation model including the association of the device action with the one or more user-performed operations; transmit the transferrable operation model; a second mobile electronic device receiving the transferrable operation model; determine a sensor difference between the at least one environmental sensor of the mobile electronic device and a second environmental sensor of the second mobile electronic device; identify a transferred input action based on the second environmental sensor of the second mobile electronic device and the sensor difference; and associate the transferred input action with the device action.
[0056] In some aspects of the mobile electronic device, further including the at least one processor executing the instructions to: display on a display of the mobile electronic device and responsive to receiving the one or more user-performed operations, a first action image associated with the one or more user-performed operations; and receive a first action confirmation input from the user; wherein the associating the device action with the one or more user-performed operations is responsive to the first action confirmation input.
[0057] In some aspects of the mobile electronic device, further including the at least one processor executing the instructions to: display a sequence of operations on the display of the mobile electronic device; receive at least a first operation selection and a second operation selection on the mobile electronic device; and generate the device action based on the first operation selection and the second operation selection, the device action including the sequence of operations.
[0058] In some aspects of the mobile electronic device, the one or more user-performed operations are performed by the user.
[0059] In some aspects of the mobile electronic device, the one or more user-performed operations are performed by the user, each operation of the one or more user-performed operations being performed in a pre-defined order.
[0060] In some aspects of the mobile electronic device, the one or more user-performed operations are performed by the user, each operation of the one or more user-performed operations being performed in a pre-defined order and for a pre-defined time duration.
[0061] In some aspects of the mobile electronic device, the one or more user-performed operations including mobile electronic device physical motions. [0062] In some aspects of the mobile electronic device, the one or more user-performed operations including mobile electronic device physical motions performed by the user.
[0063] In some aspects of the mobile electronic device, the one or more user-performed operations including physical motions performed on an input device of the mobile electronic device by the user.
[0064] In some aspects of the mobile electronic device, the one or more user-performed operations including physical motions performed on a touchscreen device of the mobile electronic device by the user.
[0065] In some aspects of the mobile electronic device, the one or more user-performed operations including audio inputs inputted to the mobile electronic device.
[0066] In some aspects of the mobile electronic device, the one or more user-performed operations including one or more audio inputs inputted to the mobile electronic device, the one or more audio inputs including one or more audio inputs generated by the user, one or more audio command inputs, or one or more non-verbal audio inputs. [0067] In some aspects of the mobile electronic device, the one or more user-performed operations including one or more video inputs inputted to the mobile electronic device.
[0068] In some aspects of the mobile electronic device, the one or more user-performed operations including one or more video inputs inputted to the mobile electronic device, the one or more video inputs including a human face, or a human hand or hands.
[0069] In some aspects of the mobile electronic device, the one or more user-performed operations including one or more images inputted to the mobile electronic device.
[0070] In some aspects of the mobile electronic device, the one or more user-performed operations including one or more images inputted to the mobile electronic device, the one or more images including a human face or a human hand or hands.
[0071] In some aspects of the mobile electronic device, the one or more user-performed operations including user movement of the mobile electronic device performed according to a movement sequence.
[0072] In some aspects of the mobile electronic device, the one or more user-performed operations including user movement of the mobile electronic device performed according to a movement sequence and according to a movement timing sequence.
[0073] In some aspects of the mobile electronic device, the one or more user-performed operations including user contacts with the mobile electronic device performed according to a contact sequence.
[0074] In some aspects of the mobile electronic device, the one or more user-performed operations including user contacts with the mobile electronic device according to a contact sequence and according to a contact timing sequence.
[0075] In some aspects of the mobile electronic device, the learning mode storing user-performed operations data obtained during the learning mode, the user-performed operations data being used for training the one or more user-performed operations associated with the device action. [0076] In some aspects of the mobile electronic device, the learning mode storing user-performed operations data obtained during the learning mode, the user-performed operations data being used for subsequently recognizing the one or more user-performed operations associated with the device action.
[0077] In some aspects of the mobile electronic device, the user-performed operations data including device velocity information of movement of the mobile electronic device.
[0078] In some aspects of the mobile electronic device, the user-performed operations data including device velocity direction information of movement of the mobile electronic device.
[0079] In some aspects of the mobile electronic device, the user-performed operations data including device motion acceleration information of movement of the mobile electronic device.
[0080] In some aspects of the mobile electronic device, the user-performed operations data including device acceleration direction information of movement of the mobile electronic device.
[0081] In some aspects of the mobile electronic device, the initiating the learning mode for learning the one or more user-performed operations including: prompting the user to perform at least one iteration of the one or more user-performed operations; monitoring the at least one iteration of the one or more user-performed operations using at least one device sensor of the mobile electronic device; and learning the one or more user-performed operations as performed by the user.
[0082] In some aspects of the mobile electronic device, further including the at least one processor executing the instructions to receive an initiation input from the user, the initiation input causing a prompting of the user selection of the device action to be performed by the mobile electronic device and the user specification of the one or more user-performed operations to be performed in sequence.
[0083] According to a third aspect of the present disclosure, there is provided a non-transitory computer-readable media storing computer instructions that configure at least one processor, upon execution of the instructions, to perform the following steps: receive a user selection of a device action to be performed by a mobile electronic device; receive a user specification of one or more user-performed operations to be performed in sequence; associate the device action with the one or more user-performed operations, and initiate a learning mode and learning the one or more user-performed operations.
[0084] According to a fourth aspect of the present disclosure, there is provided a system for user-defined mobile electronic device operations, the system including: an input reception means configured to receive an initiation input from a user, the initiation input commencing a generating of a user-defined operation; a device action selection means configured to receive a user selection of a device action to be performed by the mobile electronic device; a user specification means configured to receive a user specification of one or more user-performed operations to be performed in sequence; an association means configured to associate the device action with the one or more user-performed operations; and a learning means configured to initiate a learning mode and learning the one or more user-performed operations.
[0085] Any one of the foregoing examples may be combined with any one or more of the other foregoing examples to create a new embodiment within the scope of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0086] In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.
[0087] FIG. 1 is a diagram illustrating user-defined device operations, according to an example embodiment.
[0088] FIG. 2 is a diagram illustrating user-defined device operations, according to another example embodiment.
[0089] FIG. 3 is a flowchart of a method for generating a user-defined operation in a mobile electronic device, according to an example embodiment. [0090] FIG. 4 is a sequence diagram illustrating a first user-defined operation, according to an example embodiment. [0091] FIG. 5 is a sequence diagram illustrating a second user-defined operation, according to an example embodiment.
[0092] FIG. 6 is a flowchart illustrating an adaptive user-defined operation method, according to an example embodiment.
[0093] FIG. 7 is a flowchart illustrating a user-initiated operation method, according to an example embodiment.
[0094] FIG. 8 is a flowchart illustrating an input training method, according to an example embodiment.
[0095] FIG. 9 is a flowchart illustrating a model transfer method, according to an example embodiment.
[0096] FIG. 10 illustrates the structure of a neural network, according to some example embodiments.
[0097] FIG. 11 is a diagram illustrating a representative software architecture, according to some example embodiments.
[0098] FIG. 12 is a diagram of a computing device that implements algorithms and performs methods described herein, according to some example embodiments.
DETAILED DESCRIPTION
[0099] It should be understood at the outset that although an illustrative implementation of one or more embodiments is provided below, the disclosed systems or methods described in connection with the figures may be implemented using any number of techniques, whether currently known or not yet in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
[00100] In the following description, reference is made to the accompanying drawings that form a part hereof, and which show, by way of illustration, embodiments which may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be used, and that structural, logical, and electrical changes may be made without departing from the scope of the disclosure. The following description of example embodiments is, therefore, not to be taken in a limited sense, and the scope of the disclosure is defined by the appended claims.
[00101] The functions or algorithms described herein may be implemented in software in one embodiment. The software may consist of computer-executable instructions stored on computer-readable media or a computer-readable storage device such as one or more non-transitory memories or other types of hardware-based storage devices, either local or networked. Further, such functions correspond to modules, which may be software, hardware, firmware, or any combination thereof. Multiple functions may be performed in one or more modules as desired, and the embodiments described are merely examples. The software may be executed on a digital signal processor, ASIC, microprocessor, or another type of processor operating on a computer system, such as a personal computer, server, or another computer system, turning such computer system into a specifically programmed machine. [00102] FIG. 1 is a diagram illustrating user-defined device operations 100, according to an example embodiment. A user of a mobile electronic device 105 may customize the operations of that device 105 to respond to a user-defined input. The device may be configured to enable a user to identify one or more user-defined operations that may be used to define a shortcut to initiate a device action or sequence of device actions. The user-defined customization of user inputs and device operations may improve the user’s efficiency in using the device, such as by improving accuracy and reliability of identifying a user’s device inputs and resulting device operations.
[00103] The one or more user-defined operations may include a single input operation or a sequence (e.g., a permutation or an ordered combination) of input operations. In an example, the input operation may include a repeated tapping 110 on the device 105. The repeated tapping 110 or a similar simple action may improve the user’s ability to remember the action or to invoke the action. In an example, if a user has a mobility disability or has injured themselves in a fall, a repeated tapping 110 may enable the user to request assistance without having to unlock a smartphone and dial an emergency contact. By enabling the user to customize input operations, the device 105 may provide improved detection of a user’s intended action.
[00104] The one or more user-defined operations may include a single operation or multiple sequential operations. In an example, a repeated tapping 110 may cause the device 105 to call emergency services 120, play a prerecorded message 130 (e.g., requesting assistance or identifying the user), report a location 140 (e.g., sending global positioning system (GPS) coordinates or generating and playing a text-to-speech message indicating a street address), and send an SOS text message 150 to one or more user-defined emergency contacts. In addition to improving the efficiency of a user interacting with a device 105, these and other operations may improve user safety and improve emergency response time.
[00105] FIG. 2 is a diagram illustrating user-defined device operations 200, according to an example embodiment. The user-defined device operations 200 show an embodiment in which the user initiates a device action or actions 210 without directly interacting with the device. The mobile electronic device 205 may include a smartphone within a vehicle, an interactive terminal within the vehicle, the computing system of the vehicle itself, or another device traveling with the vehicle. The mobile electronic device 205 may determine when a user is traveling toward a home 225, such as by determining the device has disconnected from a work Wi-Fi network, by determining the vehicle has accessed an electronic toll collection system associated with a commuting route, or by determining the device is within a geofence threshold of the home 225. [00106] In response to determining the mobile electronic device 205 is traveling toward home 225, the mobile electronic device 205 may communicate through a wireless network 215 to one or more devices within the home 225.
The communication may cause one or more operations at the home 225, such as activation of a smart garage door 220, activation of smart lock 230, playing a predetermined announcement message on a smart speaker 240, activation of a smart light 250, adjusting a smart thermostat 260 to a predetermined temperature, and activating or adjusting other smart devices 270.
[00107] In an example, the communication may include separate commands from the mobile electronic device 205 to each of the smart devices within the home 225. In another example, the communication may include a single command from the mobile electronic device 205 (e.g., “run/execute home arrival operations”) to the wireless network 215 or to a smart hub within the home 225, which may interpret the message and initiate various operations. [00108] Various smart devices may provide information from the home 225 through the wireless network 215 back to the mobile electronic device 205. The information provided back to the mobile electronic device 205 may include an acknowledgement of the instruction, a confirmation of the operations to be initiated, an indication that one or more operations could not be instructed or completed, a confirmation that the operations have been completed, a confirmation that the operations will be completed by the time the user arrives at the home 225, or other smart device information.
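For illustration only, the two communication styles described above can be sketched in Python; the payload fields, device names, and the command string are hypothetical examples chosen for this sketch and are not part of the disclosed embodiments.

    # Illustrative sketch only: a single "home arrival" command sent to a smart hub
    # versus separate per-device commands. All names and fields are hypothetical.
    import json

    def build_arrival_command(hub_id: str) -> str:
        # Single high-level command; the hub expands it into device operations.
        return json.dumps({"target": hub_id, "command": "run_home_arrival_operations"})

    def build_per_device_commands() -> list:
        # Alternative: one explicit command per smart device.
        devices = {
            "garage_door": "open",
            "front_lock": "unlock",
            "living_room_speaker": "play_arrival_announcement",
            "hall_light": "on",
            "thermostat": "set_temperature_21c",
        }
        return [json.dumps({"target": d, "command": c}) for d, c in devices.items()]

    if __name__ == "__main__":
        print(build_arrival_command("home_hub"))
        for msg in build_per_device_commands():
            print(msg)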
[00109] FIG. 3 is a flowchart of a method 300 for generating a user-defined operation in a mobile electronic device according to an example embodiment. In step 302, the mobile electronic device receives an initiation input from the user. The initiation input commences the generating of a user-defined operation in the mobile electronic device. The user then can generate a user-defined operation that performs one or more device actions. The user can subsequently associate one or more user-performed operations with a device action to be performed on (or to) the mobile electronic device.
[00110] In step 304, the mobile electronic device receives a user selection of a device action to be performed by the mobile electronic device. The device action comprises the action that the user wants the mobile electronic device to perform. The device action can include a sequence of actions. For simplicity, this disclosure will refer to a device action, although it should be understood that a device action or sequence of device actions are within the scope of this disclosure.
[00111] The device action can comprise one or more actions (i.e., a sequence of device actions) that the mobile electronic device can perform, including sending a message, sending a sequence of messages, performing one or more actions, et cetera. For example, the device action can comprise the mobile electronic device performing one or more of: sending an emergency request data communication, sending a notification communication to a set of user contacts (i.e., sending multiple e-mail messages), initiating an emergency services phone call, playing a pre-recorded emergency request message, or generating and playing an audio location message based on a current location of the mobile electronic device. Other device actions (or device action sequences) are contemplated and are within the scope of the description and claims.
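As a non-limiting sketch of how a device action or sequence of device actions might be represented in software, the following Python fragment uses hypothetical action names; the data layout is an assumption of this sketch, not a required implementation.

    # Illustrative sketch only: a device action sequence as an ordered list of
    # primitive actions, executed when the associated user input is recognized.
    from dataclasses import dataclass, field

    @dataclass
    class DeviceAction:
        name: str
        parameters: dict = field(default_factory=dict)

    @dataclass
    class DeviceActionSequence:
        actions: list  # performed in order

    emergency_sequence = DeviceActionSequence(actions=[
        DeviceAction("send_emergency_request"),
        DeviceAction("notify_contacts", {"contacts": ["contact_a", "contact_b"]}),
        DeviceAction("call_emergency_services"),
        DeviceAction("play_prerecorded_message", {"message_id": "help_request"}),
        DeviceAction("play_audio_location_message"),
    ])
    print(len(emergency_sequence.actions))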
[00112] In step 308, the mobile electronic device receives a user specification of one or more user-performed operations the user intends to associate with the device action. The one or more user-performed operations are performed by the user. In some embodiments, each operation of the one or more user-performed operations is performed in a pre-defined order. In some embodiments, each operation of the one or more user-performed operations is performed in a pre-defined order and for a pre-defined time duration.
[00113] In some embodiments, a user-performed operation of the one or more user-performed operations includes an action characteristic, the action characteristic including at least one of a repetition count, a repetition frequency, a motion acceleration, or a final orientation.
[00114] The one or more user-performed operations can include user movement of the mobile electronic device performed according to a movement sequence, for example. The one or more user-performed operations can include user movement of the mobile electronic device performed according to a movement sequence and according to a movement timing sequence.
[00115] The one or more user-performed operations can include user contacts with the mobile electronic device performed according to a contact sequence. The one or more user-performed operations can include user contacts with the mobile electronic device according to a contact sequence and according to a contact timing sequence.
[00116] The user-performed operations data includes device velocity information of movement of the mobile electronic device. The user-performed operations data includes device velocity and direction information of movement of the mobile electronic device. The user-performed operations data includes device motion acceleration information of movement of the mobile electronic device. The user-performed operations data includes device acceleration and direction information of movement of the mobile electronic device. [00117] The one or more user-performed operations can include mobile electronic device physical motions. In some embodiments, the one or more user-performed operations include mobile electronic device physical motions performed by the user. In some embodiments, this includes user contact with the exterior of the mobile electronic device. In some embodiments, the one or more user-performed operations include physical motions performed on an input device of the mobile electronic device by the user. In some embodiments, the one or more user-performed operations include physical motions performed on a touchscreen device of the mobile electronic device by the user.
[00118] For example, the one or more user-performed operations can include a sequence of tapping operations the user will perform on the mobile electronic device. The user can configure the sequence of tapping actions to initiate the device action. The user can specify tapping four times on the mobile electronic device to perform a particular device action, including tapping anywhere on the mobile electronic device, tapping on a particular input button or soft key, or tapping on a touch-sensitive display screen, for example. Further, the user specification can also specify a timing of the four taps, wherein the timing can be unique, such as the user tapping four times in the approximate timing of a series of notes of a song or according to a rhythm. Alternatively, the user can specify a sequence of other similar or dissimilar movements or actions, including specifying movements of the mobile electronic device, the orientation of the mobile electronic device, for example.
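The tapping example above (four taps performed to a rhythm) could be captured, for instance, as a tap count plus expected inter-tap intervals. The following Python sketch is illustrative only; the interval values and tolerance are hypothetical.

    # Illustrative sketch only: a four-tap specification with a rhythm, stored as
    # inter-tap intervals in seconds, and a simple timing comparison.
    from dataclasses import dataclass

    @dataclass
    class TapSpecification:
        tap_count: int
        intervals_s: list    # expected gaps between successive taps
        tolerance_s: float   # allowed deviation per gap

    four_tap_rhythm = TapSpecification(tap_count=4,
                                       intervals_s=[0.3, 0.3, 0.6],
                                       tolerance_s=0.15)

    def matches(spec: TapSpecification, tap_times_s: list) -> bool:
        # Compare observed tap timestamps against the specified rhythm.
        if len(tap_times_s) != spec.tap_count:
            return False
        gaps = [b - a for a, b in zip(tap_times_s, tap_times_s[1:])]
        return all(abs(g - e) <= spec.tolerance_s
                   for g, e in zip(gaps, spec.intervals_s))

    print(matches(four_tap_rhythm, [0.0, 0.31, 0.58, 1.2]))  # True for this input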
[00119] Alternatively, the one or more user-performed operations can include user inputs other than movement-based inputs, including audio inputs, image inputs, or video inputs. The one or more user-performed operations can include audio inputs inputted to the mobile electronic device. The one or more audio inputs can include one or more audio inputs generated by the user, one or more audio command inputs, or one or more non-verbal audio inputs, for example. Other audio inputs are contemplated and are within the scope of the description and claims.
[00120] The one or more user-performed operations can include images inputted to the mobile electronic device. The one or more images can include a human face or a human hand or hands, for example. Other images are contemplated and are within the scope of the description and claims.
[00121] The one or more user-performed operations can include video inputs inputted to the mobile electronic device. The one or more video inputs can include a human face, or a human hand or hands, for example. Other videos are contemplated and are within the scope of the description and claims.
[00122] It should be understood that the one or more user-performed operations can further comprise a mix of such user-performed operations. For example, in one possible sequence of user-defined operations, the sequence could comprise three quick taps, a whistle generated by the user for at least a predefined time period, followed by the user waving the mobile electronic device from side to side. Other such mixes of sequences are contemplated and are within the scope of the description and claims.
[00123] In step 311, the mobile electronic device associates the device action with the one or more user-performed operations. The associating enables the user to define operations of the mobile electronic device that will be performed when the mobile electronic device recognizes the one or more user-performed operations as performed by the user. The associating can be performed within the mobile electronic device and stored within the mobile electronic device in some embodiments.
[00124] In step 315, the user initiates a learning mode as a final step in the generating of the user-defined operation. The learning mode enables the mobile electronic device to recognize the one or more user-performed operations. The learning mode enables the mobile electronic device to recognize the one or more user-performed operations even though the one or more user-performed operations will not be identically performed every time by the user. For example, a tapping sequence can be learned during the learning mode, wherein the user may not tap the proper sequence with a uniform impact force or uniform timing every time, and the learning phase enables the mobile electronic device to properly interpret the user’s tapping actions, even where some variability exists in the sequence. As a result, the learning mode allows for some variation in how the user performs the one or more user-performed operations, including over time. [00125] The learning mode stores user-performed operations data obtained during the learning mode. The stored user-performed operations data is used for training the one or more user-performed operations associated with the device action. The user-performed operations data is used for subsequently recognizing the one or more user-performed operations associated with the device action.
[00126] In some embodiments, the initiating the learning mode and learning the one or more user-performed operations further comprises prompting the user to perform at least one iteration of the one or more user-performed operations, monitoring the at least one iteration of the one or more user-performed operations using at least one device sensor of the mobile electronic device, and learning the one or more user-performed operations as performed by the user.
[00127] In some embodiments, the initiating the learning mode and learning the one or more user-performed operations further comprises displaying a training request for the user to repeat the one or more user-performed operations, receiving one or more training inputs from the user, and updating the personalized sensor perception model based on the one or more training inputs, the one or more user-performed operations, the first identified action, and the initial sensor perception model.
[00128] In some embodiments, the initiating the learning mode and learning the one or more user-performed actions comprises providing the one or more user-performed operations to a pre-trained classifier of an initial sensor perception model, generating a first identified action at the initial sensor perception model based on the one or more user-performed operations, and matching the first identified action with a first stored action, the first stored action associated with the one or more user-performed operations.
[00129] As used herein, the term “classifier” indicates a classification algorithm used by a trained machine learning (ML) model. In machine learning networks, a classification algorithm can be used when the outputs are restricted to a limited set of values (e.g., text categorization, image classification, etc.). In machine learning, a “classifier” can also be referred to as an “identifier.” [00130] In some embodiments, the initiating the learning mode and learning the one or more user-performed operations further comprises providing the one or more user-performed operations to an adaptive classifier of the personalized sensor perception model, generating a first identified input action sequence at the personalized sensor perception model, and matching the first identified input action sequence with a first stored action sequence, the first stored action sequence associated with the first sequence of operations.
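Purely as an illustration of the classifier terminology above, the following Python sketch stands in for a trained classifier with a nearest-centroid lookup and then matches the identified action label with a stored action; the feature layout, centroid values, and action labels are hypothetical, and a production perception model would use a trained ML classifier instead.

    # Illustrative sketch only: a nearest-centroid stand-in for the classifier of a
    # sensor perception model, plus a lookup that matches the identified action
    # with a stored action associated with a device operation.
    import math

    CENTROIDS = {                          # mean feature vectors assumed learned offline
        "triple_tap": [3.0, 0.4, 0.1],     # [tap_count, mean_interval_s, shake_energy]
        "shake":      [0.0, 0.0, 2.5],
    }
    STORED_ACTION_TO_OPERATION = {"triple_tap": "emergency_sequence",
                                  "shake": "send_location"}

    def classify(features):
        # Return the label of the nearest centroid.
        return min(CENTROIDS, key=lambda label: math.dist(features, CENTROIDS[label]))

    identified = classify([3.0, 0.35, 0.2])
    print(identified, "->", STORED_ACTION_TO_OPERATION[identified])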
[00131] In some embodiments, the initial sensor perception model is based on device-specific capabilities of a set of environmental sensors of the first mobile electronic device. In some embodiments, the method further comprises generating a personalized sensor perception model based on the initial sensor perception model, the one or more user-performed operations, and the first identified action. In some embodiments, the method further comprises updating the personalized perception model based on the first identified input action sequence.
[00132] Subsequently, the user can control actions of the mobile electronic device according to the user-defined actions. The user can perform the one or more user-defined actions. The mobile electronic device will receive user input actions, determine the user input actions match the one or more user-performed operations, and the mobile electronic device will initiate the device action responsive to the input action matching the one or more user-performed operations.
[00133] FIG. 4 is a sequence diagram 400 illustrating a first user-defined operation method, according to an example embodiment. The sequence diagram 400 may be used by a mobile electronic device to correctly and efficiently identify the intention behind the user’s command. In particular, the sequence diagram 400 may be used to receive and interpret a set of basic actions 410 from a user at a mobile electronic device, and allow the user to designate the basic actions 410 as a trigger to initiate (e.g., run or execute a shortcut to) a sequence of operations 420. In an example, basic actions 410 may include a series of continuous taps on a device screen, which may trigger the sequence of operations 420, with the sequence of operations 420 including sending a user location and request for help to a previously designated user emergency contact. [00134] The sequence diagram 400 may be initiated by a user interacting with a mobile electronic device configured to detect the set of basic actions 410. In an example, the sequence diagram 400 may be initiated by a user tapping continuously on the mobile electronic device, and the device may detect the set of basic actions 410 and prompt the user to associate the set of basic actions 410 with a sequence of operations 420. The tapping may be performed by the user on buttons or input devices of the mobile electronic device. Alternatively, the tapping may be performed on a display screen or touch-sensitive display screen of the mobile electronic device. In another alternative, the tapping may be performed on the exterior of the mobile electronic device. The sequence diagram 400 may also be initiated by a user interacting with an application on the mobile electronic device, or by initiating a mapping setting interface (e.g., shortcut settings) within the operating system of the mobile electronic device. [00135] The detection of the set of basic actions 410 may be determined by user input, such as the user indicating when to start recording actions and when to stop recording actions. The detection of the set of basic actions 410 may be determined by device detection, such as by a device identifying a group of movements within a predetermined time interval, or such as by a device identifying an end of a group of movements, with the end movement being detected before the expiration of a defined maximum period of time elapsed from a first detected action or from a last detected action.
[00136] The detection of the set of basic actions 410 may include training the mobile electronic device to recognize the set of basic actions 410. Training may include repeating the set of basic actions 410 two or more times, storing variations between each of the repetitions, and determining a set of tolerance thresholds associated with the set of basic actions 410. In an example, a user may repeat a triple tapping action, the mobile electronic device may determine that the repetitions were all completed between two seconds and four seconds, and the device may set a threshold for the completing of the set of basic actions 410 between two seconds and four seconds. In various examples, the user may initiate a training of the set of basic actions 410, or the device may initiate the training of the set of basic actions 410. The device may initiate further training based on a detected variability. In an example, the device may detect a training action completed in two seconds and a repeated action completed after six seconds, and may prompt the user to repeat the training input with greater consistency.
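One simple way to realize the tolerance-threshold training described above is to derive an acceptance window from the spread of the training repetitions, as in the following illustrative Python sketch; the margin and spread limits are hypothetical values.

    # Illustrative sketch only: deriving a completion-time acceptance window from a
    # few training repetitions, as in the two-to-four-second example above.
    def learn_completion_window(durations_s, margin_s=0.5):
        # Store the observed spread, widened slightly, as the acceptance window.
        return (min(durations_s) - margin_s, max(durations_s) + margin_s)

    def too_inconsistent(durations_s, max_spread_s=3.0):
        # If repetitions vary too much, prompt the user to repeat the training.
        return max(durations_s) - min(durations_s) > max_spread_s

    reps = [2.4, 3.1, 3.6]                 # three training repetitions, in seconds
    print(learn_completion_window(reps))   # (1.9, 4.1)
    print(too_inconsistent([2.0, 6.0]))    # True: prompt for more consistent input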
[00137] The association of the set of basic actions 410 with the sequence of operations 420 may be stored on the device, such as in a memory of the mobile electronic device, in a memory associated with a sensor on the mobile electronic device, or a combination thereof. In an example, the set of basic actions 410 includes a tapping gesture stored on software, firmware, or hardware associated with an accelerometer sensor, the accelerometer sensor sends accelerometer data associated with a gesture indication to a central processing unit (CPU) of the mobile electronic device, and the CPU determines that the accelerometer data sufficiently matches the gesture indication and triggers the sequence of operations 420.
[00138] The basic actions 410 may include actions that may be detected based on sensors within the mobile electronic device (i.e., recognized actions that can be translated into the sequence of operations 420). These actions may include actions such as the user turning over the device, rotating the device (e.g., 90°, 180°, 270°, or 360°), shaking the device, tapping the device a predetermined number of times (e.g., two taps, three taps, or continual tapping), moving the device to a different location, the user speaking a command to the device, or other user input actions.
[00139] The basic actions 410 may be detected based on a data set generated by environmental sensors within the device. As used herein, environmental sensors include sensors associated with the device that produce an output sensor data set in response to sensing an event or change in the environment. The environmental sensors may be within the device, attached to the device, external to and in communication with the device, or otherwise associated with the device. In various examples, the environmental sensors may include an audio sensor, a gyroscopic sensor to measure a rotational rate, an accelerometer sensor to measure an acceleration rate, a magnetometer to measure a magnetic field, a proximity sensor to detect a presence or distance of an object, a location sensor to determine the location of the device, or other environmental sensor. The proximity sensor may include a sensor configured to detect the presence of one or more nearby objects without requiring physical contact. The proximity sensor may include an inductive proximity sensor, a capacitive proximity sensor, a photoelectric proximity sensor, a wireless radio proximity sensor (e.g., a received signal strength indication (RSSI) sensor), or other proximity sensor, for example. The location sensor may include a sensor configured to detect a relative or absolute position of the device. The location sensor may include a near-field beacon sensor, a Wi-Fi beacon sensor, a cellular triangulation sensor, a Global Positioning System (GPS) sensor, or other location sensor. The sensor data set may include an action characteristic, such as a repetition count, a repetition frequency, a motion acceleration, a final orientation, or other action characteristics.
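A minimal sketch of how an environmental sensor data set and its action characteristics might be held in memory is shown below in Python; the field names and units are assumptions of this sketch.

    # Illustrative sketch only: hypothetical containers for raw sensor samples and
    # the action characteristics extracted from them.
    from dataclasses import dataclass

    @dataclass
    class SensorSample:
        sensor: str          # e.g. "accelerometer", "gyroscope", "audio"
        timestamp_s: float
        values: tuple        # raw reading, e.g. (ax, ay, az)

    @dataclass
    class ActionCharacteristics:
        repetition_count: int
        repetition_frequency_hz: float
        peak_acceleration_ms2: float
        final_orientation: str   # e.g. "face_up", "face_down"

    samples = [SensorSample("accelerometer", 0.00, (0.1, 0.0, 9.8)),
               SensorSample("accelerometer", 0.02, (5.2, 0.3, 9.6))]
    characteristics = ActionCharacteristics(3, 1.5, 5.2, "face_up")
    print(characteristics)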
[00140] In some embodiments, the input sensor data set may be mapped to another sensor data set. In one example mapping embodiment, the input sensor data set includes an accelerometer sensor data set associated with a tapping gesture, and the accelerometer-based input sensor data set may be associated with an audio tapping data set with characteristics (e.g., tapping count or tapping frequency) that correspond to the tapping gesture.
[00141] As used herein, these environmental sensors include sensors that are distinguished from user input devices such as a physical keyboard, a virtual keyboard (e.g., keyboard displayed on a touch-sensitive screen), touch inputs to an application running on the device, and other graphical user interface (GUI) inputs to an application or operating system running on the device. In an embodiment, a repeated tapping input (to an application running on the device) may cause one or more actions by the application. However, it should be understood that a repeated tapping input on a device screen wallpaper (or on a blank device screen) may be interpreted as one of the basic actions 410, including where execution of the application is suspended or is not being indicated to the user.
[00142] In response to the basic actions 410, the sequence of operations 420 may include sending a communication from a network radio circuit of the mobile electronic device to a target electronic device via a telecommunication network. In various embodiments, the sequence of operations 420 may include sending an emergency request data communication, sending a notification communication to a set of user contacts, initiating an emergency services phone call, playing a prerecorded emergency request message, generating an audio location message based on a current location of the mobile electronic device and playing that audio location message during an audio call, or other operations. [00143] The detection of the user input may be improved using an initial perception model 430. The initial perception model 430 may receive sensor data related to basic actions 410, and may apply a pre-trained classifier to identify one or more basic actions 410. In an embodiment, an input sensor data set may be provided to the initial perception model 430, and the initial perception model 430 may identify an action based on the input sensor data set. For example, accelerometer data may indicate a shaking motion, and the initial perception model 430 may identify a shaking action with an associated shaking duration. In an embodiment, the initial perception model 430 may match an identified action with a stored action, where the stored action is associated with a particular sequence of operations 420. For example, the initial perception model 430 may analyze the identified shaking action and associated shaking duration, determine the identified shaking action is an intentional user input (e.g., not caused by movement in a vehicle), and match the shaking action with a stored shaking action associated with an emergency message operation.
[00144] The initial sensor perception model 430 may be based on device-specific capabilities or device-specific sensor characteristics of a set of environmental sensors within the mobile electronic device. In an example, a device may include an accelerometer sensor but not include a gyroscopic sensor, and the device-specific capabilities may indicate that a rotation action is detected using only accelerometer data. In another example, a device may include a gyroscopic sensor and accelerometer sensor positioned close to an edge or corner of the device, and the device-specific sensor characteristics may indicate a lever-arm compensation for interpreting the sensor data relative to the center of mass of the device. The device-specific capabilities or device-specific sensor characteristics may be provided by a manufacturer of the mobile electronic device. The accuracy of the device-specific capabilities or device-specific sensor characteristics may be improved by automatic calibration or manual calibration (e.g., guided user-driven calibration) of the device. [00145] The initial sensor perception model 430 may be combined with samples of user actions 440 to improve action detection and classification. This is denoted by the symbol in FIGS. 4 and 5. Samples of user actions 440 may be generated by sampling sensor data from the device, which may be used to improve action detection and improve rejection of non-action sensor data. For example, acceleration and gyroscopic sensor data received throughout the day may be sampled to differentiate typical daily device motion from actions that indicate the user is intending to recreate basic actions 410 and cause the sequence of operations 420.
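The lever-arm compensation mentioned above is standard rigid-body kinematics: the acceleration at the center of mass can be recovered from the reading at the sensor location, the angular rate, and the angular acceleration. The following Python sketch illustrates the computation with hypothetical numeric values.

    # Illustrative sketch only: lever-arm compensation mapping an accelerometer
    # reading taken near a device corner back to the device center of mass.
    # a_center = a_sensor - alpha x r - omega x (omega x r)
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def lever_arm_compensate(a_sensor, omega, alpha, r):
        return sub(sub(a_sensor, cross(alpha, r)), cross(omega, cross(omega, r)))

    a_sensor = (0.4, 0.1, 9.9)    # m/s^2 at the sensor location
    omega    = (0.0, 0.0, 2.0)    # rad/s from the gyroscope
    alpha    = (0.0, 0.0, 0.5)    # rad/s^2 (differentiated gyroscope rate)
    r        = (0.05, 0.03, 0.0)  # sensor offset from the center of mass, meters
    print(lever_arm_compensate(a_sensor, omega, alpha, r))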
[00146] FIG. 5 is a second sequence diagram 500 illustrating a second user-defined operation method, according to an example embodiment. The second sequence diagram 500 may be used to identify multiple basic actions 510. The ordered combination (e.g., permutation or action sequence) may be identified as a permutation of actions 515, which may initiate a sequence of operations 520.
[00147] The permutation of actions 515 may be based on multiple basic input actions 510. The basic input actions 510 may be well-defined and reliably identified by a pre-trained initial perception model 530. In an example, the permutation of actions 515 may be represented as A = {a1, a2, ..., an}, where a1 is the action of knocking the screen of a smart phone, a2 is the action of shaking the smart phone from side to side, and additional actions are a3, ..., an. One or more of the permutation of actions 515 may be obtained by the user’s demonstration, such as by a user providing an example knocking action. Similarly, the permutation of actions 515 may initiate a sequence of operations 520 that may be represented by O = {o1, o2, ..., om}. One or more of the sequence of operations 520 may be determined by a user input, by a suggested action, or by a suggested sequence of actions. The mapping of the permutation of actions 515 to the sequence of operations 520 may be represented as A → O. [00148] The accuracy of identification of multiple basic actions 510 may be improved by a further training of the pre-trained initial perception model 530. In an example, the pre-trained initial perception model 530 is modified based on samples of user actions 540 and samples of permutations of actions 515 to generate a personalized perception model 545. The personalized perception model 545 may improve recognition of permutations of actions 515.
[00149] The personalized perception model 545 may include an adaptive classifier, which may be used to determine the permutation of actions 515. The personalized perception model 545 may be improved further (e.g., further personalized for the user) by sampling from the permutation of actions 515. The user may provide training samples to train the personalized perception model 545, which may improve the ability of the personalized perception model 545 to identify subsequent user inputs selected from among the training samples. For example, the user may provide two-input pairs of training samples {(ai1, aj1), (ai2, aj2), ..., (ain, ajn)} that correspond with two-input actions, and when the user subsequently provides a two-action input sequence {ai, aj}, the personalized perception model 545 identifies the two-action sequence as {ai, aj} → O. Based on the further training, the personalized perception model 545 improves the ability of a device to understand a user’s intention and initiate the desired sequence of operations 520.
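The mapping A → O can be illustrated as a plain lookup from a recognized action permutation to an operation sequence, as in the following Python sketch; the action and operation names are hypothetical.

    # Illustrative sketch only: the mapping A -> O from a recognized permutation of
    # basic actions to a sequence of operations, expressed as a lookup table.
    ACTION_TO_OPERATIONS = {
        ("knock_screen", "shake_side_to_side"):
            ["send_location", "text_emergency_contact"],
        ("knock_screen", "knock_screen", "knock_screen"):
            ["call_emergency_services", "play_prerecorded_message"],
    }

    def operations_for(identified_actions):
        # Return the operation sequence for a recognized action permutation, if any.
        return ACTION_TO_OPERATIONS.get(tuple(identified_actions))

    print(operations_for(["knock_screen", "shake_side_to_side"]))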
[00150] Various identification and matching methodologies may be used for the identification of basic actions 510 or a permutation of actions 515, or for the mapping of basic actions 510 or a permutation of actions 515 to operations 520. The identification or mapping of actions may include deterministic analyses (e.g., threshold-based identification), probabilistic analyses (e.g., most likely action identification), neural network artificial intelligence inference analysis (e.g., machine learning action identification), or other identification or mapping. In an example, supervised machine learning may be used to analyze labeled accelerometer training data sets (e.g., data labeled as tapping actions or data labeled as non-tapping accelerometer data) and generate an inferential tapping function for use in the initial perception model 530. In another example, supervised machine learning may be used to analyze labeled permutation of actions 515 training data sets (e.g., data labeled as two recognized actions or data labeled as two actions to be ignored) and generate an inferential permutation detection function for use in the personalized perception model 545. Additional details of an example neural network inference analysis are discussed below with respect to FIG. 10. [00151] FIG. 6 is a flowchart illustrating an adaptive user-defined operation method 600, according to an example embodiment. The adaptive user-defined operation method 600 shows an example method that may be used to associate a set of operations with a set of inputs defined by a user.
[00152] At step 610, an input sensor data set is received at an environmental sensor of a mobile electronic device associated with a user. The input sensor data set may include at least one of an audio sensor data set, a gyroscopic sensor data set, an accelerometer sensor data set, a magnetometer data set, a proximity sensor data set, or a location sensor data set. The input sensor data set may include an action characteristic, such as a repetition count, a repetition frequency, a motion acceleration, a final orientation, or other action characteristics.
[00153] In some embodiments, the input sensor data set may be mapped to another sensor data set. In an example mapping embodiment, the input sensor data set includes an accelerometer sensor data set associated with a tapping gesture, and the accelerometer-based input sensor data set may be associated with an audio tapping data set with characteristics (e.g., tapping count or tapping frequency) that correspond to the tapping gesture.
[00154] At step 620, the input sensor data set is identified as an input action. The identification at step 620 may include application of a pre-trained classifier, which may include providing the input sensor data set to a pre-trained classifier of an initial sensor perception model, generating an identified action at the initial sensor perception model based on the input sensor data set, and matching the identified action with a stored action associated with the sequence of operations. In an example, the input data set includes accelerometer data indicative of a repeated tapping motion, the data set is matched with a previously stored tapping input, and the stored tapping input is associated with a sequence of emergency assistance operations. The initial sensor perception model may be based on device-specific capabilities of a set of environmental sensors within the mobile electronic device, and may be provided by a manufacturer of the mobile electronic device.
[00155] After identifying the input sensor data set as an input action at step 620, a personalized perception model may be generated. The personalized perception model may be generated based on the initial sensor perception model, the input sensor data set, and the identified action.
[00156] At step 630, a sequence of input actions may be identified by the user. The sequence may be identified by receiving a second input action at the mobile electronic device after receiving the initial input sensor data set, and identifying the input sensor data set and the second input action as a sequence of input actions. In an example, the receiving and identifying of the second input action may include repeating steps 610 and 620 for the second input action. The identification of additional input actions (e.g., repeating steps 610 and 620) may be repeated for additional input actions.
[00157] The identification of the sequence of input actions at step 630 may include providing the sequence of input actions to an adaptive classifier of the personalized perception model, generating an identified input action sequence at the personalized perception model based on the sequence of input actions, and matching the identified input action sequence with a stored action sequence. The stored action sequence may be associated with the sequence of operations. The personalized perception model may be updated based on the sequence of input actions and the identified input action sequence.
[00158] At step 640, the input sensor data set is associated with a sequence of operations of the mobile electronic device. The sequence of operations may include sending a communication from a network radio circuit of the mobile electronic device via a telecommunication network to a target electronic device. In various embodiments, the sequence of operations may include sending an emergency request data communication, sending a notification communication to a set of user contacts, initiating an emergency services phone call, playing a prerecorded emergency request message, generating an audio location message based on a current location of the mobile electronic device and playing that generated message during a phone call, and other operations.
[00159] FIG. 7 is a flowchart illustrating a user-initiated operation method 700, according to an example embodiment. The user-initiated operation method 700 shows an example method that may be used to initiate a set of operations by applying a set of inputs to a mobile electronic device. In an example, the user-initiated operation method 700 may be based on an association between a set of operations and a set of inputs that was previously defined by a user, such as an association previously defined using adaptive user-defined operation method 600.
[00160] At step 710, the input sensor data set is received at the mobile electronic device. At step 720, the received input sensor data set is matched to a previously identified input data set, such as the input data set identified in step 620 of adaptive user-defined operation method 600. At step 730, in response to determining the subsequent input action matches the input sensor data set, the mobile electronic device may initiate the sequence of operations. The determination of whether the subsequent input action matches the input sensor data set may be based on a matching tolerance threshold. In an example, the input sensor data set may include a triple tapping action, and the matching tolerance threshold may be set such that any triple tapping action completed between two seconds and four seconds matches the input data set. In an example, the sequence of operations includes sending a communication from the mobile electronic device via a telecommunication network to a target electronic device.
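As an illustration of steps 710 through 730, the following Python sketch applies the two-to-four-second matching tolerance described above to a detected triple tap and dispatches the associated operation sequence; the stored input definition and operation names are hypothetical.

    # Illustrative sketch only: runtime matching of a detected input against a
    # stored triple-tap definition, then dispatch of the associated operations.
    STORED_INPUT = {"kind": "tap", "count": 3, "window_s": (2.0, 4.0)}

    def matches_stored_input(kind, count, duration_s, stored=STORED_INPUT):
        low, high = stored["window_s"]
        return (kind == stored["kind"] and count == stored["count"]
                and low <= duration_s <= high)

    def on_input_detected(kind, count, duration_s, dispatch):
        if matches_stored_input(kind, count, duration_s):
            dispatch(["send_emergency_request", "notify_contacts"])

    on_input_detected("tap", 3, 2.8, dispatch=print)  # prints the operation list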
[00161] FIG. 8 is a flowchart illustrating an input training method 800, according to an example embodiment. At step 810, the user may be prompted to train the input. This may include displaying a training request for the user to repeat the input sensor data set, receiving a training input from the user, and updating a personalized perception model, such as the personalized perception model generated following the identification of the input sensor data set as an input action at step 620. The updating of the personalized perception model may be based on the training input, the input sensor data set, the identified action, and the initial sensor perception model. This updated personalized perception model may provide improved detection of a user’s input.
[00162] At step 820, the user may be prompted with an image of an input action for training the detection of a user input action. After the input sensor data set is received at the mobile electronic device, an action image associated with the input sensor data set may be displayed on a display of the mobile electronic device. The display may include a prompt for the user to confirm the action, and an action confirmation input may be received from the user. The action confirmation input may be used to confirm actions that are consistent with the input sensor data set or reject actions that are inconsistent with the input data set. In an example, a user may make a mistake when generating a sensor input, reject the mistaken input when prompted at step 820, and be prompted to reenter the input. The user-confirmed input may be used to train (e.g., update) the personalized perception model. The user-confirmed input may also be used to complete the identification of the input data in step 620.
[00163] At step 830, the user may be prompted with an image of a sequence of operations for confirmation. This may include displaying a set of operations on the display of the mobile electronic device, receiving a first operation selection and a second operation selection on the mobile electronic device, and generating the sequence of operations based on the first operation selection and the second operation selection. The user-confirmed sequence of operations may also be used to complete the association of the input sensor data set with the sequence of operations, such as the association shown at step 640. [00164] FIG. 9 is a flowchart illustrating a model transfer method 900, according to an example embodiment. At step 910, a transferrable model may be generated by a first mobile electronic device. The transferrable model may include an association of the input sensor data set with the sequence of operations and information about sensors on the first device that are associated with the input sensor data set. For example, the transferrable model may include an association between a shaking input and a request for emergency services, along with information about the accelerometer sensor and gyroscopic sensor positioned close to an edge or corner of the first device, such as a lever-arm compensation for interpreting the sensor data relative to the center of mass of the first device.
[00165] At step 920, the transferrable operation model may be received at a second mobile electronic device. At step 930, the second device may determine a sensor difference between a sensor of the first mobile electronic device and a similar sensor of the second mobile electronic device. In various embodiments, the sensor differences may be based on device size, sensor placement, device sensor sensitivities, and other sensor differences. [00166] At step 940, the second device may generate a transferred model based on the detected sensor differences. In an example, the transferred model may include a mapping of a first lever-arm compensation for a sensor on the first device to a second lever-arm compensation for a similar sensor on the second device. The transferred model may be used by the second device to detect the input action on the second device and trigger the sequence of operations previously defined by the user.
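The lever-arm compensation mentioned for the transferrable model can be illustrated with basic rigid-body kinematics: an accelerometer mounted away from the device's center of mass measures an extra centripetal term that depends on its offset. The sketch below is a simplified, assumption-laden example (it ignores angular acceleration and uses made-up offsets); it shows how the transferred model might swap the first device's lever arm for the second device's lever arm.

```python
import numpy as np

def compensate_to_center_of_mass(accel_at_sensor: np.ndarray,
                                 angular_velocity: np.ndarray,
                                 lever_arm: np.ndarray) -> np.ndarray:
    """Remove the centripetal contribution omega x (omega x r) measured at an offset sensor.

    Simplification: angular acceleration is assumed negligible over the sample interval.
    """
    centripetal = np.cross(angular_velocity, np.cross(angular_velocity, lever_arm))
    return accel_at_sensor - centripetal

# Hypothetical sensor offsets (meters) from each device's center of mass.
lever_arm_device_1 = np.array([0.03, 0.06, 0.00])   # sensor near a corner of the first device
lever_arm_device_2 = np.array([0.01, 0.02, 0.00])   # same sensor type, different placement

omega = np.array([0.0, 0.0, 8.0])                   # rad/s during a shaking input (made up)
measured = np.array([0.5, 9.8, 0.1])                # m/s^2 reported by the accelerometer

# The transferred model maps the first device's compensation to the second device's compensation.
accel_cm_device_1 = compensate_to_center_of_mass(measured, omega, lever_arm_device_1)
accel_cm_device_2 = compensate_to_center_of_mass(measured, omega, lever_arm_device_2)
print(accel_cm_device_1, accel_cm_device_2)
```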
[00167] FIG. 10 illustrates the structure 1000 of a neural network 1020, according to some example embodiments. The neural network 1020 takes source domain data 1010 as input, processes the source domain data 1010 using the input layer 1030; the intermediate, hidden layers 1040A, 1040B, 1040C, 1040D, and 1040E; and the output layer 1050 to generate a result 1060.
[00168] The neural network 1020 may be used to identify one or more steps within the adaptive intelligent terminal self-defined operations. In an embodiment, the source domain data 1010 includes a set of sensor data, and the result 1060 identifies a set of user input actions associated with the set of sensor data. In another embodiment, the source domain data 1010 includes a set of sensor data or the set of user input actions, and the result 1060 identifies a sequence of operations.
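As one non-limiting realization of the structure 1000, the sketch below builds a small fully connected network in PyTorch that takes a flattened window of sensor samples as the source domain data 1010 and outputs scores over a hypothetical set of input actions as the result 1060. The layer sizes, window dimensions, and action classes are illustrative assumptions, not a specification of the disclosed model.

```python
import torch
import torch.nn as nn

WINDOW = 50          # samples per sensor window (assumption)
CHANNELS = 6         # e.g. 3-axis accelerometer + 3-axis gyroscope
NUM_ACTIONS = 4      # hypothetical: tap, double-tap, shake, flip

# Input layer -> hidden layers -> output layer, mirroring layers 1030/1040/1050.
perception_net = nn.Sequential(
    nn.Linear(WINDOW * CHANNELS, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, NUM_ACTIONS),
)

sensor_window = torch.randn(1, WINDOW * CHANNELS)    # stand-in for real sensor data
action_scores = perception_net(sensor_window)        # result: one score per input action
predicted_action = action_scores.argmax(dim=1)
print(predicted_action.item())
```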
[00169] A neural network, sometimes referred to as an artificial neural network, is a computing system based on consideration of biological neural networks of animal brains. Such systems progressively improve performance, which is referred to as learning, to perform tasks, typically without task-specific programming. For example, in image recognition, a neural network may be taught to identify images that contain an object by analyzing example images that have been tagged with a name for the object, and having learned the object and name, may use the analytic results to identify the object in untagged images. [00170] A neural network is based on a collection of connected units called neurons, where each connection, called a synapse, between neurons can transmit a unidirectional signal with an activating strength that varies with the strength of the connection. The receiving neuron can activate and propagate a signal to downstream neurons connected to it, typically based on whether the combined incoming signals, which are from potentially many transmitting neurons, are of sufficient strength, where strength is a parameter.
[00171] Each of the layers 1030-1050 comprises one or more nodes (or “neurons”). The nodes of the neural network 1020 are shown as circles or ovals in FIG. 10. Each node takes one or more input values, processes the input values using zero or more internal variables, and generates one or more output values. The inputs to the input layer 1030 are values from the source domain data 1010. The output of the output layer 1050 is the result 1060. The intermediate layers 1040A through 1040E are referred to as “hidden” because they do not interact directly with either the input or the output and are completely internal to the neural network 1020. Though five hidden layers are shown in FIG. 10, more or fewer hidden layers may be used.
[00172] A model may be run against a training dataset for several epochs, in which the training dataset is repeatedly fed into the model to refine its results. In each epoch, the entire training dataset is used to train the model. Multiple epochs (e.g., iterations over the entire training dataset) may be used to train the model. In some example embodiments, the number of epochs is ten, one hundred, five hundred, nine hundred, or more. Within an epoch, one or more batches of the training dataset are used to train the model. Thus, the batch size ranges between 1 and the size of the training dataset while the number of epochs is any positive integer value. The model parameters are updated after each batch (e.g., using gradient descent).
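The epoch-and-batch regime described above can be sketched as follows; the dataset here is random stand-in data, and the batch size, epoch count, and learning rate are arbitrary assumptions. The key point mirrored from the text is that the entire dataset is visited once per epoch and the parameters are updated after every batch.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
features = torch.randn(512, 300)              # stand-in training dataset
labels = torch.randint(0, 4, (512,))
dataset = TensorDataset(features, labels)
loader = DataLoader(dataset, batch_size=32, shuffle=True)   # batch size between 1 and len(dataset)

model = nn.Sequential(nn.Linear(300, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)    # gradient-descent style updates
loss_fn = nn.CrossEntropyLoss()

EPOCHS = 10                                   # each epoch feeds the whole dataset through the model
for epoch in range(EPOCHS):
    for batch_features, batch_labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_features), batch_labels)
        loss.backward()
        optimizer.step()                      # parameters updated after each batch
```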
[00173] For self-supervised learning, the training dataset comprises self-labeled input examples. For example, a set of color images could be automatically converted to black-and-white images. Each color image may be used as a “label” for the corresponding black-and-white image and used to train a model that colorizes black-and-white images. This process is self-supervised because no additional information, outside of the original images, is used to generate the training dataset. Similarly, when a sensor input or user action is provided by a user, a user action within a set of user actions may be masked and the network trained to predict the masked user action based on the remaining sensor inputs or user actions. [00174] Each model develops a rule or algorithm over several epochs by varying the values of one or more variables affecting the inputs to map to a desired result more closely, but as the training dataset may be varied, and is preferably very large, perfect accuracy and precision may not be achievable. Epochs that make up a learning phase, therefore, may be set as a given number of trials or a fixed time/computing budget, or may be terminated before that number/budget is reached when the accuracy of a given model is high enough or low enough or an accuracy plateau has been reached. For example, if the training phase is designed to run n epochs and produce a model with at least 95% accuracy, and such a model is produced before the nth epoch, the learning phase may end early and use the produced model satisfying the end-goal accuracy threshold. Similarly, if a given model is inaccurate enough to satisfy a random chance threshold (e.g., the model is only 55% accurate in determining true/false outputs for given inputs), the learning phase for that model may be terminated early, although other models in the learning phase may continue training.
Similarly, when a given model continues to provide similar accuracy or vacillate in its results across multiple epochs - having reached a performance plateau - the learning phase for the given model may terminate before the epoch number/computing budget is reached.
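The self-labeling idea in paragraph [00173] can be made concrete with a small masked-prediction sketch: one action in a recorded sequence is hidden and the network is trained to recover it from the remaining actions. The action vocabulary, sequence length, and model architecture below are hypothetical assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary of discrete user actions (assumption, not from this disclosure).
NUM_ACTIONS = 8        # e.g. tap, double-tap, shake, flip, ...
MASK_ID = NUM_ACTIONS  # extra token id used to hide one action in a sequence
SEQ_LEN = 5

class MaskedActionPredictor(nn.Module):
    """Predicts a masked action from the remaining actions in a sequence."""
    def __init__(self, num_actions: int, embed_dim: int = 16):
        super().__init__()
        self.embed = nn.Embedding(num_actions + 1, embed_dim)  # +1 for the mask token
        self.head = nn.Linear(embed_dim, num_actions)

    def forward(self, seqs: torch.Tensor) -> torch.Tensor:
        # seqs: (batch, SEQ_LEN) of action ids; average the embeddings and classify.
        return self.head(self.embed(seqs).mean(dim=1))

# Self-labeled training data: hide one action per sequence and use it as the label.
torch.manual_seed(0)
seqs = torch.randint(0, NUM_ACTIONS, (256, SEQ_LEN))
mask_pos = torch.randint(0, SEQ_LEN, (256,))
labels = seqs[torch.arange(256), mask_pos].clone()
masked = seqs.clone()
masked[torch.arange(256), mask_pos] = MASK_ID

model = MaskedActionPredictor(NUM_ACTIONS)
optim = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(20):
    optim.zero_grad()
    loss = loss_fn(model(masked), labels)
    loss.backward()
    optim.step()
```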
[00175] Once the learning phase is complete, the models are finalized. In some example embodiments, models that are finalized are evaluated against testing criteria. In a first example, a testing dataset that includes known outputs for its inputs is fed into the finalized models to determine an accuracy of the model in handling data that it has not been trained on. In a second example, a false positive rate or false negative rate may be used to evaluate the models after finalization. In a third example, a delineation between data clusterings is used to select a model that produces the clearest bounds for its clusters of data.
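A sketch of the post-training evaluation described above is shown below for a binary decision (for example, "trained emergency gesture" versus "not"), using made-up predictions on a hypothetical held-out testing dataset; it reports accuracy along with the false positive and false negative rates mentioned in the second example.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical held-out testing data: true labels and the finalized model's predictions.
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 0])
y_pred = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = accuracy_score(y_true, y_pred)
false_positive_rate = fp / (fp + tn)   # benign inputs wrongly flagged as the trained action
false_negative_rate = fn / (fn + tp)   # trained actions the finalized model missed
print(accuracy, false_positive_rate, false_negative_rate)
```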
[00176] The neural network 1020 may be a deep learning neural network, a deep convolutional neural network, a recurrent neural network, or another type of neural network. A neuron is an architectural element used in data processing and artificial intelligence, particularly machine learning. A neuron implements a transfer function by which a number of inputs are used to generate an output. In some example embodiments, the inputs are weighted and summed, with the result compared to a threshold to determine if the neuron should generate an output signal (e.g., a 1 output) or not (e.g., a 0 output). The inputs of the component neurons are modified through the training of a neural network. One of skill in the art will appreciate that neurons and neural networks may be constructed programmatically (e.g., via software instructions) or via specialized hardware linking each neuron to form the neural network.
[00177] An example type of layer in the neural network 1020 is a Long Short-Term Memory (LSTM) layer. An LSTM layer includes several gates to handle input vectors (e.g., time-series data), a memory cell, and an output vector. The input gate and output gate control the information flowing into and out of the memory cell, respectively, whereas forget gates optionally remove information from the memory cell based on the inputs from linked cells earlier in the neural network. Weights and bias vectors for the various gates are adjusted over the course of a training phase, and once the training phase is complete, those weights and biases are finalized for normal operation.
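As a brief illustration of an LSTM layer applied to time-series sensor data, the PyTorch sketch below feeds a window of multi-channel samples through nn.LSTM and classifies the final hidden state; the dimensions, channel count, and action classes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class LstmActionClassifier(nn.Module):
    """Classifies a time series of sensor samples using an LSTM layer."""
    def __init__(self, channels: int = 6, hidden: int = 32, num_actions: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=channels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels); the gates regulate what the memory cell keeps or forgets.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])             # classify from the final hidden state

model = LstmActionClassifier()
window = torch.randn(2, 50, 6)                # two stand-in sensor windows of 50 samples each
print(model(window).shape)                    # torch.Size([2, 4])
```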
[00178] A deep neural network (DNN) is a stacked neural network, which is composed of multiple layers. The layers are composed of nodes, which are locations where computation occurs, loosely patterned on a neuron in the human brain, which fires when it encounters sufficient stimuli. A node combines input from the data with a set of coefficients, or weights, that either amplify or dampen that input. Thus, the coefficients assign significance to inputs for the task the algorithm is trying to learn. These input-weight products are summed, and the sum is passed through what is called a node’s activation function, to determine whether and to what extent that signal progresses further through the network to affect the ultimate outcome. A DNN uses a cascade of many layers of non-linear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input. Higher-level features are derived from lower-level features to form a hierarchical representation. The layers following the input layer may be convolution layers that produce feature maps that are filtering results of the inputs and are used by the next convolution layer.
[00179] In training of a DNN architecture, a regression, which is structured as a set of statistical processes for estimating the relationships among variables, can include a minimization of a cost function. The cost function may be implemented as a function to return a number representing how well the neural network performed in mapping training examples to correct output. In training, if the cost function value is not within a pre-determined range, based on the known training images, backpropagation is used, where backpropagation is a common method of training artificial neural networks that are used with an optimization method such as a stochastic gradient descent (SGD) method. [00180] Use of backpropagation can include propagation and weight updates. When an input is presented to the neural network, it is propagated forward through the neural network, layer by layer, until it reaches the output layer. The output of the neural network is then compared to the desired output, using the cost function, and an error value is calculated for each of the nodes in the output layer. The error values are propagated backwards, starting from the output, until each node has an associated error value which roughly represents its contribution to the original output. Backpropagation can use these error values to calculate the gradient of the cost function with respect to the weights in the neural network. The calculated gradient is fed to the selected optimization method to update the weights to attempt to minimize the cost function.
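The forward-pass, cost-function, and backward-pass cycle of paragraphs [00179] and [00180] can be illustrated with a hand-written example for a single hidden layer. The toy data, layer sizes, and learning rate below are assumptions; a real system would normally rely on a framework's automatic differentiation rather than manual gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy regression set: 3 input features -> 1 output (illustrative only).
X = rng.normal(size=(64, 3))
y = (X @ np.array([0.5, -1.0, 2.0])).reshape(-1, 1)

# One hidden layer with tanh activation.
W1, b1 = rng.normal(scale=0.1, size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.1, size=(8, 1)), np.zeros(1)
lr = 0.05

for step in range(500):
    # Forward pass, layer by layer.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    # Cost function: mean squared error between prediction and desired output.
    cost = np.mean((pred - y) ** 2)

    # Backward pass: propagate error values from the output layer toward the input.
    d_pred = 2.0 * (pred - y) / len(X)          # dCost/dPred
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = d_pred @ W2.T
    d_hpre = d_h * (1.0 - h ** 2)               # tanh'(x) = 1 - tanh(x)^2
    dW1 = X.T @ d_hpre
    db1 = d_hpre.sum(axis=0)

    # Stochastic-gradient-descent style weight update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```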
[00181] In some example embodiments, the structure of each layer is predefined. For example, a convolution layer may contain small convolution kernels and their respective convolution parameters, and a summation layer may calculate the sum, or the weighted sum, of two or more values. Training assists in defining the weight coefficients for the summation.
[00182] One way to improve the performance of DNNs is to identify newer structures for the feature-extraction layers, and another way is by improving the way the parameters are identified at the different layers for accomplishing a desired task. For a given neural network, there may be millions of parameters to be optimized. Trying to optimize all these parameters from scratch may take hours, days, or even weeks, depending on the amount of computing resources available and the amount of data in the training set.
[00183] One of ordinary skill in the art will be familiar with several machine learning algorithms that may be applied with the present disclosure, including linear regression, random forests, decision tree learning, neural networks, DNNs, genetic or evolutionary algorithms, and the like.
[00184] FIG. 11 is a diagram 1100 illustrating a representative software architecture, which may be used in conjunction with various device hardware described herein, according to some example embodiments. FIG. 11 is merely a non-limiting example of a software architecture 1102 and it will be appreciated that many other architectures may be implemented to facilitate the functionality described herein. The software architecture 1102 may be executing on hardware such as computing device 1200 of FIG. 12 that includes, among other things, processor 1205, memory 1210, storage 1215 and 1220, and I/O interfaces 1225 and 1230. A representative hardware layer 1104 is illustrated and can represent, for example, the computing device 1200 of FIG. 12. The representative hardware layer 1104 comprises one or more processing units 1106 having associated executable instructions 1108. Executable instructions 1108 represent the executable instructions of the software architecture 1102, including implementation of the methods, modules, and so forth of FIG. 1 through FIG. 10. Hardware layer 1104 also includes memory and/or storage modules 1110, which also have executable instructions 1108. Hardware layer 1104 may also comprise other hardware 1112, which represents any other hardware of the hardware layer 1104, such as the other hardware illustrated as part of computing device 1200.
[00185] In the example architecture of FIG. 11, the software architecture 1102 may be conceptualized as a stack of layers where each layer provides particular functionality. For example, the software architecture 1102 may include layers such as an operating system 1114, libraries 1116, frameworks/middleware 1118, applications 1120, and presentation layer 1144. Operationally, the applications 1120 and/or other components within the layers may invoke application programming interface (API) calls 1124 through the software stack and receive a response, returned values, and so forth illustrated as messages 1126 in response to the API calls 1124. The layers illustrated in FIG. 11 are representative in nature and not all software architectures 1102 have all layers. For example, some mobile or special purpose operating systems may not provide frameworks/middleware 1118, while others may provide such a layer. Other software architectures may include additional or different layers.
[00186] The operating system 1114 may manage hardware resources and provide common services. The operating system 1114 may include, for example, a kernel 1128, services 1130, and drivers 1132. The kernel 1128 may act as an abstraction layer between the hardware and the other software layers. For example, kernel 1128 may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on. The services 1130 may provide other common services for the other software layers. The drivers 1132 may be responsible for controlling or interfacing with the underlying hardware. For instance, the drivers 1132 may include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth, depending on the hardware configuration.
[00187] The libraries 1116 may provide a common infrastructure that may be utilized by the applications 1120 and/or other components and/or layers. The libraries 1116 typically provide functionality that allows other software modules to perform tasks more easily than to interface directly with the underlying operating system 1114 functionality (e.g., kernel 1128, services 1130, and drivers 1132). The libraries 1116 may include system libraries 1134 (e.g., C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. Also, the libraries 1116 may include API libraries 1136 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG), graphics libraries (e.g., an OpenGL framework that may be used to render 2D and 3D graphic content on a display), database libraries (e.g., SQLite that may provide various relational database functions), web libraries (e.g., WebKit that may provide web browsing functionality), and the like. The libraries 1116 may also include a wide variety of other libraries 1138 to provide many other APIs to the applications 1120 and other software components/modules. [00188] The frameworks/middleware 1118 (also sometimes referred to as middleware) may provide a higher-level common infrastructure that may be utilized by the applications 1120 and/or other software components/modules. For example, the frameworks/middleware 1118 may provide various graphical user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks/middleware 1118 may provide a broad spectrum of other APIs that may be utilized by the applications 1120 and/or other software components/modules, some of which may be specific to a particular operating system 1114 or platform.
[00189] The applications 1120 include built-in applications 1140, third-party applications 1142, and container module 1160. The container module 1160 can include an application. Examples of representative built-in applications 1140 may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, and/or a game application. Third-party applications 1142 may include any of the built-in applications 1140 as well as a broad assortment of other applications. In an example, the third-party application 1142 (e.g., an application developed using the Android™ or iOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as iOS™, Android™, Windows® Phone, or other mobile operating systems. In this example, the third-party application 1142 may invoke the API calls 1124 provided by the mobile operating system such as operating system 1114 to facilitate the functionality described herein.
[00190] The applications 1120 may utilize built-in operating system functions (e.g., kernel 1128, services 1130, and drivers 1132), libraries (e.g., system libraries 1134, API libraries 1136, and other libraries 1138), and frameworks/middleware 1118 to create user interfaces to interact with users of the system. Alternatively, or additionally, in some systems, interactions with a user may occur through a presentation layer, such as presentation layer 1144. In these systems, the application/module “logic” can be separated from the aspects of the application/module that interact with a user. [00191] Some software architectures utilize virtual machines. In the example of FIG. 11, this is illustrated by virtual machine 1148. A virtual machine creates a software environment where applications/modules can execute as if they were executing on a hardware machine (such as the computing device 1200 of FIG. 12, for example). A virtual machine 1148 is hosted by a host operating system (operating system 1114 in FIG. 11) and typically, although not always, has a virtual machine monitor 1146, which manages the operation of the virtual machine 1148 as well as the interface with the host operating system (i.e., operating system 1114). A software architecture 1102 executes within the virtual machine 1148 such as an operating system 1150, libraries 1152, frameworks/middleware 1154, applications 1156, and/or presentation layer 1158. These layers of software architecture executing within the virtual machine 1148 can be the same as corresponding layers previously described or may be different.
[00192] FIG. 12 is a diagram of a computing device 1200 that implements algorithms and performs methods described herein, according to some example embodiments. All components need not be used in various embodiments. For example, clients, servers, and cloud-based network devices may each use a distinct set of components, or in the case of servers, larger storage devices. [00193] One example computing device in the form of computing device 1200 (also referred to as computer system 1200 or computer 1200) may include a processor 1205, memory 1210, a removable storage 1215, a non-removable storage 1220, an input interface 1225, an output interface 1230, and a communication interface 1235, all connected by a bus 1240. Although the example computing device is illustrated and described as the computer 1200, the computing device may be in different forms in different embodiments.
[00194] The memory 1210 may include volatile memory 1245 and nonvolatile memory 1250 and may store a program 1255. The program 1255 may include instructions to implement the systems and methods for adaptive intelligent terminal self-defined operations 1260 described herein. The computer 1200 may include or have access to a computing environment that includes a variety of computer-readable media, such as the volatile memory 1245, the nonvolatile memory 1250, the removable storage 1215, and the non-removable storage 1220. The memory 1210 includes random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium capable of storing computer-readable instructions. [00195] Computer-readable instructions stored on a computer-readable medium (e.g., the program 1255 stored in memory 1210) are executable by the processor 1205 of the computer 1200. Program 1255 may utilize one or more modules discussed herein. A hard drive, CD-ROM, and RAM are some examples of articles including a non-transitory computer-readable medium such as a storage device. The terms “computer-readable medium” and “storage device” do not include carrier waves to the extent that carrier waves are deemed too transitory. “Computer-readable non-transitory media” includes all types of computer-readable media, including magnetic storage media, optical storage media, flash media, and solid-state storage media. It should be understood that software can be installed on and sold with a computer. Alternatively, the software can be obtained and loaded into the computer, including obtaining the software through a physical medium or distribution system, including, for example, from a server owned by the software creator or from a server not owned but used by the software creator. The software can be stored on a server for distribution over the Internet, for example. As used herein, the terms “computer-readable medium” and “machine-readable medium” are interchangeable.
[00196] In an example embodiment, the computer 1200 includes means for retrieving application data from a computing device of the plurality of computing devices, the application data including an application identification (ID), and a first application version number of an application executing on the computing device. The computer 1200 further includes means for updating a first database table using object type information associated with the application ID and the first application version number, the object type information identifying a database table schema of a data object used by the application, and a plurality of data fields of the data object. The computer 1200 further includes means for synchronizing the data object using synchronization data for the plurality of data fields received from the second computing device to generate a synchronized data object. The computer 1200 further includes means for receiving a second application version number from a second computing device, the second application version number associated with the application executing on the second computing device, in response to a notification of the synchronized data object communicated to the second computing device. The computer 1200 further includes means for selecting one or more of the plurality of data fields of the synchronized data object, based on the second application version number, and means for communicating data stored by the one or more of the plurality of data fields to the third computing device for synchronization. In some embodiments, the computer 1200 may include other or additional modules for performing any one or combination of steps described in the embodiments. Further, any of the additional or alternative embodiments or aspects of the method, as shown in any of the figures or recited in any of the claims, are also contemplated to include similar modules.
[00197] Any one or more of the modules described herein may be implemented using hardware (e.g., a processor of a machine, an application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), or any suitable combination thereof). Moreover, any two or more of these modules may be combined into a single module, and the functions described herein for a single module may be subdivided among multiple modules. Furthermore, according to various example embodiments, modules described herein as being implemented within a single machine, database, or device may be distributed across multiple machines, databases, or devices.
[00198] Although a few embodiments have been described in detail above, other modifications are possible. For example, the logic flows depicted in the figures do not require the order shown, or sequential order, to achieve desirable results. Other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Other embodiments may be within the scope of the following claims. [00199] It should be further understood that software including one or more computer-executable instructions that facilitate processing and operations as described above concerning any one or all of the steps of the disclosure can be installed in and sold with one or more computing devices consistent with the disclosure. Alternatively, the software can be obtained and loaded into one or more computing devices, including obtaining software through a physical medium or distribution system, including, for example, from a server owned by the software creator or from a server not owned but used by the software creator. The software can be stored on a server for distribution over the Internet, for example.
[00200] Also, it will be understood by one skilled in the art that this disclosure is not limited in its application to the details of construction and the arrangement of components outlined in the description or illustrated in the drawings. The embodiments herein are capable of other embodiments and capable of being practiced or carried out in various ways. Also, it will be understood that the phraseology and terminology used herein are for description and should not be regarded as limiting. The use of “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Unless limited otherwise, the terms “connected,” “coupled,” and “mounted,” and variations thereof herein are used broadly and encompass direct and indirect connections, couplings, and mountings. Also, the terms “connected” and “coupled,” and variations thereof are not restricted to physical or mechanical connections or couplings. Further, terms such as up, down, bottom, and top are relative and are employed to aid illustration, but are not limiting.
[00201] The components of the illustrative devices, systems, and methods employed per the illustrated embodiments can be implemented, at least in part, in digital electronic circuitry, analog electronic circuitry, computer hardware, firmware, software, or combinations of them. These components can be implemented, for example, as a computer program product such as a computer program, program code, or computer instructions tangibly embodied in an information carrier, or a machine-readable storage device, for execution by, or to control the operation of, data processing apparatus such as a programmable processor, a computer, or multiple computers.
[00202] A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or another unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or multiple computers at one site or distributed across multiple sites and interconnected by a communication network. Also, functional programs, codes, and code segments for accomplishing the techniques described herein can be easily construed as within the scope of the claims by programmers skilled in the art to which the techniques described herein pertain. Method steps associated with the illustrative embodiments can be performed by one or more programmable processors executing a computer program, code, or instructions to perform functions (e.g., by operating on input data and/or generating an output). Method steps can also be performed by special purpose logic circuitry, and apparatus for performing the methods can be implemented as special purpose logic circuitry, such as an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit).
[00203] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an ASIC, an FPGA, or another programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[00204] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random-access memory, or both. The required elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, such as magnetic, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile or non-transitory memory, including by way of example, semiconductor memory devices, such as electrically programmable read-only memory or ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory devices, and data storage disks (e.g., magnetic disks, internal hard disks, removable disks, magneto-optical disks, CD-ROM disks, or DVD-ROM disks). The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
[00205] Those of skill in the art understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
[00206] As used herein, “machine-readable medium” (or “computer-readable medium”) indicates a device able to store instructions and data temporarily or permanently and may include, but is not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., Erasable Programmable Read-Only Memory (EEPROM)), and/or any suitable combination thereof. The term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database or associated caches and servers) able to store processor instructions. The term “machine-readable medium” shall also be taken to include any medium (or a combination of multiple media) that is capable of storing instructions for execution by one or more processors 1205, such that the instructions, when executed by one or more processors 1205, cause the one or more processors 1205 to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” as used herein excludes signals per se.
[00207] Techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component, whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the scope disclosed herein.
[00208] Although the present disclosure has been described regarding features and embodiments thereof, it is evident that various modifications and combinations can be made thereto without departing from the scope of the disclosure. For example, other components may be added to, or removed from, the described systems. The specification and drawings are, accordingly, to be regarded simply as an illustration of the disclosure as defined by the appended claims, and are contemplated to cover any modifications, variations, combinations, or equivalents that fall within the scope of the present disclosure. Other aspects may be within the scope of the following claims.

Claims

CLAIMS

What is claimed is:
1. A method for generating a user-defined operation in a mobile electronic device, the method comprising: receiving, by the mobile electronic device, a user selection of a device action to be performed by the mobile electronic device; receiving, by the mobile electronic device, a user specification of one or more user-performed operations to be performed in sequence; associating, by the mobile electronic device, the device action with the one or more user-performed operations; and initiating, by the mobile electronic device, a learning mode and learning the one or more user-performed operations.
2. The method of claim 1, the device action comprising a sequence of at least one device action.
3. The method of any of claims 1-2, further comprising: receiving an input action at the mobile electronic device; determining the input action matches the one or more user-performed operations; and initiating the device action responsive to the input action matching the one or more user-performed operations.
4. The method of any of claims 1-3, the initiating the learning mode and learning the one or more user-performed operations comprising: providing the one or more user-performed operations to a pre-trained classifier of an initial sensor perception model; generating a first identified action at the initial sensor perception model based on the one or more user-performed operations; and matching the first identified action with a first stored action, the first stored action associated with the one or more user-performed operations.
5. The method of claim 4, wherein the initial sensor perception model is based on device-specific capabilities of a set of environmental sensors of the mobile electronic device.
6. The method of any of claims 4-5, further comprising generating a personalized sensor perception model based on the initial sensor perception model, the one or more user-performed operations, and the first identified action.
7. The method of claim 6, the initiating the learning mode and learning the one or more user-performed operations further comprising: displaying a training request for the user to repeat the one or more user-performed operations; receiving one or more training inputs from the user; and updating the personalized sensor perception model based on the one or more training inputs, the one or more user-performed operations, the first identified action, and the initial sensor perception model.
8. The method of any of claims 1-7, the initiating the learning mode and learning the one or more user-performed operations further comprising: providing the one or more user-performed operations to an adaptive classifier of the personalized sensor perception model; generating a first identified input action sequence at the personalized sensor perception model, and matching the first identified input action sequence with a first stored action sequence, the first stored action sequence associated with the one or more user-performed operations.
9. The method of claim 8, further comprising updating the personalized perception model based on the first identified input action sequence.
10. The method of any of claims 1-9, wherein a user-performed operation of the one or more user-performed operations includes an action characteristic, the action characteristic including at least one of a repetition count, a repetition frequency, a motion acceleration, or a final orientation.
11. The method of any of claims 1-10, wherein the device action includes at least one of sending an emergency request data communication, sending a notification communication to a set of user contacts, initiating an emergency services phone call, playing a prerecorded emergency request message, or generating and playing an audio location message based on a current location of the mobile electronic device.
12. The method of any of claims 1-11, further comprising: the mobile electronic device generating a transferrable operation model, the transferrable operation model including the association of the device action with the one or more user-performed operations; the mobile electronic device transmitting the transferrable operation model; a second mobile electronic device receiving the transferrable operation model; the second mobile electronic device determining a sensor difference between a first environmental sensor of the mobile electronic device and a second environmental sensor of the second mobile electronic device; the second mobile electronic device identifying a transferred input action based on the second environmental sensor of the second mobile electronic device and the sensor difference; and the second mobile electronic device associating the transferred input action with the device action.
13. The method of any of claims 1-12, further comprising: displaying on a display of the mobile electronic device and responsive to receiving the one or more user-performed operations, a first action image associated with the one or more user-performed operations; and receiving a first action confirmation input from the user; wherein the associating the device action with the one or more user- performed operations is responsive to the first action confirmation input.
14. The method of claim 13, further comprising: displaying a sequence of operations on the display of the mobile electronic device; receiving at least a first operation selection and a second operation selection on the mobile electronic device; and generating the device action based on the first operation selection and the second operation selection, the device action comprising the sequence of operations.
15. The method of claim 14, the one or more user-performed operations being performed by the user.
16. The method of claim 14, the one or more user-performed operations being performed by the user, each operation of the one or more user-performed operations being performed in a pre-defined order.
17. The method of claim 14, the one or more user-performed operations being performed by the user, each operation of the one or more user-performed operations being performed in a pre-defined order and for a pre-defined time duration.
18. The method of any of claims 1-17, the one or more user-performed operations comprising mobile electronic device physical motions.
19. The method of any of claims 1-17, the one or more user-performed operations comprising mobile electronic device physical motions performed by the user.
20. The method of any of claims 1-17, the one or more user-performed operations comprising physical motions performed on an input device of the mobile electronic device by the user.
21. The method of any of claims 1-17, the one or more user-performed operations comprising physical motions performed on a touchscreen device of the mobile electronic device by the user.
22. The method of any of claims 1-17, the one or more user-performed operations comprising audio inputs inputted to the mobile electronic device.
23. The method of any of claims 1-17, the one or more user-performed operations comprising one or more audio inputs inputted to the mobile electronic device, the one or more audio inputs including one or more audio inputs generated by the user, one or more audio command inputs, or one or more nonverbal audio inputs.
24. The method of any of claims 1-17, the one or more user-performed operations comprising one or more video inputs inputted to the mobile electronic device.
25. The method of any of claims 1-17, the one or more user-performed operations comprising one or more video inputs inputted to the mobile electronic device, the one or more video inputs including a human face, or a human hand or hands.
26. The method of any of claims 1-17, the one or more user-performed operations comprising one or more images inputted to the mobile electronic device.
27. The method of any of claims 1-17, the one or more user-performed operations comprising one or more images inputted to the mobile electronic device, the one or more images including a human face or a human hand or hands.
28. The method of any of claims 1-17, the one or more user-performed operations comprising user movement of the mobile electronic device performed according to a movement sequence.
29. The method of any of claims 1-17, the one or more user-performed operations comprising user movement of the mobile electronic device performed according to a movement sequence and according to a movement timing sequence.
30. The method of any of claims 1-17, the one or more user-performed operations comprising user contacts with the mobile electronic device performed according to a contact sequence.
31. The method of any of claims 1-17, the one or more user-performed operations comprising user contacts with the mobile electronic device according to a contact sequence and according to a contact timing sequence.
32. The method of any of claims 1-17, the learning mode storing user- performed operations data obtained during the learning mode, the user- performed operations data being used for training the one or more user- performed operations associated with the device action.
33. The method of any of claims 1-17, the learning mode storing user- performed operations data obtained during the learning mode, the user- performed operations data being used for subsequently recognizing the one or more user-performed operations associated with the device action.
34. The method of any of claims 1-33, the user-performed operations data including device velocity information of movement of the mobile electronic device.
35. The method of any of claims 1-33, the user-performed operations data including device velocity direction information of movement of the mobile electronic device.
36. The method of any of claims 1-33, the user-performed operations data including device motion acceleration information of movement of the mobile electronic device.
37. The method of any of claims 1-33, the user-performed operations data including device acceleration direction information of movement of the mobile electronic device.
38. The method of any of claims 1 -17, the initiating the learning mode for learning the one or more user-performed operations comprising: prompting the user to perform at least one iteration of the one or more user- performed operations; monitoring the at least one iteration of the one or more user-performed operations using at least one device sensor of the mobile electronic device; and
learning the one or more user-performed operations as performed by the user.
39. The method of any of claims 1-17, further comprising a preliminary step of receiving, by the mobile electronic device, an initiation input from the user, the initiation input commencing the generating of the user-defined operation.
40. A mobile electronic device, comprising: a memory storing instructions; at least one environmental sensor associated with the mobile electronic device; and at least one processor in communication with the memory and the at least one environmental sensor, the at least one processor configured, upon execution of the instructions, to perform the following steps: receive a user selection of a device action to be performed by the mobile electronic device; receive a user specification of one or more user-performed operations to be performed in sequence; associate the device action with the one or more user-performed operations; and initiate a learning mode and learning the one or more user- performed operations.
41. The mobile electronic device of claim 40, the device action comprising a sequence of at least one device action.
42. The mobile electronic device of any of claims 40-41, further comprising the at least one processor executing the instructions to perform the following steps: receiving an input action at the mobile electronic device; determining the input action matches the one or more user-performed operations; and initiating the device action responsive to the input action matching the one or more user-performed operations.
43. The mobile electronic device of any of claims 40-42, wherein the initiating the learning mode and learning the one or more user-performed operations comprising: providing the one or more user-performed operations to a pre-trained classifier of an initial sensor perception model; generating a first identified action at the initial sensor perception model based on the one or more user-performed operations; and matching the first identified action with a first stored action, the first stored action associated with the one or more user-performed operations.
44. The mobile electronic device of claim 43, wherein the initial sensor perception model is based on device-specific capabilities of a set of environmental sensors of the mobile electronic device.
45. The mobile electronic device of any of claims 43-44, further comprising the at least one processor executing the instructions to generate a personalized sensor perception model based on the initial sensor perception model, the one or more user-performed operations, and the first identified action.
46. The mobile electronic device of claim 45, wherein the initiating the learning mode and learning the one or more user-performed operations further comprising: displaying a training request for the user to repeat the one or more user- performed operations; receiving one or more training inputs from the user; and updating the personalized sensor perception model based on the one or more training inputs, the one or more user-performed operations, the first identified action, and the initial sensor perception model.
47. The mobile electronic device of any of claims 40-45, wherein the initiating the learning mode and learning the one or more user-performed operations further comprising: providing the one or more user-performed operations to an adaptive classifier of the personalized sensor perception model; generating a first identified input action sequence at the personalized sensor perception model; and matching the first identified input action sequence with a first stored action sequence, the first stored action sequence associated with the one or more user-performed operations.
48. The mobile electronic device of claim 47, further comprising the at least one processor executing the instructions to update the personalized perception model based on the first identified input action sequence.
49. The mobile electronic device of any of claims 40-48, wherein a user- performed operation of the one or more user-performed operations includes an action characteristic, the action characteristic including at least one of a repetition count, a repetition frequency, a motion acceleration, or a final orientation.
50. The mobile electronic device of any of claims 40-49, wherein the device action includes at least one of sending an emergency request data communication, sending a notification communication to a set of user contacts, initiating an emergency services phone call, playing a prerecorded emergency request message, or generating and playing an audio location message based on a current location of the mobile electronic device.
51. The mobile electronic device of any of claims 40-50, further comprising the at least one processor executing the instructions to: generate a transferrable operation model, the transferrable operation model including the association of the device action with the one or more user-performed operations; transmit the transferrable operation model, a second mobile electronic device receiving the transferrable operation model; determine a sensor difference between the at least one environmental sensor of the mobile electronic device and a second environmental sensor of the second mobile electronic device; identify a transferred input action based on the second environmental sensor of the second mobile electronic device and the sensor difference; and associate the transferred input action with the device action.
52. The mobile electronic device of any of claims 40-51, further comprising the at least one processor executing the instructions to: display on a display of the mobile electronic device and responsive to receiving the one or more user-performed operations, a first action image associated with the one or more user-performed operations; and receive a first action confirmation input from the user; wherein the associating the device action with the one or more user- performed operations is responsive to the first action confirmation input.
53. The mobile electronic device of claim 52, further comprising the at least one processor executing the instructions to: display a sequence of operations on the display of the mobile electronic device; receive at least a first operation selection and a second operation selection on the mobile electronic device; and generate the device action based on the first operation selection and the second operation selection, the device action comprising the sequence of operations.
54. The mobile electronic device of claim 53, the one or more user- performed operations being performed by the user.
55. The mobile electronic device of claim 53, the one or more user- performed operations being performed by the user, each operation of the one or more user-performed operations being performed in a pre-defined order.
56. The mobile electronic device of claim 53, the one or more user- performed operations being performed by the user, each operation of the one or more user-performed operations being performed in a pre-defined order and for a pre-defined time duration.
57. The mobile electronic device of any of claims 40-56, the one or more user-performed operations comprising mobile electronic device physical motions.
58. The mobile electronic device of any of claims 40-56, the one or more user-performed operations comprising mobile electronic device physical motions performed by the user.
59. The mobile electronic device of any of claims 40-56, the one or more user-performed operations comprising physical motions performed on an input device of the mobile electronic device by the user.
60. The mobile electronic device of any of claims 40-56, the one or more user-performed operations comprising physical motions performed on a touchscreen device of the mobile electronic device by the user.
61. The mobile electronic device of any of claims 40-56, the one or more user-performed operations comprising audio inputs inputted to the mobile electronic device.
62. The mobile electronic device of any of claims 40-56, the one or more user-performed operations comprising one or more audio inputs inputted to the mobile electronic device, the one or more audio inputs including one or more audio inputs generated by the user, one or more audio command inputs, or one or more non-verbal audio inputs.
63. The mobile electronic device of any of claims 40-56, the one or more user-performed operations comprising one or more video inputs inputted to the mobile electronic device.
64. The mobile electronic device of any of claims 40-56, the one or more user-performed operations comprising one or more video inputs inputted to the mobile electronic device, the one or more video inputs including a human face, or a human hand or hands.
65. The mobile electronic device of any of claims 40-56, the one or more user-performed operations comprising one or more images inputted to the mobile electronic device.
66. The mobile electronic device of any of claims 40-56, the one or more user-performed operations comprising one or more images inputted to the mobile electronic device, the one or more images including a human face or a human hand or hands.
67. The mobile electronic device of any of claims 40-56, the one or more user-performed operations comprising user movement of the mobile electronic device performed according to a movement sequence.
68. The mobile electronic device of any of claims 40-56, the one or more user-performed operations comprising user movement of the mobile electronic device performed according to a movement sequence and according to a movement timing sequence.
69. The mobile electronic device of any of claims 40-56, the one or more user-performed operations comprising user contacts with the mobile electronic device performed according to a contact sequence.
70. The mobile electronic device of any of claims 40-56, the one or more user-performed operations comprising user contacts with the mobile electronic device according to a contact sequence and according to a contact timing sequence.
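Claims 57-70 enumerate the modalities a user-performed operation can take (device motion, touchscreen contact, audio, video, still images) together with ordering and timing constraints. One way to picture a recorded specification is a list of typed steps with optional durations; the field names below are assumptions chosen purely for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OperationStep:
    """One user-performed operation in the specified sequence."""
    modality: str                  # "motion", "touch", "audio", "video", "image"
    descriptor: str                # e.g. "shake", "double_tap", "spoken keyword"
    duration_s: Optional[float]    # pre-defined time duration, if any
    order: int                     # position in the pre-defined order

# A hypothetical three-step trigger: shake, then double-tap, then speak.
specification = [
    OperationStep("motion", "shake_twice", duration_s=1.0, order=1),
    OperationStep("touch", "double_tap_screen", duration_s=None, order=2),
    OperationStep("audio", "spoken_keyword", duration_s=2.0, order=3),
]

def matches_order(steps):
    """Check that the steps were supplied in their pre-defined order."""
    return all(s.order == i + 1 for i, s in enumerate(steps))

print(matches_order(specification))   # True
```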
71. The mobile electronic device of any of claims 40-56, the learning mode storing user-performed operations data obtained during the learning mode, the user-performed operations data being used for training the one or more user-performed operations associated with the device action.
72. The mobile electronic device of any of claims 40-56, the learning mode storing user-performed operations data obtained during the learning mode, the user-performed operations data being used for subsequently recognizing the one or more user-performed operations associated with the device action.
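Claims 71-72 use the data captured during the learning mode both for training and for later recognizing the operations. The sketch below assumes the learned data is reduced to a numeric feature template compared by Euclidean distance; this is an illustrative assumption, not a method stated in the application.

```python
import math

def feature_distance(template, sample):
    """Euclidean distance between a stored feature template and a new sample."""
    return math.sqrt(sum((t - s) ** 2 for t, s in zip(template, sample)))

def recognize(sample, stored_templates, threshold=0.5):
    """Return the device action whose learned template best matches the
    incoming sensor-derived features, if it is close enough."""
    best_action, best_dist = None, float("inf")
    for action, template in stored_templates.items():
        d = feature_distance(template, sample)
        if d < best_dist:
            best_action, best_dist = action, d
    return best_action if best_dist <= threshold else None

# Learning mode stored one template per device action (hypothetical values).
templates = {"silence_phone": [0.9, 0.1, 0.4], "send_location": [0.2, 0.8, 0.7]}
print(recognize([0.85, 0.15, 0.35], templates))   # "silence_phone"
```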
73. The mobile electronic device of claim 71 or claim 72, the user-performed operations data including device velocity information of movement of the mobile electronic device.
74. The mobile electronic device of claim 71 or claim 72, the user-performed operations data including device velocity direction information of movement of the mobile electronic device.
75. The mobile electronic device of claim 71 or claim 72, the user-performed operations data including device motion acceleration information of movement of the mobile electronic device.
76. The mobile electronic device of claim 71 or claim 72, the user-performed operations data including device acceleration direction information of movement of the mobile electronic device.
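Claims 73-76 add velocity, velocity direction, acceleration, and acceleration direction of the device's movement to the stored data. With timestamped position samples these quantities can be approximated by finite differences; the sketch below uses made-up 2-D samples purely for illustration.

```python
def finite_differences(samples):
    """Approximate velocity and acceleration vectors from (t, x, y) samples."""
    velocities = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        velocities.append(((x1 - x0) / dt, (y1 - y0) / dt, t1))
    accelerations = []
    for (vx0, vy0, t0), (vx1, vy1, t1) in zip(velocities, velocities[1:]):
        dt = t1 - t0
        accelerations.append(((vx1 - vx0) / dt, (vy1 - vy0) / dt))
    return velocities, accelerations

# Hypothetical samples taken 0.1 s apart while the device is swept sideways.
samples = [(0.0, 0.00, 0.0), (0.1, 0.02, 0.0), (0.2, 0.06, 0.0)]
vel, acc = finite_differences(samples)
print(vel)   # approx [(0.2, 0.0, 0.1), (0.4, 0.0, 0.2)]: speed and direction
print(acc)   # approx [(2.0, 0.0)]: acceleration along +x
```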
77. The mobile electronic device of any of claims 40-56, the initiating the learning mode for learning the one or more user-performed operations comprising: prompting the user to perform at least one iteration of the one or more user- performed operations; monitoring the at least one iteration of the one or more user-performed operations using at least one device sensor of the mobile electronic device; and learning the one or more user-performed operations as performed by the user.
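Claim 77 spells out the learning mode as prompt, monitor with a device sensor, then learn from what was observed. The loop below is a bare illustration in which the "learning" step simply averages the captured readings into a template; the prompt, capture, and averaging functions are placeholders, not APIs from the application.

```python
def learning_mode(prompt_fn, capture_fn, iterations=3):
    """Prompt the user, monitor each iteration with a device sensor,
    and learn a template from the captured readings (illustrative only)."""
    recordings = []
    for i in range(iterations):
        prompt_fn(f"Perform the operation (attempt {i + 1} of {iterations})")
        recordings.append(capture_fn())   # one iteration's sensor readings
    # "Learning" here is simply averaging each feature across iterations.
    template = [sum(vals) / len(vals) for vals in zip(*recordings)]
    return template

# Stand-ins simulating a user shaking the device for three iterations.
fake_readings = iter([[1.0, 0.2], [0.8, 0.3], [0.9, 0.1]])
template = learning_mode(prompt_fn=print, capture_fn=lambda: next(fake_readings))
print(template)   # approx [0.9, 0.2]: learned feature template
```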
78. The mobile electronic device of any of claims 40-77, further comprising the at least one processor executing the instructions to receive an initiation input from the user, the initiation input causing a prompting of the user selection of the device action to be performed by the mobile electronic device and the user specification of the one or more user-performed operations to be performed in sequence.
79. A non-transitory computer-readable medium storing computer instructions that, upon execution, configure at least one processor to perform steps comprising: receive a user selection of a device action to be performed by a mobile electronic device; receive a user specification of one or more user-performed operations to be performed in sequence; associate the device action with the one or more user-performed operations; and initiate a learning mode and learn the one or more user-performed operations.
80. A system for generating a mobile electronic device user-defined operation, comprising: an input reception means configured to receive an initiation input from a user, the initiation input commencing a generating of a user-defined operation; a device action selection means configured to receive a user selection of a device action to be performed by the mobile electronic device; a user specification means configured to receive a user specification of one or more user-performed operations to be performed in sequence; an association means configured to associate the device action with the one or more user-performed operations; and a learning means configured to initiate a learning mode and learn the one or more user-performed operations.
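Claims 79-80 tie the pieces into one flow: an initiation input, a device-action selection, an operation specification, the association, and the learning mode. As a hypothetical end-to-end sketch using stand-in callbacks for every user-facing step:

```python
def generate_user_defined_operation(get_initiation, get_action, get_operations,
                                    run_learning_mode):
    """End-to-end flow for creating a user-defined operation (illustrative)."""
    if not get_initiation():                  # initiation input from the user
        return None
    device_action = get_action()              # user selection of device action
    operations = get_operations()             # user-specified operation sequence
    association = {"action": device_action, "operations": operations}
    association["learned_template"] = run_learning_mode(operations)
    return association

# Example run with constant stand-ins for the user-facing steps.
result = generate_user_defined_operation(
    get_initiation=lambda: True,
    get_action=lambda: "send_location_to_contact",
    get_operations=lambda: ["shake", "double_tap"],
    run_learning_mode=lambda ops: [0.5] * len(ops),
)
print(result["action"], result["operations"])
```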
PCT/US2023/018815 2023-04-17 2023-04-17 Generating mobile electronic device user-defined operation WO2024049509A1 (en)

Priority Applications (1)

Application Number: PCT/US2023/018815 (WO2024049509A1)
Priority Date: 2023-04-17
Filing Date: 2023-04-17
Title: Generating mobile electronic device user-defined operation

Applications Claiming Priority (1)

Application Number: PCT/US2023/018815 (WO2024049509A1)
Priority Date: 2023-04-17
Filing Date: 2023-04-17
Title: Generating mobile electronic device user-defined operation

Publications (1)

Publication Number: WO2024049509A1 (en)
Publication Date: 2024-03-07

Family

ID=86330378

Family Applications (1)

Application Number: PCT/US2023/018815 (WO2024049509A1)
Title: Generating mobile electronic device user-defined operation
Priority Date: 2023-04-17
Filing Date: 2023-04-17

Country Status (1)

Country Link
WO (1) WO2024049509A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160231830A1 (en) * 2010-08-20 2016-08-11 Knowles Electronics, Llc Personalized Operation of a Mobile Device Using Sensor Signatures
US20210064132A1 (en) * 2019-09-04 2021-03-04 Facebook Technologies, Llc Systems, methods, and interfaces for performing inputs based on neuromuscular control

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23722768

Country of ref document: EP

Kind code of ref document: A1