WO2015074553A1 - Method and apparatus of activating applications - Google Patents

Method and apparatus of activating applications

Info

Publication number
WO2015074553A1
Authority
WO
WIPO (PCT)
Prior art keywords
keyword
module
audio
speech recognition
application
Prior art date
Application number
PCT/CN2014/091583
Other languages
French (fr)
Inventor
Yongxin Wang
Bin Li
Jing He
Cheng Luo
Wei Yi
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2015074553A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/74 Details of telephonic subscriber devices with voice recognition means

Definitions

  • the audio recording module is switched off when the device state is not one of the pre-defined states.
  • the monitoring module 61 is configured for monitoring a device state of a mobile terminal.
  • the monitoring module 61 may monitor the real time device state by monitoring a system process in the mobile terminal.
  • the recording start module 62 is configured for starting an audio recording module when the device state obtained by the monitoring module 61 is consistent with a pre-defined state.
  • the audio recording module may be a microphone module in the mobile terminal.
  • the pre-defined state may include: any or any combination of: the screen is switched on, not in a call session, not playing audio/video, not running a full screen application.
  • the pre-defined state may be configured as needed. By configuring the pre-defined state and performing this judgment before running the audio recording module, the audio recording module is started only when the mobile terminal is being used, and unnecessary power consumption resulting from long-time running of the audio recording module can be avoided. In addition, by configuring the pre-defined state, the process of activating applications will not affect normal operation of the mobile terminal and will not interfere with the normal running of other applications.
  • the matching module 63 is configured for judging whether audio recorded by the audio recording module matches with at least one pre-defined keyword.
  • the keyword may be one or multiple words.
  • the audio recording module may keep checking input audio.
  • the matching module 63 may process the recorded audio data through speech recognition and then compare the result with the pre-defined keyword(s). If it is determined that the speech recognition result is consistent with the pre-defined keyword(s) or is contained in the pre-defined keyword(s), a determination that the recorded audio matches with the pre-defined keyword(s) is made.
  • the application activating module 64 is configured for activating an application associated with the keyword(s) when the recorded audio matches with the pre-defined keyword(s).
  • the apparatus of various examples can activate any type of application through speech recognition, which is easy to use and can avoid mistakenly activating an application.
  • the recording module is only started when the mobile terminal is in a specific state, which can avoid unnecessary power consumption and CPU occupation resulting from continuous running of the recording module and avoid disturbing normal operation of the mobile terminal.
  • the interaction module 75 is configured for providing an interaction user interface so that configuration information of an application and at least one keyword associated with the application is received from a user via an input device.
  • the interaction module 75 allows the user to configure the keyword(s) and corresponding application(s) as needed.
  • the interaction module 75 may also provide multiple groups of keywords and applications for the user to select from, and may allow the user to select multiple keywords for one application.
  • the recording ending module 76 is configured for switching off the audio recording module, after the audio recording module has been started, when the device state monitored by the monitoring module is inconsistent with the pre-defined state.
  • the apparatus allows a user to configure the keyword(s) and corresponding application(s), thus increasing the flexibility of use of the mobile terminal.
  • the audio recording module is switched off to avoid unnecessary power consumption and CPU resource occupation by the audio recording module.
  • the hardware modules may be implemented by hardware or a hardware platform with necessary software.
  • the software may include machine-readable instructions which are stored in a non-transitory storage medium.
  • the examples may be embodied as software products.
  • the hardware may be dedicated hardware or general-purpose hardware executing machine-readable instructions.
  • a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations.
  • a module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
  • a machine-readable storage medium is also provided, which is to store instructions executable by a machine to implement the method of various examples.
  • a system or apparatus may have a storage medium which stores machine-readable program codes for implementing functions of any of the above examples.
  • a computing device or a CPU or an MPU in the system or the apparatus may read and execute the program codes stored in the storage medium.
  • Computer-readable instructions may cause an operating system in a computer to implement part or all of the above described operations.
  • a non-transitory computer-readable storage medium may be a storage device in an extension board inserted in the computer or a storage in an extension unit connected to the computer.
  • Program-code-based instructions can cause a CPU or a processor installed in an extension board or an extension unit to implement part or all of the operations of any example of the present disclosure.
  • the non-transitory computer-readable storage medium for providing the program codes may include a floppy disk, a hard drive, a magneto-optical disk, a compact disk (such as CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), a magnetic tape drive, a Flash card, a ROM and so on.
  • the program code may be downloaded from a server computer via a communication network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)

Abstract

Various examples provide a method and an apparatus of activating applications. The method may include: monitoring a device state of a mobile terminal, starting recording audio when the monitored device state meets a pre-defined condition, applying speech recognition to the recorded audio, judging whether a result of the speech recognition matches with at least one pre-defined keyword, and activating an application corresponding to the at least one pre-defined keyword in response to a determination that the result of the speech recognition matches with the at least one pre-defined keyword.

Description

METHOD AND APPARATUS OF ACTIVATING APPLICATIONS
Related documents
The present application claims priority to Chinese patent application No. 201310590949.7, titled “method and apparatus of wakening applications” and filed on November 21, 2013 with the Patent Office of the People’s Republic of China, the disclosure of which is incorporated herein by reference.
Technical Field
The present invention relates to communications technology, and particularly to a method and an apparatus of activating applications.
Background
Some conventional mobile operating systems are capable of activating an application through a push service. The push service refers to a service in which a server sends a message to a target mobile terminal in real time. For example, a remote server may send a chat invitation to a mobile terminal to activate a chatting application in the mobile terminal.
Some other mobile operating systems are capable of activating an application in response to a specific motion of mobile devices. A mobile operating system may obtain the trajectory of a motion of a mobile terminal using a gyroscope and an accelerometer. A series of complex computations are applied to the trajectory, and a matched pre-defined motion is searched for based on the computation result. If the trajectory matches with a pre-defined motion, an application corresponding to the pre-defined motion is activated.
Summary
Various examples provide a method and an apparatus of activating applications to address at least one of the deficiencies of conventional mechanisms.
Various examples provide a method of activating applications. The method may include:
monitoring a device state of a mobile terminal;
starting recording audio when the monitored device state meets a pre-defined condition;
applying speech recognition to the recorded audio;
judging whether a result of the speech recognition matches with at least one pre-defined keyword; and
activating an application corresponding to the at least one pre-defined keyword in response to a determination that the result of the speech recognition matches with the at least one pre-defined keyword.
Various examples provide an apparatus of activating applications. The apparatus may include:
a monitoring module, configured for monitoring a device state of a mobile terminal;
a recording starting module, configured for starting an audio recording module when the device state obtained by the monitoring module meets a pre-defined condition;
an audio processing module, configured for applying speech recognition to audio recorded by the recording module;
a matching module, configured for judging whether a result of the speech recognition performed by the audio processing module matches with at least one pre-defined keyword; and
an application activating module, configured for activating an application corresponding to the at least one pre-defined keyword in response to a determination that the result of the speech recognition matches with the at least one pre-defined keyword.
Compared with conventional mechanisms, various examples of the present disclosure can activate any type of application by using speech recognition. The activation mechanism is easy for users to use. According to the mechanism, the recording module is only started when the mobile terminal is in a specific state, which can avoid unnecessary power consumption and CPU occupation resulting from continuous running of the recording module and avoid disturbing normal operation of the mobile terminal.
Brief Description of the Drawings
Features of the present disclosure are illustrated by way of example and not limited in the following figures, in which like numerals indicate like elements, in which:
Fig. 1 is a schematic diagram illustrating modules of a computing device;
Fig. 2 is a schematic diagram illustrating modules of an apparatus of activating applications;
Fig. 3 is a flowchart illustrating a method of activating applications in accordance with an example of the present disclosure;
Fig. 4 is a flowchart illustrating a method of activating applications in accordance with an example of the present disclosure;
Fig. 5 is a flowchart illustrating a method of activating applications in accordance with an example of the present disclosure;
Fig. 6 is a schematic diagram illustrating modules of an apparatus of activating applications in accordance with an example of the present disclosure;
Fig. 7 is a schematic diagram illustrating modules of an apparatus of activating applications in accordance with an example of the present disclosure.
Detailed Descriptions
For simplicity and illustrative purposes, the present disclosure is described by referring mainly to an example thereof. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be readily apparent, however, that the present disclosure may be practiced without limitation to these specific details. In other instances, some methods and structures have not been described in detail so as not to unnecessarily obscure the present disclosure. As used herein, the term “includes” means includes but is not limited to, and the term “including” means including but is not limited to. The term “based on” means based at least in part on. Quantities of an element, unless specifically mentioned, may be one, a plurality, or at least one.
A computing device environment suitable for an apparatus of activating applications and suitable for practicing methods of examples described herein is described with respect to Fig. 1. Fig. 1 is a schematic diagram illustrating modules of a computing device. The components of computing device 100 may include, but are not limited to, a processing unit 180, a system memory 120, and a system bus 190. The system bus 190 couples various system components including the system memory 120 to the processing unit 180. The system bus 190 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, Peripheral Component Interconnect (PCI) bus, and the like.
The processing unit 180 is the control center of the computing device 100; it interconnects all of the components in the computing device using various interfaces and circuits, and monitors the computing device by running or executing software programs and/or modules stored in the system memory 120, calling various functions of the computing device 100, and processing data. The processing unit 180 may include one or multiple processing cores. The processing unit 180 may integrate an application processor and a modem processor. The application processor mainly handles the operating system, user interfaces, application programs, etc., and the modem processor mainly handles wireless communications. The modem processor may also be a standalone processor that is not integrated into the processing unit 180.
The computing device 100 typically includes a variety of computing device-readable media. Computing device-readable media can be any available media that can be accessed by the computing device 100 and include both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computing device-readable media may comprise computing device storage media and communication media. Computing device storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computing device-readable instructions, data structures, program modules, or other data. Computing device storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computing device 100. Communication media typically embody computing device-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computing device-readable media.
The system memory 120 includes computing device storage media in the form of volatile and/or nonvolatile memory such as ROM 121 and RAM 122. A basic input/output system 123, containing the basic routines that help to transfer information between elements within computing device 100, such as during start-up, is typically stored in ROM 121. RAM 122 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 180. By way of example, and not limitation, Fig. 1 illustrates operating system 124, application programs 125, other program modules 126, and program data 127. Although the apparatus of activating applications (also referred to as application activating engine, AAE) is depicted as software in random access memory 122, other implementations of the AAE can be hardware or combinations of software and hardware.
A user may enter commands and information into the computing device 100 through input devices such as a touch pad 131. Other input devices 132 (not shown) may include a keyboard, a pointing device (commonly referred to as a mouse or trackball), a microphone, a joystick, a game pad, a satellite dish, a scanner, or the like. These and other input devices are often connected to the processing unit 180 through a user input interface that is coupled to the system bus 190, but may be connected by other interface and bus structures, such as a parallel port, a game port or a universal serial bus (USB). A display device 140 is also connected to the system bus 190 via an interface, such as a video interface. In addition to the display device 140, computing devices may also include other peripheral output devices such as speakers and a printer, which may be connected through an output peripheral interface.
The computing device 100 may also include at least one sensor 150, e.g., an optical sensor, a motion sensor, or other types of sensors.
The computing device 100 may operate in a networked environment using logical connections to one or more remote computing devices (not shown) via RF circuit 110 and/or transport unit 170.
An audio circuit 160 may convert received audio data into electrical signals, and send the electrical signals to the speaker 161. The speaker 161 converts the electrical signals into sound and outputs the sound. The microphone 162 may convert collected sound signals into electrical signals which are received by the audio circuit 160. The audio circuit 160 converts the electrical signals into audio data, and sends the audio data to the processor 180 for processing. The processed audio data may be sent to another terminal device via the RF circuit 110, or be output to the system memory 120 for future processing. The audio circuit 160 may also include an ear jack providing communications between a peripheral earphone and the computing device 100.
Fig. 2 is a schematic diagram illustrating modules of an apparatus of activating applications. The apparatus 200 is typically disposed in computing devices such as one shown in Fig. 1, e.g., in a mobile terminal.
In an example, the application activating apparatus 200 may include a monitoring module 202, a recorder start engine 204, an audio recording module 205, an audio processing module 206 and an application activating engine (AAE) 210.
The monitoring module 202 monitors a device state of a mobile terminal, and provides the device state to the recorder start engine 204. The recorder start engine 204 starts the audio recording module 205 when the device state obtained by the monitoring module 202 meets a pre-defined condition. The audio recording module 205 may be an entity implementing audio recording functions in the application activating apparatus 200, or a unit for audio recording in the computing device 100 (e.g., a mobile terminal), or a unit in the application activating apparatus 200 that is capable of co-working with an audio recording device in the computing device 100 to implement the audio recording functions required by examples of the present disclosure.
In an example, the apparatus 200 may also include a data storage device 208. The data storage device 208 stores a variety of preset data, e.g., device state data 220 storing various pre-defined device states serving as the basis for judging whether the audio recording module is to be started, keyword data 230 storing pre-defined keywords and relations that associate the keywords with applications, and application data 240 storing information of the applications. The preset data may be set up by a user through an input device of the mobile terminal, or be obtained from a remote device (e.g., a network entity such as a server) via a network. Information of an application may include an identity of a process, or a location of an application (e.g., an application directory), or the like.
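For illustration only, the kinds of preset data described above could be represented with simple data structures such as the following Python sketch; the class and field names (e.g., AppInfo, KeywordRelation, PresetData) are hypothetical and are not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Set

@dataclass
class AppInfo:
    """Information needed to activate an application (cf. application data 240)."""
    app_id: str                      # identity of a process or a package
    location: Optional[str] = None   # e.g., an application directory

@dataclass
class KeywordRelation:
    """One group of keywords associated with one application (cf. keyword data 230)."""
    keywords: List[str]
    app_id: str

@dataclass
class PresetData:
    """Preset data kept in a data storage device such as 208."""
    blocked_states: Set[str] = field(default_factory=lambda: {
        "screen_off", "in_call", "av_playing", "fullscreen_app"})   # device state data 220
    relations: List[KeywordRelation] = field(default_factory=list)  # keyword data 230
    apps: Dict[str, AppInfo] = field(default_factory=dict)          # application data 240
```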
In an example, the recorder start engine 204 may include a state obtaining module 214, a state judging module 216 and a recorder starting module 218. The state obtaining module 214 may read pre-defined states from the device state data 220 in the data storage device 208. The state judging module 216 judges whether the device state obtained by the state monitoring module 202 meets a pre-defined condition. By way of example, and not limitation, the state judging module 216 may judge whether the device state obtained by the state monitoring module 202 is one of at least one pre-defined state obtained by the state obtaining module 214. The at least one pre-defined state may include screen switched off, engaging in a call session, audio/video playing, running a full screen application, and the like. In response to a determination that the device state is not one of the at least one pre-defined state, the state judging module 216 determines that the device state meets the pre-defined condition, and sends an indication to the recorder starting module 218. The recorder starting module 218 activates the audio recording module 205 to start recording in response to the indication from the state judging module 216. The recorder starting module 218 may run the audio recording module 205 in the background, i.e., without displaying the user interface of the audio recording module on a display panel of the mobile terminal. Alternatively, the recorder starting module 218 may run the audio recording module 205 in the foreground to have the user interface of the audio recording module displayed on the display panel of the mobile terminal together with prompt information. For example, the prompt information may be text information such as “Please say the name of an application”, or the like.
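As an illustrative sketch only, and not the patented implementation, the judgment performed by the recorder start engine might be expressed as below; the state names and the foreground flag are hypothetical stand-ins for platform-specific calls.

```python
BLOCKED_STATES = {"screen_off", "in_call", "av_playing", "fullscreen_app"}

def meets_predefined_condition(device_state: str, blocked=BLOCKED_STATES) -> bool:
    """The condition is met when the state is NOT one of the pre-defined states."""
    return device_state not in blocked

def start_recorder_if_needed(device_state: str, foreground: bool = False) -> bool:
    """Decide whether to start the audio recording module, optionally with a prompt."""
    if not meets_predefined_condition(device_state):
        return False
    if foreground:
        # A foreground recorder would also show its user interface with this prompt.
        print("Please say the name of an application")
    # The actual platform-specific recorder start call is omitted here.
    return True

# The screen has just been switched on, so recording starts; during a call it does not.
assert start_recorder_if_needed("screen_on") is True
assert start_recorder_if_needed("in_call") is False
```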
The audio processing module 206 applies speech recognition to the audio obtained by the audio recording module 205, and generates a result of the speech recognition. In an example, the result of the speech recognition may be text information. The audio processing module 206 may use any available speech recognition techniques and algorithms, or dedicated speech recognition algorithms, which are not limited herein.
The AAE 210 judges whether the result of the speech recognition matches with a pre-defined keyword, and activates an application associated with the pre-defined keyword in response to a determination that the result of the speech recognition matches with the pre-defined keyword.
In an example, the AAE 210 may include a data retrieving module 222, a keyword matching module 224 and an application activating module 226. The data retrieving module 222 acquires at least one pre-defined keyword from the keyword data 230 in the data storage device 208. The keyword matching module 224 judges whether the text information obtained by the audio processing module 206 matches with the at least one keyword obtained by the data retrieving module 222. In an example, the data retrieving module 222 may retrieve multiple groups of keywords, and each group may include one or multiple keywords. The keyword matching module 224 may search the multiple groups of keywords for a group of keywords that matches with the text information obtained by the audio processing module 206, e.g., by judging whether the text information matches with each of the at least one keyword, and generates a result of the matching process. The keyword matching module 224 may adopt an exact comparison manner or an approximate string matching manner to find the keyword that matches with the result of the speech recognition. By way of example, and not limitation, the keyword matching module 224 may calculate a similarity between the text information generated from the speech recognition and each group of pre-defined keywords, and identify the group of keywords that matches with the speech recognition result based on the calculated similarities. The keyword matching module 224 instructs the data retrieving module 222 to acquire information of an application associated with at least one keyword after determining that the at least one keyword matches with the text information obtained from the speech recognition. The data retrieving module 222 acquires information of the application associated with the at least one keyword from the keyword data 230, and then acquires information required for activating the application from the application data 240. After obtaining the information required for activating the application, the data retrieving module 222 provides the information to the application activating module 226. The application activating module 226 activates the application according to the information. The term “activating” may refer to different actions depending on the state of the application, e.g., it may refer to starting the application when the application is not running or wakening the application when the application is inactive, to have the user interface of the application displayed on the display panel of the mobile terminal.
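One possible way, among others, to realize the similarity-based matching described above is to score the recognized text against every keyword in every group and pick the best-scoring group above a threshold. The sketch below uses Python's standard difflib module; the group names, keywords and the 0.75 threshold are hypothetical choices, not values taken from the patent.

```python
import difflib
from typing import Optional

KEYWORD_GROUPS = {
    "dialer": ["phone call", "call", "make a phone call"],
    "sms":    ["write a text message", "send a text message"],
    "player": ["play music", "music", "listen to music"],
}

def match_keyword_group(recognized: str, groups=KEYWORD_GROUPS,
                        threshold: float = 0.75) -> Optional[str]:
    """Return the app id of the best-matching keyword group, or None if no group matches."""
    text = recognized.strip().lower()
    best_app, best_score = None, 0.0
    for app_id, keywords in groups.items():
        for kw in keywords:
            # Exact comparison first, then approximate string matching.
            score = 1.0 if text == kw else difflib.SequenceMatcher(None, text, kw).ratio()
            if score > best_score:
                best_app, best_score = app_id, score
    return best_app if best_score >= threshold else None

print(match_keyword_group("phone call"))   # -> dialer (exact match)
print(match_keyword_group("phon call"))    # -> dialer (approximate match)
print(match_keyword_group("weather"))      # -> None (below threshold)
```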
In an example, the apparatus 200 may also include a recording ending module (not shown in Fig. 2) configured for making the audio recording module stop recording when a device state monitored by the monitoring module does not meet the pre-defined condition.
In an example, the apparatus 200 may also include an interaction module (not shown in Fig. 2) configured for receiving a keyword inputted by a user and configuration information of an application associated with the keyword.
A control logic (not shown) may direct operation of the various components listed above and associations between the components.
The apparatus 200 can be implemented in software, firmware, hardware, or a combination of software, firmware, hardware, etc.
In another example, the application activating apparatus may include a processor and a memory. The memory stores machine-readable instructions which are capable of being executed by the processor to implement the methods described in the following.
Fig. 3 is a flowchart illustrating a method of activating applications in accordance with an example of the present disclosure. The method may include the following procedures.
At block S31, the device state of a mobile terminal is monitored.
At block S32, audio recording is started when the device state meets a pre-defined condition.
In an example, the pre-defined condition may include: the device state is not any one of at least one pre-defined state. The at least one state may include one or multiple of: screen switched off, engaging in a call, audio/video playing, running a full screen application, and the like.
At block S33, speech recognition is applied to audio recorded.
At block S34, it is judged whether a result of the speech recognition matches with at least one pre-defined keyword, and an application associated with the at least one keyword is activated when the result of the speech recognition matches with the at least one keyword. In an example, there may be at least one group of pre-defined keywords, and each group may include one or multiple keywords. This procedure may include: searching the at least one group of pre-defined keywords for a group of at least one keyword that matches with the result of the speech recognition. By way of example, and not limitation, a group of keywords may be: “phone call”, or “write a text message”, or the like.
In an example, the method may also include: stopping recording audio when the monitored device state does not meet the pre-defined condition.
In an example, the method may also include: receiving and storing configuration information of a keyword and an application associated with the keyword before monitoring the device state of the mobile terminal.
By way of example, and not limitation, when a user needs to make a call but it is not convenient to do so, e.g., the user is driving a car, the user may simply switch on the screen of a mobile phone and then speak a keyword(s) which may either be pre-defined by an application or be set by the user, e.g., “phone call”. When an application activating apparatus in the mobile phone detects that the device state of the mobile phone is screen switched on, which is not one of the pre-defined states, the apparatus may activate an audio recording module to record audio. The apparatus then processes the recorded audio through speech recognition, and obtains a result of “phone call” or “call” (the result may vary depending on the noise level surrounding the user, the pronunciation or accent of the user, the speech recognition algorithm adopted, and the like). The result may be found to match with a pre-defined keyword “phone call”, and an application, e.g., a call application, a phone book application or the like, that is associated with the keyword is activated based on stored information of the application. The stored information of the application may be a process ID, or a location of the application, or the like.
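To make the variability of the recognition result concrete, the small self-contained check below (hypothetical, using Python's difflib) shows how both a full and a clipped result compare against the keyword “phone call”.

```python
import difflib

keyword = "phone call"
for result in ("phone call", "call"):
    score = difflib.SequenceMatcher(None, result, keyword).ratio()
    print(f"{result!r} vs {keyword!r}: similarity {score:.2f}")
# 'phone call' scores 1.00; 'call' scores about 0.57, so a matcher that also
# accepts results contained in a keyword (as in Fig. 4) would still match it.
```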
Fig. 4 is a flowchart illustrating a method of activating applications in accordance with an example of the present disclosure. The method may include the following procedures.
At block S41, the device state of a mobile terminal is monitored.
The real-time device state of the mobile terminal may be monitored through monitoring a system process running in the mobile terminal. By way of example, and not limitation, it may be monitored whether the screen of the mobile terminal is switched on or switched off, whether the mobile terminal is engaging in a call session, whether the mobile terminal is running specific applications, e.g., audio/video playing tools, game applications, a full screen application, and the like.
At block S42, an audio recording module is activated when the device state is consistent with a pre-defined state.
In an example, the pre-defined state may include any one or any combination of: the screen is switched on, not in a call session, not playing audio/video, and not running a full screen application. The pre-defined state may be configured as needed. By configuring the pre-defined state and performing this judgment before running the audio recording module, the audio recording module is started only when the mobile terminal is being used, and unnecessary power consumption resulting from continuous running of the audio recording module can be avoided. In addition, by configuring the pre-defined state, the process of activating applications will not affect normal operation of the mobile terminal and will not interfere with the normal running of other applications. The audio recording module may be a microphone module in the mobile terminal.
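A sketch of how the “any or any combination” configuration might be expressed is given below; it treats the pre-defined state as a set of user-selected sub-conditions that must all hold before the recorder is started. The flag names are hypothetical and the code is illustrative only.

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    screen_on: bool
    in_call: bool
    playing_av: bool
    fullscreen_app: bool

# User-selectable sub-conditions; any combination of them may be required.
REQUIRED = ("screen_on", "not_in_call", "not_playing_av", "not_fullscreen")

CHECKS = {
    "screen_on":      lambda s: s.screen_on,
    "not_in_call":    lambda s: not s.in_call,
    "not_playing_av": lambda s: not s.playing_av,
    "not_fullscreen": lambda s: not s.fullscreen_app,
}

def meets_predefined_state(state: DeviceState, required=REQUIRED) -> bool:
    """All of the configured sub-conditions must hold for the recorder to start."""
    return all(CHECKS[name](state) for name in required)

print(meets_predefined_state(DeviceState(True, False, False, False)))  # True -> start recorder
print(meets_predefined_state(DeviceState(True, True, False, False)))   # False -> keep it off
```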
At block S43, it is judged whether the audio obtained by the audio recording module matches with at least one pre-defined keyword.
There may be one or multiple keywords, e.g., “phone call”, “listen to music”, “send a text message”, and the like. In an example, while running, the audio recording module may keep checking input audio. When voice is detected in the recorded audio, the audio data obtained is processed through speech recognition and then compared with the pre-defined keyword(s). If it is determined that the speech recognition result is consistent with the pre-defined keyword(s) or is contained in the pre-defined keyword(s), a determination that the recorded audio matches with the pre-defined keyword(s) is made.
At block S44, an application associated with the keyword(s) is started when the recorded audio matches with the pre-defined keyword(s).
The started application is associated with the keyword, e.g., if the pre-defined keyword(s) that matches with the result is “phone call”, the application may be a phone book application; if the pre-defined keyword(s) that matches with the result is “listen to music”, the application may be an audio playing application.
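The “consistent with or contained in” test and the keyword-to-application association described above could be sketched as follows; the application identifiers are hypothetical and the code is for illustration only.

```python
APP_FOR_KEYWORD = {
    "phone call": "phone_book_app",
    "listen to music": "audio_player_app",
}

def matches(result: str, keyword: str) -> bool:
    """The recognition result is consistent with, or contained in, the keyword."""
    r, k = result.strip().lower(), keyword.lower()
    return r == k or r in k

def app_to_start(result: str, table=APP_FOR_KEYWORD):
    for keyword, app in table.items():
        if matches(result, keyword):
            return app
    return None

print(app_to_start("phone call"))     # phone_book_app (consistent with the keyword)
print(app_to_start("music"))          # audio_player_app ("music" is contained in the keyword)
print(app_to_start("open calendar"))  # None (no matching keyword)
```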
Depending on the current state of the application, e.g., stopped, inactive or the like, the procedure of starting or activating the application may refer to running the application, waking the application, or the like. These actions are not strictly distinguished in this disclosure. Activating, starting, running or waking an application all refer to making the application run in the foreground with its user interface displayed on a display panel of the mobile terminal.
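For illustration, bringing the associated application to the foreground on an Android-based terminal might be done with a launch intent, as sketched below; the disclosure does not prescribe this particular mechanism, and the package name would be whatever locator is stored for the keyword.

```java
import android.content.Context;
import android.content.Intent;

/** Launches or resumes the application identified by a stored package name. */
public final class AppActivator {
    private AppActivator() {}

    /**
     * Brings the application's user interface to the foreground. If a task for the
     * application already exists, the launch intent resumes it (the "waking" case);
     * otherwise the application is started.
     */
    public static boolean bringToForeground(Context context, String packageName) {
        Intent launch = context.getPackageManager().getLaunchIntentForPackage(packageName);
        if (launch == null) {
            return false;   // application not installed on this terminal
        }
        launch.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK
                | Intent.FLAG_ACTIVITY_RESET_TASK_IF_NEEDED);
        context.startActivity(launch);
        return true;
    }
}
```

The same call therefore covers starting a stopped application and waking an inactive one, which is why the actions need not be strictly distinguished.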
The method of various examples can activate any type of application through speech recognition, which is easy to use and can avoid mistakenly activating an application. According to the mechanism, the recording module is only started when the mobile terminal is in a specific state, which avoids unnecessary power consumption and CPU occupation resulting from continuous running of the recording module and avoids disturbing normal operation of the mobile terminal.
In an example, by properly configuring the pre-defined state, the mechanism can activate an application even when the screen of the mobile terminal is still locked, thus facilitating use of the mobile terminal under some special circumstances, e.g., when a user is driving a car, or the like.
Fig. 5 is a flowchart illustrating a method of activating applications in accordance with an example of the present disclosure. The method may include the following procedures.
At block S51, an interaction user interface is provided, and configuration information of an application that can be activated by a keyword, together with the keyword associated with the application, is received from a user via an input device.
The keyword and the corresponding application may be configured by a user as needed. The user may configure multiple relations which define multiple groups of keywords and applications. Each relation may define an application and one or multiple keywords associated with the application. For example, the keywords configured to be associated with an audio playing application may be “play music” or “music”, and the keyword configured to be associated with a phone book application may be “making a phone call”. If voice input of “play music” or “music” is detected, the audio playing application is activated; if voice input of “making a phone call” is detected, the phone book application is activated.
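One way such user-configured relations could be persisted on an Android-based terminal is sketched below using SharedPreferences; the preference file name and the key layout (one entry per application) are assumptions of this example, not requirements of the disclosure.

```java
import android.content.Context;
import android.content.SharedPreferences;
import java.util.HashSet;
import java.util.Set;

/** Persists user-configured relations between an application and its keywords. */
public class KeywordConfigStore {
    private static final String PREFS = "keyword_relations";   // assumed file name

    /** Stores the keyword group configured for one application. */
    public void saveRelation(Context context, String packageName, Set<String> keywords) {
        SharedPreferences prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
        prefs.edit().putStringSet(packageName, new HashSet<>(keywords)).apply();
    }

    /** Loads the keyword group configured for one application (empty if none). */
    public Set<String> loadKeywords(Context context, String packageName) {
        SharedPreferences prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
        return prefs.getStringSet(packageName, new HashSet<String>());
    }
}
```

The configuration received at block S51 would be written through saveRelation and read back when building the keyword table used for matching.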
At block S52, the real-time device state of a mobile terminal is monitored.
At block S53, it is judged whether the monitored device state is consistent with a pre-defined state. If the device state is consistent with the pre-defined state, the procedure in block S54 is performed; if the device state is inconsistent with the pre-defined state, the procedure in block S52 is performed.
In an example, the pre-defined state may include: any or any combination of: the screen is switched on, not in a call session, not playing audio/video, not running a full screen application.
At block S54, an audio recording module is started. The audio recording module may be a microphone module in the mobile terminal.
At block S55, real time voice input is detected via the audio recording module.
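As a sketch only, real-time voice input and recognition at blocks S55 and S56 could be obtained with the platform SpeechRecognizer as shown below; the RECORD_AUDIO permission would be required, the matcher from the earlier sketch is reused, and the disclosure is not limited to this recognition engine.

```java
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import java.util.ArrayList;

/** Detects voice input, obtains a recognition result and checks it against keywords. */
public class VoiceKeywordListener {
    private final SpeechRecognizer recognizer;

    public VoiceKeywordListener(Context context, final KeywordMatcher matcher) {
        recognizer = SpeechRecognizer.createSpeechRecognizer(context);
        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onResults(Bundle results) {
                ArrayList<String> texts =
                        results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                if (texts != null && !texts.isEmpty()) {
                    String matched = matcher.match(texts.get(0));
                    // A non-null matched keyword is handed to the application
                    // activating step (block S57); otherwise listening continues.
                }
            }
            // Remaining callbacks are left empty for brevity.
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onError(int error) {}
            @Override public void onPartialResults(Bundle partialResults) {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });
    }

    public void startListening() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.startListening(intent);
    }

    public void stopListening() {
        recognizer.stopListening();   // corresponds to switching off recording at block S58
    }
}
```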
At block S56, it is judged whether the audio obtained by the audio recording module matches with a pre-defined keyword. The procedure in block S57 is performed if the audio matches with the pre-defined keyword. The procedure in block S55 is performed if the audio does not match with the pre-defined keyword.
At block S57, an application associated with the keyword is started. The started application corresponds to the keyword.
At block S58, the audio recording module is switched off when the device state is no longer consistent with the pre-defined state.
The mechanism allows a user to configure the keyword(s) and corresponding application(s), thus increasing the flexibility of usage of the mobile terminal. When the device state of the mobile terminal is no longer consistent with the pre-defined state, the audio recording module is switched off to avoid unnecessary power consumption and CPU resource occupation by the audio recording module.
Fig. 6 is a schematic diagram illustrating modules of an apparatus of activating applications. The apparatus may include a monitoring module 61, a recording start module 62, a matching module 63 and an application activating module 64.
The monitoring module 61 is configured for monitoring a device state of a mobile terminal. The monitoring module 61 may monitor the real-time device state by monitoring a system process in the mobile terminal.
The recording start module 62 is configured for starting an audio recording module when the device state obtained by the monitoring module 61 is consistent with a pre-defined state. The audio recording module may be a microphone module in the mobile terminal. In an example, the pre-defined state may include any one or any combination of: the screen is switched on, not in a call session, not playing audio/video, not running a full screen application. The pre-defined state may be configured as needed. By configuring the pre-defined state and performing the judgment before running the audio recording module, the audio recording module is only started when the mobile terminal is being used, and unnecessary power consumption resulting from long-time running of the audio recording module can be avoided. In addition, by configuring the pre-defined state, the process of activating applications will not affect normal operation of the mobile terminal and will not interfere with normal running of other applications.
The matching module 63 is configured for judging whether audio recorded by the audio recording module matches with at least one pre-defined keyword. A keyword may consist of one or multiple words. In an example, while the audio recording module is running, it may keep checking input audio. When voice is detected in the recorded audio, the matching module 63 may process the recorded audio data through speech recognition and then compare the result with the pre-defined keyword(s). If it is determined that the speech recognition result is consistent with a pre-defined keyword or is contained in a pre-defined keyword, a determination is made that the recorded audio matches with the pre-defined keyword(s).
The application activating module 64 is configured for activating an application associated with the keyword (s) when the recorded audio matches with the pre-defined keyword (s) .
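Purely for illustration, the four modules of Fig. 6 could be expressed as the following Java interfaces, showing how each module's output feeds the next; the method names and signatures are assumptions of this sketch rather than a prescribed API.

```java
/** Skeleton of the modules of Fig. 6 as plain interfaces. */
public interface ActivationModules {

    /** Monitoring module 61: observes the device state of the mobile terminal. */
    interface MonitoringModule {
        boolean stateMeetsPredefinedCondition();
    }

    /** Recording start module 62: starts the audio recording module when allowed. */
    interface RecordingStartModule {
        void startRecordingIfAllowed(MonitoringModule monitor);
    }

    /** Matching module 63: applies speech recognition and keyword matching. */
    interface MatchingModule {
        /** Returns the matched keyword, or null when nothing matches. */
        String match(byte[] recordedAudio);
    }

    /** Application activating module 64: activates the application for a matched keyword. */
    interface ApplicationActivatingModule {
        void activate(String matchedKeyword);
    }
}
```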
The apparatus of various examples can activate any type of application through speech recognition, which is easy to use and can avoid mistakenly activating an application. According to the apparatus, the recording module is only started when the mobile terminal is in a specific state, which avoids unnecessary power consumption and CPU occupation resulting from continuous running of the recording module and avoids disturbing normal operation of the mobile terminal.
Fig. 7 is a schematic diagram illustrating modules of an apparatus of activating applications in accordance with an example of the present disclosure. Besides the modules shown in Fig. 6, the apparatus of Fig. 7 also includes an interaction module 75 and a recording ending module 76.
The interaction module 75 is configured for providing an interaction user interface so that configuration information of an application and at least one keyword associated with the application can be received from a user via an input device. The interaction module 75 allows the user to configure the keyword(s) and corresponding application(s) as needed. The interaction module 75 may also provide multiple groups of keywords and applications for the user to select from, and may allow the user to select multiple keywords for one application.
The recording ending module 76 is configured for switching off the audio recording module when, after the audio recording module has been started, the device state monitored by the monitoring module 61 becomes inconsistent with the pre-defined state.
The apparatus allows a user to configure the keyword(s) and corresponding application(s), thus increasing the flexibility of usage of the mobile terminal. When the device state of the mobile terminal is no longer consistent with the pre-defined state, the audio recording module is switched off to avoid unnecessary power consumption and CPU resource occupation by the audio recording module.
It should be understood that in the above processes and structures, not all of the procedures and modules are necessary. Certain procedures or modules may be omitted as needed. The order of the procedures is not fixed and can be adjusted as needed. The modules are defined based on function simply for facilitating description. In implementation, a module may be implemented by multiple modules, and functions of multiple modules may be implemented by the same module. The modules may reside in the same device or be distributed among different devices. The terms “first” and “second” in the above descriptions are merely for distinguishing two similar objects and have no substantive meaning.
The hardware modules according to various examples may be implemented by hardware or by a hardware platform with necessary software. The software may include machine-readable instructions which are stored in a non-transitory storage medium. Thus, the examples may be embodied as software products.
In various examples, the hardware may be dedicated hardware or general-purpose hardware executing machine-readable instructions. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software), may be driven by cost and time considerations.
A machine-readable storage medium is also provided, which stores instructions executable by a machine to implement the method of various examples. Specifically, a system or apparatus may have a storage medium which stores machine-readable program codes for implementing the functions of any of the above examples. A computing device (or a CPU or an MPU) in the system or the apparatus may read and execute the program codes stored in the storage medium. Computer-readable instructions may cause an operating system in a computer to implement part or all of the above described operations. A non-transitory computer-readable storage medium may be a storage device in an extension board inserted in the computer or a storage in an extension unit connected to the computer. Instructions based on the program codes can cause a CPU or a processor installed in an extension board or an extension unit to implement part or all of the operations to implement any example of the present disclosure.
The non-transitory computer-readable storage medium for providing the program codes may include a floppy disk, hard drive, magneto-optical disk, compact disk (such as CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tape drive, flash card, ROM and so on. Optionally, the program codes may be downloaded from a server computer via a communication network.
The scope of the claims should not be limited by the embodiments set forth in the examples, but should be given the broadest interpretation consistent with the description as a whole.

Claims (15)

  1. A method of activating applications, comprising:
    monitoring a device state of a mobile terminal;
    starting recording audio when the device state meets a pre-defined condition;
    applying speech recognition to audio recorded;
    judging whether a result of the speech recognition matches with at least one pre-defined keyword; and
    activating an application associated with the at least one pre-defined keyword in response to a determination that the result of the speech recognition matches with the at least one pre-defined keyword.
  2. The method of claim 1, wherein the pre-defined condition comprises: the device state is not one of at least one pre-defined state; wherein the at least one pre-defined state comprises at least one of: screen switched off, engaging in a call, playing audio/video, running a full screen application.
  3. The method of claim 1, further comprising:
    stopping recording audio when the device state does not meet the pre-defined condition.
  4. The method of claim 1, wherein the judging whether the result of the speech recognition matches with the at least one pre-defined keyword comprises:
    searching at least one group of pre-defined keywords for a group of keywords that matches with the result of the speech recognition.
  5. The method of claim 1, further comprising:
    receiving and storing configuration information of at least one keyword and an application associated with the at least one keyword provided by a user.
  6. An apparatus of activating applications, comprising:
    a monitoring module, configured for monitoring a device state of a mobile terminal;
    a recording starting module, configured for starting an audio recording module when the device state obtained by the monitoring module meets a pre-defined condition;
    an audio processing module, configured for applying speech recognition to audio recorded by the recording module;
    a matching module, configured for judging whether a result of the speech recognition performed by the audio processing module matches with at least one pre-defined keyword; and
    an application activating module, configured for activating an application corresponding to the at least one pre-defined keyword in response to a determination that the result of the speech recognition matches with the at least one pre-defined keyword.
  7. The apparatus of claim 6, wherein the recording starting module is configured for judging whether the device state obtained by the monitoring module is one of at least one pre-defined state, wherein the at least one pre-defined state comprises at least one of: screen switched off, engaging in a call, playing audio/video, running a full screen application; and determining that the device state meets the pre-defined condition in response to a determination that the device state is not one of the at least one pre-defined state.
  8. The apparatus of claim 6, wherein
    the matching module is configured for searching at least one group of pre-defined keywords for a group of keywords that matches with a result of the speech recognition performed by the audio processing module; and
    the application activating module is configured for activating the application corresponding to a group of pre-defined keywords in response to a determination that the result of the speech recognition matches with the group of pre-defined keywords.
  9. The apparatus of claim 6, further comprising:
    a recording ending module, configured for switching off the audio recording module when the device state obtained by the monitoring module does not meet the pre-defined condition.
  10. The apparatus of claim 6, further comprising:
    an interaction module, configured for receiving and storing configuration information of at least one keyword and an application associated with the at least one keyword provided by a user.
  11. An apparatus of activating applications, comprising a processor and a memory, the memory comprising a series of computer-readable instructions which are executable by the processor to implement actions of:
    monitoring a device state of a mobile terminal;
    starting recording audio when the device state meets a pre-defined condition;
    applying speech recognition to audio recorded;
    judging whether a result of the speech recognition matches with at least one pre-defined keyword; and
    activating an application associated with the at least one pre-defined keyword in response to a determination that the result of the speech recognition matches with the at least one pre-defined keyword.
  12. The apparatus of claim 11, wherein the pre-defined condition comprises: the device state is not one of at least one pre-defined state; wherein the at least one pre-defined state comprises at least one of: screen switched off, engaging in a call, playing audio/video, running a full screen application.
  13. The apparatus of claim 11, wherein the computer-readable instructions are further executable by the processor to implement actions of:
    stopping recording audio when the device state does not meet the pre-defined condition.
  14. The apparatus of claim 11, wherein the computer-readable instructions are executable by the processor to implement actions of:
    searching at least one group of pre-defined keywords for a group of keywords that matches with the result of the speech recognition.
  15. The apparatus of claim 11, wherein the computer-readable instructions are further executable by the processor to implement actions of:
    receiving and storing configuration information of at least one keyword and an application associated with the at least one keyword provided by a user.
PCT/CN2014/091583 2013-11-21 2014-11-19 Method and apparatus of activating applications WO2015074553A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310590949.7 2013-11-21
CN201310590949.7A CN104660792A (en) 2013-11-21 2013-11-21 Method and device for awakening applications

Publications (1)

Publication Number Publication Date
WO2015074553A1 true WO2015074553A1 (en) 2015-05-28

Family

ID=53178952

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/091583 WO2015074553A1 (en) 2013-11-21 2014-11-19 Method and apparatus of activating applications

Country Status (3)

Country Link
CN (1) CN104660792A (en)
TW (1) TW201520896A (en)
WO (1) WO2015074553A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106254612A (en) * 2015-06-15 2016-12-21 中兴通讯股份有限公司 A kind of sound control method and device
CN105491126A (en) * 2015-12-07 2016-04-13 百度在线网络技术(北京)有限公司 Service providing method and service providing device based on artificial intelligence
CN105812573A (en) * 2016-04-28 2016-07-27 努比亚技术有限公司 Voice processing method and mobile terminal
CN106598536A (en) * 2016-10-31 2017-04-26 深圳众思科技有限公司 Record startup method and apparatus for electronic device, and electronic device
EP3561698B1 (en) * 2017-01-22 2023-10-25 Huawei Technologies Co., Ltd. Method and device for intelligently processing application event
CN107517313A (en) * 2017-08-22 2017-12-26 珠海市魅族科技有限公司 Awakening method and device, terminal and readable storage medium storing program for executing
CN107919124B (en) * 2017-12-22 2021-07-13 北京小米移动软件有限公司 Equipment awakening method and device
CN110503962A (en) * 2019-08-12 2019-11-26 惠州市音贝科技有限公司 Speech recognition and setting method, device, computer equipment and storage medium
CN111524528B (en) * 2020-05-28 2022-10-21 Oppo广东移动通信有限公司 Voice awakening method and device for preventing recording detection
CN115881118B (en) * 2022-11-04 2023-12-22 荣耀终端有限公司 Voice interaction method and related electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011151502A1 (en) * 2010-06-02 2011-12-08 Nokia Corporation Enhanced context awareness for speech recognition
CN102868827A (en) * 2012-09-15 2013-01-09 潘天华 Method of using voice commands to control start of mobile phone applications
CN102929390A (en) * 2012-10-16 2013-02-13 广东欧珀移动通信有限公司 Method and device for starting application program in stand-by state

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7957972B2 (en) * 2006-09-05 2011-06-07 Fortemedia, Inc. Voice recognition system and method thereof
CN101599270A (en) * 2008-06-02 2009-12-09 海尔集团公司 Voice server and voice control method
CN101715018A (en) * 2009-11-03 2010-05-26 沈阳晨讯希姆通科技有限公司 Voice control method of functions of mobile phone
CN102104655A (en) * 2009-12-21 2011-06-22 康佳集团股份有限公司 Method and system for changing mobile phone standby wallpaper through voice control
CN102541574A (en) * 2010-12-13 2012-07-04 鸿富锦精密工业(深圳)有限公司 Application program opening system and method
CN102510426A (en) * 2011-11-29 2012-06-20 安徽科大讯飞信息科技股份有限公司 Personal assistant application access method and system
CN102568479B (en) * 2012-02-08 2014-05-28 广东步步高电子工业有限公司 Voice unlocking method and system of mobile handheld device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3261324A1 (en) * 2016-06-23 2017-12-27 Beijing Xiaomi Mobile Software Co., Ltd. Method and device for application switching
US10764418B2 (en) 2016-06-23 2020-09-01 Beijing Xiaomi Mobile Software Co., Ltd. Method, device and medium for application switching
CN111897601A (en) * 2020-08-03 2020-11-06 Oppo广东移动通信有限公司 Application starting method and device, terminal equipment and storage medium
CN111897601B (en) * 2020-08-03 2023-11-24 Oppo广东移动通信有限公司 Application starting method, device, terminal equipment and storage medium

Also Published As

Publication number Publication date
CN104660792A (en) 2015-05-27
TW201520896A (en) 2015-06-01

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 14863852
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established
    Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 28/07/2016)
122 Ep: pct application non-entry in european phase
    Ref document number: 14863852
    Country of ref document: EP
    Kind code of ref document: A1