CN103677261A - Context aware service provision method and apparatus of user equipment - Google Patents
Context aware service provision method and apparatus of user equipment
- Publication number
- CN103677261A (application number CN201310432058.9A)
- Authority
- CN
- China
- Prior art keywords
- user
- rule
- condition
- input
- action
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/74—Details of telephonic subscriber devices with voice recognition means
Abstract
A context aware service provision method and apparatus for recognizing the user context and executing an action corresponding to the user context according to a rule defined by the user and feeding back the execution result to the user interactively are provided. The method for providing a context-aware service includes receiving a user input, the user input being at least one of a text input and a speech input, identifying a rule including a condition and an action corresponding to the condition based on the received user input, activating the rule to detect a context which corresponds to the condition of the rule, and executing, when the context is detected, the action corresponding to the condition.
Description
Technical field
The present invention relates to a Context-Aware Service (CAS) provision method and apparatus. More particularly, the present invention relates to a CAS provision method and apparatus for recognizing a user's context, executing an action corresponding to that context according to a user-defined rule, and interactively feeding the execution result back to the user.
Background technology
With the progress of digital technology, various types of user devices capable of communication and data processing (for example, cellular communication terminals, Personal Digital Assistants (PDAs), electronic notebooks, smartphones, and tablet Personal Computers (PCs)) have appeared. Recently, in line with the mobile-convergence trend, user devices have gradually evolved into multi-function devices integrating various functions. For example, the latest user devices integrate voice and video telephony functions, message transmission functions (including Short Message Service/Multimedia Message Service (SMS/MMS) and e-mail), a navigation function, a picture capture function, a broadcast playback function, multimedia (e.g., audio and video) playback functions, an Internet access function, an instant messaging function, a Social Networking Service (SNS) function, and the like.
Meanwhile, there is growing interest in Context-Aware Services (CAS) that make use of lifelog technologies, which record an individual's daily life in various forms of digital information. CAS is characterized in that decisions on whether to provide a service, and on the content of the service to be provided, are made according to changes in the context defined by a service object. The term "context" refers to the information used in determining the service behavior defined by the service object, and includes the service provision timing, whether the service is provided, the service target, the service provision location, and the like. Such technologies can record various types of information describing personal behavior and provide CAS based on the recorded information.
However, CAS methods according to the related art are implemented on the assumption of a cumbersome installation of various sensor devices for collecting personal information in the field. A CAS system according to the related art is composed of a user device and a server, wherein the user device collects data by means of sensors, and the server analyzes the data obtained from the user device to establish a context and execute a service based on the context. Because the user device must be equipped with various sensors and must interoperate with the server in order to provide the context-based service to the user, implementing a CAS system according to the related art faces obstacles of high system implementation cost and design complexity.
A CAS system according to the related art also has shortcomings in that, because it is limited to the information collected via the user device and lacks an effective learning process, it is difficult to provide the context-based service effectively. For example, a CAS system according to the related art can provide the context-based service to the user only with rules defined by the device manufacturer, and thus does not meet all users' demands. Because the user needs to run extra programs and/or perform complicated operations to use the context-based service, the CAS system according to the related art also suffers from low user accessibility. In addition, the CAS system according to the related art is confined to a single context-awareness scheme, and therefore lacks flexibility in setting conditions for various contexts.
Therefore, there is a need for a CAS method and apparatus capable of supporting CAS with one or more user-defined rules.
The above information is presented as background information only to assist with an understanding of the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present invention.
Summary of the invention
Aspects of the present invention are to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the present invention is to provide a CAS method and apparatus capable of supporting a Context-Aware Service (CAS) with one or more user-defined rules.
Another aspect of the present invention is to provide a CAS method and apparatus in which a terminal recognizes the user's context as determined according to a user-defined rule, executes an action corresponding to that context, and feeds the context information collected on the basis of the one or more rules back to the user.
Another aspect of the present invention is to provide a CAS method and apparatus that allow a user to define, through natural-language-based text and/or speech input to the user device, a rule (or context), a command for executing the rule, and an action to be executed according to the rule.
Another aspect of the present invention is to provide a CAS method and apparatus capable of extending CAS support in such a way that rules, commands, and actions are defined on the user device using natural-language-based text or speech, the natural-language-based text or speech is recognized, and a selected rule is executed according to a motion of the user device.
Another aspect of the present invention is to provide a CAS method and apparatus capable of configuring a rule with a plurality of conditions, recognizing a plurality of contexts corresponding to the respective conditions, and executing a plurality of actions corresponding to the respective contexts.
Another aspect of the present invention is to provide a CAS method and apparatus capable of configuring one or more conditions according to the user's preference when defining a rule.
Another aspect of the present invention is to provide a CAS method and apparatus capable of improving user convenience and device usability by realizing an optimized CAS environment.
In accordance with an aspect of the present invention, a method for providing a context-aware service of a user device is provided. The method includes receiving a user input, the user input being at least one of a text input and a speech input; identifying, based on the received user input, a rule including a condition and an action corresponding to the condition; activating the rule to detect a context corresponding to the condition of the rule; and executing, when the context is detected, the action corresponding to the condition.
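The flow described above (receive an input, identify a rule, activate it, detect the context, execute the action) can be sketched minimally as follows. The rule names, context keys, and keyword-matching strategy are illustrative assumptions, not details from the patent.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class Rule:
    name: str                          # natural-language label (assumed)
    condition: Callable[[Dict], bool]  # predicate over the current context
    action: Callable[[Dict], str]      # executed when the condition holds
    active: bool = False

def identify_rule(rules, user_input: str) -> Optional[Rule]:
    """Identify a rule from text (or recognized speech) input, here by a
    simple keyword match, which is only one possible matching strategy."""
    text = user_input.lower()
    return next((r for r in rules if r.name in text), None)

def on_context(rule: Rule, context: Dict) -> Optional[str]:
    """Execute the rule's action if it is active and its condition is met."""
    if rule.active and rule.condition(context):
        return rule.action(context)
    return None

rules = [Rule("driving",
              condition=lambda c: c.get("speed_kmh", 0) > 20,
              action=lambda c: "switch to hands-free mode")]

rule = identify_rule(rules, "start the driving rule")  # user's text/speech input
rule.active = True                                     # activate the rule
result = on_context(rule, {"speed_kmh": 60})           # context detected
print(result)                                          # switch to hands-free mode
```

An inactive rule, or a context that does not satisfy the condition, yields no action, matching the claim that execution occurs only when the detected context corresponds to the activated rule's condition.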
In accordance with another aspect of the present invention, a method for providing a context-aware service of a user device is provided. The method includes providing a user interface for configuring a rule; receiving, through the user interface, at least one of a natural-language-based speech input and a natural-language-based text input; configuring the rule with a condition and an action identified from the user input; activating the rule to detect an event corresponding to the condition of the rule; and executing, when the event is detected, the action corresponding to the condition.
In accordance with another aspect of the present invention, a method for providing a context-aware service of a user device is provided. The method includes receiving a user input for configuring a rule using natural-language-based speech or text; configuring the rule according to the user input; receiving a command for activating the rule, the command being one of natural-language-based speech, natural-language-based text, a motion detection event of the user device, reception of an incoming sound, and reception of an incoming message; executing the rule corresponding to the command; checking for the occurrence, internal or external, of at least one condition specified in the rule; and executing, when the at least one condition specified in the rule is satisfied, at least one action corresponding to the at least one satisfied condition.
In accordance with another aspect of the present invention, a method for providing a context-aware service of a user device is provided. The method includes defining a rule; receiving a command input for executing the rule; executing the rule in response to the command; checking a condition corresponding to the rule; and executing at least one action when the condition corresponding to the rule is detected.
In accordance with another aspect of the present invention, a context-aware service provision method of a user device is provided. The method includes monitoring, in a state in which a rule is running, whether an event occurs; extracting, when the event is detected, a function designated for executing an action; executing the action according to the function; feeding back information related to the execution of the action; determining, when the event is not detected, whether the current situation satisfies a release condition of the rule; and releasing the rule when the current situation satisfies the release condition of the rule.
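The monitoring loop above can be sketched as follows, under assumed names: each context snapshot is checked against the rule's event; on a hit, the designated function runs and its result is fed back; otherwise the release (termination) condition is checked and, if satisfied, the rule is removed.

```python
def run_rule(rule, context_stream, feedback):
    """Process context snapshots for one running rule.

    `rule` is a dict with `event`, `action`, and `release` callables;
    this shape is an assumption chosen for illustration.
    """
    for ctx in context_stream:
        if rule["event"](ctx):            # did the specified event occur?
            result = rule["action"](ctx)  # execute the designated function
            feedback(result)              # feed the execution result back
        elif rule["release"](ctx):        # release condition satisfied?
            return "released"             # remove (deactivate) the rule
    return "running"

log = []
home_rule = {
    "event":   lambda c: c.get("place") == "home",
    "action":  lambda c: "lamp on",
    "release": lambda c: c.get("battery", 100) < 5,
}
state = run_rule(home_rule,
                 [{"place": "office"}, {"place": "home"}, {"battery": 3}],
                 log.append)
print(state, log)  # released ['lamp on']
```

Note the ordering: the event check takes precedence over the release check on each snapshot, mirroring the method's "when the event is not detected, determine whether the release condition is satisfied".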
In accordance with another aspect of the present invention, a non-transitory computer-readable storage medium stores a program which, when executed by a processor, performs the above method.
In accordance with another aspect of the present invention, a user device is provided. The user device includes a storage unit storing a rule including a condition and an action corresponding to the condition; a display unit displaying a user interface for receiving a user input in a state in which the rule is activated, as well as execution information and an execution result of the action; and a control unit controlling identification, based on the user input, of the rule including the condition and the action, controlling activation of the rule to detect a context corresponding to the condition of the rule, and executing, when the context is detected, the action corresponding to the condition, wherein the user input is at least one of a text input and a speech input.
In accordance with another aspect of the present invention, a user device is provided. The user device includes a rule configuration module, implemented by a computer, for receiving a user input and identifying, based on the user input, a rule including a condition and an action corresponding to the condition, the user input being at least one of a natural-language-based speech input and a natural-language-based text input; a rule execution module, implemented by a computer, for receiving a command for activating the rule and executing the rule corresponding to the command, wherein the command is one of natural-language-based speech, natural-language-based text, a motion detection event of the user device, reception of an incoming sound, and reception of an incoming message; a condition checking module, implemented by a computer, for detecting a context corresponding to the condition specified in the rule; and an action execution module, implemented by a computer, for executing, when the context is detected, the action corresponding to the condition.
In accordance with another aspect of the present invention, a non-transitory computer-readable storage medium is provided. The storage medium includes a program which, when executed, causes at least one processor to perform a method including defining a rule for a context-aware service according to a user input; executing, when a command for executing the rule is received, the rule corresponding to the command; and executing, when a condition specified in the rule is satisfied, an action corresponding to the condition.
In accordance with another aspect of the present invention, a non-transitory computer-readable storage medium is provided. The storage medium includes a program which, when executed, causes at least one processor to perform a method including receiving a user input, the user input being at least one of a text input and a speech input; identifying, based on the received user input, a rule including a condition and an action corresponding to the condition; activating the rule to detect a context corresponding to the condition of the rule; and executing, when the context is detected, the action corresponding to the condition.
Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses exemplary embodiments of the invention.
Accompanying drawing explanation
The above and other aspects, features, and advantages of certain exemplary embodiments of the present invention will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a block diagram illustrating a configuration of a user device according to an exemplary embodiment of the present invention;
Fig. 2 is a flowchart illustrating a Context-Aware Service (CAS) provision method of a user device according to an exemplary embodiment of the present invention;
Figs. 3A to 3K are diagrams of exemplary screens for explaining an operation of generating a rule in a user device according to an exemplary embodiment of the present invention;
Figs. 4A to 4J are diagrams of exemplary screens for explaining an operation of generating a rule in a user device according to an exemplary embodiment of the present invention;
Figs. 5A to 5E are diagrams of exemplary screens for explaining an operation of executing a predefined rule in a user device according to an exemplary embodiment of the present invention;
Fig. 6 is a diagram illustrating an exemplary scenario in which a user device provides CAS according to an exemplary embodiment of the present invention;
Fig. 7 is a diagram illustrating an exemplary scenario in which a user device provides CAS according to an exemplary embodiment of the present invention;
Fig. 8 is a diagram illustrating an exemplary scenario in which a user device provides CAS according to an exemplary embodiment of the present invention;
Fig. 9 is a diagram illustrating an exemplary scenario in which a user device provides CAS according to an exemplary embodiment of the present invention;
Fig. 10 is a diagram illustrating an exemplary scenario in which a user device provides CAS according to an exemplary embodiment of the present invention;
Fig. 11 is a diagram illustrating an exemplary scenario in which a user device provides CAS according to an exemplary embodiment of the present invention;
Fig. 12 is a diagram illustrating an exemplary scenario in which a user device provides CAS according to an exemplary embodiment of the present invention;
Fig. 13 is a diagram illustrating an exemplary scenario in which a user device provides CAS according to an exemplary embodiment of the present invention;
Figs. 14A and 14B are diagrams of exemplary screens for explaining an operation of temporarily suspending a currently running rule in a user device according to an exemplary embodiment of the present invention;
Figs. 15A and 15B are diagrams of exemplary screens for explaining an operation of temporarily suspending a currently running rule in a user device according to an exemplary embodiment of the present invention;
Figs. 16A to 16C are diagrams of exemplary screens for explaining an operation of temporarily suspending a currently running rule in a user device according to an exemplary embodiment of the present invention;
Fig. 17 is a diagram illustrating an exemplary screen with an indication of a rule being executed in a user device according to an exemplary embodiment of the present invention;
Figs. 18A and 18B are diagrams illustrating exemplary screens with items notifying of rules being executed in a user device according to an exemplary embodiment of the present invention;
Figs. 19A and 19B are diagrams illustrating exemplary screens with items notifying of rules being executed in a user device according to an exemplary embodiment of the present invention;
Figs. 20A to 20C are diagrams illustrating exemplary screens with items notifying of rules being executed in a user device according to an exemplary embodiment of the present invention;
Figs. 21A and 21B are diagrams illustrating exemplary screens associated with an operation of notifying of rule execution in a user device according to an exemplary embodiment of the present invention;
Figs. 22A to 22C are diagrams illustrating exemplary screens associated with an operation of terminating a currently running rule in a user device according to an exemplary embodiment of the present invention;
Figs. 23A and 23B are diagrams illustrating exemplary screens associated with an operation of terminating a currently running rule in a user device according to an exemplary embodiment of the present invention;
Figs. 24 and 25 are diagrams illustrating situations in which the CAS service is terminated in a user device according to exemplary embodiments of the present invention;
Figs. 26A and 26B are diagrams illustrating exemplary screens associated with an operation of deleting a rule in a user device according to an exemplary embodiment of the present invention;
Fig. 27 is a flowchart illustrating a procedure of generating a rule in a user device according to an exemplary embodiment of the present invention;
Fig. 28 is a flowchart illustrating a procedure of providing CAS in a user device according to an exemplary embodiment of the present invention;
Fig. 29 is a flowchart illustrating a procedure of providing CAS in a user device according to an exemplary embodiment of the present invention;
Fig. 30 is a flowchart illustrating a procedure of providing CAS in a user device according to an exemplary embodiment of the present invention;
Figs. 31A to 31N are diagrams illustrating exemplary screens associated with an operation of generating a rule in a user device according to an exemplary embodiment of the present invention;
Figs. 32A to 32E are diagrams illustrating exemplary screens associated with an operation of executing a rule in a user device according to an exemplary embodiment of the present invention;
Figs. 33A to 33D are diagrams illustrating exemplary screens associated with an operation of suspending a currently running rule in a user device according to an exemplary embodiment of the present invention;
Figs. 34A to 34D are diagrams illustrating exemplary screens associated with an operation of terminating a currently running rule in a user device according to an exemplary embodiment of the present invention.
Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.
Embodiment
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. It includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to their bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention is provided for illustration purposes only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
It is to be understood that the singular forms include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "a component surface" includes reference to one or more of such surfaces.
Exemplary embodiments of the present invention relate to a Context-Aware Service (CAS) provision method and apparatus of a user device.
According to exemplary embodiments of the present invention, the user device can recognize various user contexts according to one or more user-defined rules.
According to exemplary embodiments of the present invention, the user device can execute one or more actions according to the recognized context and feed the context information back, as the result of executing the actions, to the user or to a predetermined person.
According to exemplary embodiments of the present invention, the CAS provision method and apparatus can feed context information back to the user through an external device (e.g., a television, an electric light, etc.) and/or transmit the context information to another user by message.
In various exemplary embodiments of the present invention, a rule can be defined through text (e.g., handwriting) or speech input in natural language. In various exemplary embodiments of the present invention, natural language is the language used by humans, as opposed to an artificial language (or machine language) invented for a specific purpose.
In various exemplary embodiments of the present invention, a rule can be activated in response to the input (or reception) of a command associated with the rule.
In various exemplary embodiments of the present invention, a rule can be identified or selected based on the received user input. The identified rule can be activated to detect a context corresponding to the condition of the rule. When the context is detected, the action corresponding to the condition can be executed.
In various exemplary embodiments of the present invention, when a rule is activated, the user device can monitor or detect the context in which the user device is operating. Based on the monitored or detected context, the user device can determine or recognize that it is operating in the context corresponding to the activated rule.
In various exemplary embodiments of the present invention, a rule can be composed of at least one condition and at least one action; methods or procedures for generating a rule are described below.
In various exemplary embodiments of the present invention, a predefined rule can be executed in response to the reception of an instruction corresponding to the rule.
In various exemplary embodiments of the present invention, an instruction can include a natural-language-based voice command, command statement, or text entered through various input means (e.g., a touch screen, a keyboard, a microphone, etc.). An instruction can also include a change in the user device (e.g., a change in gesture, orientation, etc.) detected according to a predefined rule by various sensors of the user device (e.g., a proximity sensor, an illuminance sensor, an acceleration sensor, a gyro sensor, a voice sensor, etc.). An instruction can also include the reception of an incoming message or an incoming sound corresponding to a predefined rule, or a change in the geographic position of the user (or user device) corresponding to a predefined rule.
In various exemplary embodiments of the present invention, an instruction for executing a rule (e.g., the definition of a command or command statement, a behavior the user device can sense, or the sensor for sensing that behavior) can be configured by means of natural-language-based speech or text input.
In various exemplary embodiments of the present invention, a command or imperative sentence, as one type of instruction for executing a rule, can be input in the form of a part (e.g., a word), a partial statement, or the complete statement of the natural language included when defining the rule.
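As one illustration of the word / partial-statement / complete-statement forms just described, a command could be matched by normalized substring containment against the rule's defining sentence. This matcher is a hypothetical sketch, not the patent's recognition method.

```python
def matches_rule(rule_text: str, command: str) -> bool:
    """True if the spoken or typed command is a word, a partial statement,
    or the complete statement drawn from the rule's defining sentence."""
    return command.strip().lower() in rule_text.strip().lower()

# A rule defined with the sentence below is triggered equally by a single
# word, a partial statement, or the complete statement.
rule_text = "I am going to take a taxi"
print(matches_rule(rule_text, "taxi"))                       # True (word)
print(matches_rule(rule_text, "take a taxi"))                # True (partial)
print(matches_rule(rule_text, "I am going to take a taxi"))  # True (complete)
print(matches_rule(rule_text, "subway"))                     # False
```

A real recognizer would normalize more aggressively (tokenization, speech-to-text variance), but containment captures the part/whole relationship the text describes.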
In various exemplary embodiments of the present invention, a statement can be the minimum expression unit conveying a complete thought or emotion; although it typically includes a subject and a predicate, either the subject or the predicate may be omitted.
In various exemplary embodiments of the present invention, the detection of a user device behavior, as another type of instruction, can be input through the operation of one or more sensors configured by the defined rule.
In various exemplary embodiments of the present invention, an action can include an operation that the user device executes when the situation specified in a currently running rule is recognized.
In various exemplary embodiments of the present invention, an action can include, for example, an internal operation control that feeds back information about the situation specified in the corresponding rule by controlling an internal component (e.g., a display unit, a communication module, a speaker), an external operation control that feeds back information about the situation specified in the corresponding rule by controlling an external device (e.g., a television, an electric light, an external speaker), and a control for operating both an internal component of the user device and an external device.
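The internal versus external action controls described above could be dispatched through a single lookup; the component and device names below are hypothetical placeholders.

```python
class ActionExecutor:
    """Dispatch an action either to an internal component (e.g. a speaker
    or display) or to an external device (e.g. a TV or lamp)."""

    def __init__(self, internal, external):
        self.internal = internal  # mapping: component name -> handler
        self.external = external  # mapping: device name -> handler

    def execute(self, target: str, info: str) -> str:
        handler = self.internal.get(target) or self.external.get(target)
        if handler is None:
            raise KeyError(f"unknown action target: {target!r}")
        return handler(info)      # feed back the situation information

executor = ActionExecutor(
    internal={"speaker": lambda info: f"announce: {info}"},
    external={"lamp": lambda info: f"lamp blinks for: {info}"},
)
print(executor.execute("speaker", "arrived home"))  # internal operation control
print(executor.execute("lamp", "arrived home"))     # external operation control
```

Keeping both tables behind one `execute` entry point mirrors the idea that a single rule action may drive internal components, external devices, or both.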
In various exemplary embodiments of the present invention, CAS denotes a service in which the user device recognizes the situation specified in a user-defined rule, executes the action corresponding to the situation, and provides the user (or a predetermined person) with information about the situation as the result of executing the action. The situation information includes all information available at the time of user interaction, such as the position of the user (or user device), an identifier, an activity, a state, and an application of the user device.
Hereinafter, configurations and operation control methods of a user device according to exemplary embodiments of the present invention are described with reference to the accompanying drawings. It should be noted that exemplary embodiments of the present invention are not limited to the configurations and operation control methods described below, and can be implemented with various changes and modifications without departing from the scope of the invention.
Fig. 1 is a block diagram illustrating a configuration of a user device according to an exemplary embodiment of the present invention.
Referring to Fig. 1, the user device 100 includes a radio communication unit 110, an input unit 120, a touch screen 130, an audio processing unit 140, a storage unit 150, an interface unit 160, a control unit 170, and a power supply 180. The user device 100 according to an exemplary embodiment of the present invention can be implemented with or without at least one of the components shown in Fig. 1, and with components not shown. For example, if the user device 100 according to an exemplary embodiment of the present invention supports a picture capture function, it can further include a camera module (not shown). Similarly, if the user device 100 does not support broadcast reception and playback functions, certain function modules (e.g., a broadcast reception module 119 of the radio communication unit 110) can be omitted.
The WLAN module 113 is responsible for establishing a WLAN link with an Access Point (AP) or another user device 100, and can be embedded in the user device 100 or implemented as an external device. Various wireless Internet access technologies are available, such as Wi-Fi, Wireless Broadband (WiBro), Worldwide Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), and the like. The WLAN module 113 can receive various types of CAS-related data (e.g., including rules) in a state of being connected to a server. In a state in which a WLAN link is established with another user device, the WLAN module 113 can, according to the user's intention, transmit various data (e.g., including rules) to the other user device and receive various data (e.g., including rules) from the other user device. The WLAN module 113 can also transmit various CAS-related data (e.g., including rules) to a cloud server and receive such data from the cloud server over the WLAN link. Under the control of the control unit 170, the WLAN module 113 can also transmit an action execution result (e.g., situation information) to at least one target user device, and can receive a message generated when the condition specified in a currently running rule is satisfied.
Short-range communication module 115 is responsible for the short haul connection of user's set 100.There are various available short-range communication technique, such as bluetooth, low-power consumption bluetooth (BLE), radio-frequency (RF) identification (RFID), Infrared Data Association (IrDA), ultra broadband (UWB), ZigBee and near-field communication (NFC) etc.When user's set 100 is connected to another user's set, short-range communication module 115 can send to the various data (comprising rule) about CAS described another user's set and receive the various data (comprising rule) about CAS from described another user's set according to user's intention.
The touchscreen 130 is an input/output device responsible for both the input function and the output function, and includes a display panel 131 and a touch panel 133. According to an exemplary embodiment of the present invention, if the touch panel 133 detects a user's touch gesture (for example, one or more touches, a tap, a drag, a sweep, a flick, and the like) while an execution screen of the user device 100 (for example, a rule (condition and action) configuration screen, an outbound call dialing screen, a message composition screen, a game screen, a gallery screen, and the like) is displayed on the display panel 131, the touchscreen 130 generates an input signal corresponding to the touch gesture to the control unit 170. The control unit 170 identifies the touch gesture and executes an operation according to the touch gesture. For example, if a touch gesture of writing natural-language text is detected on the touch panel 133 while the rule configuration screen is displayed on the display panel 131, the control unit 170 generates a rule in response to the touch gesture.
The storage unit 150 stores the following: the operating system (OS) of the user device 100; programs associated with the input and display control operations of the touchscreen 130, with rule-based CAS control operations (for example, rules comprising conditions and actions), with action execution according to rules and context awareness (for example, of conditions), and with context information feedback; and data generated semi-permanently or temporarily by the programs. According to an exemplary embodiment of the present invention, the storage unit 150 may also store configuration information for supporting the CAS. The configuration information may include information on whether a voice-based CAS or a text-based CAS is supported. The configuration information may also include, for each rule, at least one specified condition and the action corresponding to that condition.

The storage unit 150 may be implemented with at least one storage medium among flash memory type, hard disk type, micro type, and card type (for example, Secure Digital (SD) or eXtreme Digital (XD) card) memory, Random Access Memory (RAM), Dynamic RAM (DRAM), Static RAM (SRAM), Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Erasable PROM (EEPROM), Magnetic RAM (MRAM), magnetic disk, and optical disc type memory. The user device 100 may interoperate with web storage on the Internet serving as the storage unit 150.
According to an exemplary embodiment of the present invention, the control unit 170 may control the operations related to the CAS, such as user-defined rule configuration, rule-based context awareness, rule-based action execution, and feedback of context information as the result of an action execution. The control unit 170 (for example, the rule configuration module 173) may define a rule for providing the CAS according to user input (for example, natural-language voice or text input). The control unit 170 operatively receives the user input and, based on the received input, identifies a rule comprising a condition and an action corresponding to that condition. The control unit 170 may activate the rule so as to detect the context corresponding to the condition of the rule. If an instruction for executing a rule specified in a configured rule is detected, the control unit 170 (for example, the rule execution module 175) may execute one or more rules. The control unit 170 (for example, the condition checking module 177) may check (for example, determine) the condition (or context) to be recognized as a result of the rule execution. If the condition specified in the corresponding rule is recognized, the control unit 170 (for example, the action execution module 179) executes the action triggered when the condition is met. For example, when the context is detected, the control unit 170 executes the action corresponding to the condition. The control unit 170 (for example, the action execution module 179) executes at least one function (application) and performs the operation corresponding to that function (or application).

If the condition checking module 177 detects an event (an occurrence meeting a condition specified in a rule) while at least one rule is being executed through the rule execution module 175 in response to a user request, the control unit 170 extracts, through the action execution module 179, the function defined for executing the action corresponding to the event in the currently running rule. The control unit 170 may control the execution of the action corresponding to the extracted function through the action execution module 179. If no user-requested event is detected, the control unit 170 determines whether the current situation meets a condition for stopping at least one currently running rule. If the current situation meets that condition, the control unit 170 controls the rule execution module 175 to stop the running rule.

While at least one rule is being executed through the rule execution module 175, when an action corresponding to a condition checked (for example, determined) by the condition checking module 177 is executed, the control unit 170 may also control the operation of feeding back the context information resulting from the action execution of the action execution module 179. Likewise, while at least one rule is being executed through the rule execution module 175, when the execution of a rule is stopped according to a condition checked (for example, determined) by the condition checking module 177, the control unit 170 may also control a feedback operation for the stopping of the corresponding action of the action execution module 179.
In various exemplary embodiments of the present invention, the feedback operation according to an action execution may include presenting the action execution result (for example, context information) to the user through the display panel 131 and sending the action execution result (for example, context information) to another user through the radio communication unit 110. The feedback operation according to an action execution may also include sending, in correspondence with the action execution, a control signal for controlling the operation (for example, power on/off) of an external device (for example, an electric light, a TV, and the like) to the corresponding external device.

In various exemplary embodiments of the present invention, the feedback operation may include providing the device user with at least one of an audio effect (for example, a predetermined sound through the speaker 141), a visual effect (for example, a predetermined screen through the display panel 131), and a haptic effect (for example, a predetermined vibration pattern through a vibration module (not shown)).
The detailed control operations of the control unit 170 will become clearer in the descriptions of the operation and control method of the user device 100 made subsequently with reference to the accompanying drawings.

In various exemplary embodiments of the present invention, the control unit 170 may control operations related to the normal functions of the user device 100 in addition to the operations described above. For example, the control unit 170 may control the execution of applications and the display of their execution screens. The control unit 170 may also control the operation of receiving input signals generated by a touch-based input interface (for example, the touchscreen 130) in response to touch gestures, and of executing functions according to those input signals. The control unit 170 may also communicate various data over wired or wireless channels.
As described above, according to an exemplary embodiment of the present invention, the user device 100 comprises: the rule configuration module 173, which configures a computer-executable rule in response to a user's natural-language voice or text input for configuring the rule; the rule execution module 175, which executes the computer-executable rule in response to an instruction for rule execution in the form of a natural-language voice or text instruction, a motion of the user device, or a message received from outside; the condition checking module 177, which checks (for example, recognizes and/or determines) whether at least one condition (for example, a situation) specified in the rule is met; and the action execution module 179, which executes at least one action according to whether the condition specified in the rule is met.

In various exemplary embodiments of the present invention, the rule configuration module 173 may operate to recognize the natural-language voice or text input that the user makes in the rule configuration mode. In various exemplary embodiments of the present invention, the rule execution module 175 may configure a plurality of conditions for each rule and map a plurality of actions to the conditions. In various exemplary embodiments of the present invention, the condition checking module 177 may perform a context awareness function for checking the plurality of contexts corresponding to the conditions configured for each rule. In various exemplary embodiments of the present invention, the action execution module 179 may execute a plurality of actions simultaneously or sequentially in response to the recognition of the plurality of contexts of a rule.
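As a rough illustration only (not part of the specification), the division of labor among the four modules can be sketched in Python. The class and method names below are assumptions chosen for readability; the specification does not prescribe any particular data model.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Rule:
    """A CAS rule: trigger conditions mapped to one or more actions."""
    name: str
    conditions: List[str]                           # e.g. spoken keywords such as "subway"
    actions: List[Callable[[], str]] = field(default_factory=list)
    active: bool = False

class RuleConfigModule:
    """Builds a Rule from parsed natural-language input (cf. module 173)."""
    def define_rule(self, name: str, conditions: List[str],
                    actions: List[Callable[[], str]]) -> Rule:
        return Rule(name=name, conditions=conditions, actions=actions)

class RuleExecutionModule:
    """Activates configured rules so their conditions are monitored (cf. module 175)."""
    def __init__(self) -> None:
        self.running: List[Rule] = []
    def execute(self, rule: Rule) -> None:
        rule.active = True
        self.running.append(rule)
    def stop(self, rule: Rule) -> None:
        rule.active = False
        self.running.remove(rule)

class ConditionCheckModule:
    """Checks whether an observed event meets a running rule's condition (cf. module 177)."""
    def check(self, rule: Rule, event: str) -> bool:
        return rule.active and event in rule.conditions

class ActionExecutionModule:
    """Runs every action mapped to a satisfied condition (cf. module 179)."""
    def run(self, rule: Rule) -> List[str]:
        return [action() for action in rule.actions]
```

In this sketch the "subway" rule of Figs. 3C to 3K would be defined with the condition `"subway"` and two actions (turn on Wi-Fi, switch to vibration), executed sequentially by `ActionExecutionModule.run`.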
The CAS provision method according to any one of the various exemplary embodiments of the present invention may be implemented in software, in hardware, or in a combination of the two, or may be stored in a non-transitory computer-readable storage medium. In the case of a hardware implementation, the CAS provision method according to an exemplary embodiment of the present invention may be implemented with at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, and other electrical units for performing particular tasks.

Exemplary embodiments of the present invention may be implemented by the control unit 170 itself. In the case of a software implementation, the procedures and functions described in the exemplary embodiments of the present invention may be implemented with software modules (for example, the rule configuration module 173, the rule execution module 175, the condition checking module 177, the action execution module 179, and the like). A software module may perform at least one of the functions and operations described above.

The storage medium may be any non-transitory computer-readable storage medium storing the following program instructions: a program instruction for defining a rule for the CAS in response to user input; a program instruction for executing at least one rule in response to a rule execution instruction; and a program instruction for executing, when the condition (situation) specified in a rule is met, at least one action corresponding to the condition specified in the executed rule. The storage medium may also be a non-transitory computer-readable storage medium storing the following program instructions: a program instruction for configuring, in response to the user's natural-language voice or text input, a rule comprising a condition and an action corresponding to that condition; a program instruction for activating the rule in response to an instruction indicating the rule; a program instruction for determining whether the condition specified in the executed rule is met; and a program instruction for executing the action corresponding to the condition that is met.

In various exemplary embodiments of the present invention, the user device 100 may be any type of information and communication device, multimedia device, or equivalent thereof having an Application Processor (AP), a Graphics Processing Unit (GPU), or a Central Processing Unit (CPU). For example, the user device 100 may be any of a cellular communication terminal operating with various communication protocols corresponding to respective communication systems, a tablet Personal Computer (PC), a smartphone, a digital camera, a Portable Multimedia Player (PMP), a media player (for example, an MP3 player), a portable game console, a Personal Digital Assistant (PDA), and the like. The CAS provision method according to any one of the various exemplary embodiments of the present invention may also be applied to various display devices such as a Digital Television (TV), Digital Signage (DS), a Large Format Display (LFD), a laptop computer, a desktop PC, and the like.
Fig. 2 is a flowchart illustrating the CAS provision method of a user device according to an exemplary embodiment of the present invention.

Referring to Fig. 2, at step 201 the control unit 170 (for example, the rule configuration module 173) defines (for example, configures and generates) a rule in response to the user input, made by means of one of the input unit 120, the microphone 143, and the touchscreen 130, for defining a CAS rule.

For example, the user may input natural-language voice for configuring a rule through the microphone 143 in the rule configuration mode. The user may also input natural-language text for configuring a rule through the touchscreen 130 in the rule configuration mode. The control unit 170 (for example, the rule configuration module 173) recognizes and parses the user input (for example, by voice recognition and text recognition) and defines (for example, identifies) the rule to be executed. The control unit 170 (for example, the rule execution module 175) may control the user device 100 to enter an activated state in response to the user input (for example, an instruction for executing a rule) and wait for execution of the configured rule. The rule configuration and generation operations according to various exemplary embodiments of the present invention are described with reference to the accompanying drawings (for example, Figs. 3A to 3K and Figs. 31A to 31N).
If an instruction for executing a specific rule is received in a state where at least one rule has been defined in response to user input, the control unit 170 (for example, the rule execution module 175) controls execution of the corresponding rule (step 203).

For example, the user may input a natural-language command or command statement for executing a predefined rule through one of the input unit 120, the microphone 143, and the touchscreen 130. The user may input a specific instruction for activating at least one rule for the CAS using a function key input, a voice input, a touch input (for example, text writing or selection of a window widget), or a gesture-based input (for example, a change in the posture of the user device 100, such as tilting or accelerated motion). According to an exemplary embodiment of the present invention, the instruction for executing the corresponding rule may be generated by any of various user input behaviors that meet the condition specified in the rule. According to an exemplary embodiment of the present invention, the instruction for rule execution may also be generated in the form of receiving a particular message or sound that meets the condition specified in the rule. The control unit 170 (for example, the rule execution module 175) may recognize such an instruction meeting the condition for rule execution, and execute the corresponding rule in response to the recognized instruction so as to activate the CAS.
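As an informal sketch (not part of the specification), recognizing a spoken command and matching it against the configured rules could look like the following. The function name and matching strategy (substring match on the recognized text) are assumptions; an actual implementation would sit behind a speech recognizer.

```python
from typing import List, Optional

def match_rule_command(utterance: str, rule_names: List[str]) -> Optional[str]:
    """Return the first configured rule whose trigger phrase occurs in the
    normalized utterance, or None when no rule matches."""
    text = utterance.strip().lower()
    for name in rule_names:
        if name.lower() in text:
            return name
    return None

# e.g. a spoken command "Run the subway rule" would activate the "Subway" rule
```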
At step 205, the control unit 170 (for example, the condition checking module 177) detects whether the condition (situation) corresponding to the currently running rule is triggered.

If the condition corresponding to the currently running rule is triggered, the control unit 170 (for example, the action execution module 179) may control the execution of at least one action corresponding to the condition (step 207).

For example, if at least one rule is executed, the control unit 170 (for example, the condition checking module 177) may monitor whether the action-triggering condition specified in the rule is met. If the action-triggering condition or situation is met, the control unit 170 (for example, the action execution module 179) may control internal and/or peripheral operations for executing the corresponding action. The action execution may include, for example, executing a function (or application) according to the predefined rule (for example, condition and action), generating the execution result (for example, context information), and feeding the execution result back to the user or to another person.

According to an exemplary embodiment of the present invention, the rule definition operation of step 201 may be performed by the user in advance of executing the target rule, or may be performed separately. In the former case, the user may immediately input the instruction for rule execution at step 203; in the latter case, the user performs both step 201 for defining the rule and step 203 for inputting the instruction for executing the rule.
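The monitoring loop of steps 205 and 207 can be sketched informally as follows. This is an assumption-laden simplification (rules as plain dictionaries, conditions as event strings), not the specification's implementation.

```python
def cas_loop(running_rules, events):
    """One pass over incoming events: for every running rule whose condition
    matches an event (cf. step 205), fire all mapped actions (cf. step 207)
    and collect the execution results for feedback."""
    results = []
    for event in events:
        for rule in running_rules:
            if event in rule["conditions"]:          # condition triggered?
                for action in rule["actions"]:       # execute mapped actions
                    results.append((rule["name"], action(event)))
    return results
```

A rule such as the "subway" example would fire only on the matching event, leaving unrelated events (for example, "home") without effect.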
Figs. 3A to 3K are diagrams of exemplary screens for explaining the rule generation operation of a user device according to an exemplary embodiment of the present invention.

Referring to Figs. 3A to 3K, an exemplary operation is shown in which the control unit 170 (for example, the rule configuration module 173) receives the natural-language voice input made by the user and defines and/or recognizes a rule (for example, a condition and an action) according to the voice input.

Fig. 3A shows an exemplary screen of the user device 100 when the device user executes an application for the CAS according to an exemplary embodiment of the present invention.

Referring to Fig. 3A, the CAS application provides a User Interface (UI) or Graphical User Interface (GUI) (hereinafter referred to as a "screen interface") including the following menus: a first menu 310 (for example, a My menu, "My rules" in Fig. 3A) for displaying the list of rules defined by the user; a second menu 320 (for example, an action menu, "Running rules" in Fig. 3A) for displaying the list of currently running rules among the defined rules; and a third menu 350 (for example, an add menu, "Add rule" in Fig. 3A) for defining an additional new rule.

The screen interface may provide the list corresponding to the menu item selected between the first menu 310 and the second menu 320. As shown in Fig. 3A, if the first menu 310 is selected, the items corresponding to the user-defined rules (for example, a "Home" item 330 and a "Taxi" item 340) are displayed in a list.

In the state of Fig. 3A, the user may select (touch) the third menu ("Add rule") 350 for defining a new rule. The control unit 170 of the user device 100 (for example, the rule configuration module 173) then determines that the operation for defining a rule is to be started, and switches to the rule configuration mode together with displaying the corresponding screen interface. Fig. 3B shows an exemplary screen interface displayed in this case.
Fig. 3B shows an exemplary screen of the user device 100 when the device user executes the rule configuration mode for defining a rule. In an exemplary embodiment of the present invention, the operation described in Fig. 3B provides the user with a tutorial on the method of defining a rule, and this tutorial provision step may be omitted according to the user's intention.

Referring to Fig. 3B, the tutorial may be provided in the form of a pop-up window 351. The control unit 170 may control the display of the tutorial in the form of the pop-up window 351, where the pop-up window 351 presents a guide (for example, a picture and text) on how to define a rule. For example, the guide may be composed of an image 351c indicating activation of a voice recognition mode and text 351d guiding how to define a rule (for example, how to state the rule, such as "If I say 'subway', play music" or "If I say 'subway', perform the following operation").

The pop-up window 351 providing the tutorial may include a menu item 351a (for example, a "Start" button) for confirming rule definition and a menu item 351b (for example, a "Cancel" button) for canceling rule definition. The user may continue or cancel the definition of a new rule by selecting one of the menu items 351a and 351b of the pop-up window 351 providing the tutorial as shown in Fig. 3B.

Figs. 3C to 3K show the operation of defining a new rule in the rule configuration mode according to an exemplary embodiment of the present invention. Figs. 3C to 3K show the exemplary screens displayed while the user's natural-language voice is being received and the condition and action of the corresponding rule are being configured in response to that voice.
As described in Figs. 3C to 3K, the control unit 170 displays a pop-up window 353 prompting the user to make a voice input (for example, "Say the rule") and waits for the user's voice input in the rule configuration mode. In the state of Fig. 3C, the user may make a natural-language voice input for at least one condition and at least one action corresponding to the condition, based on the type of rule to be defined (for example, a singular structure or a plural structure).

For example, in the state of Fig. 3D, the user may make the voice input "If I say 'subway', perform the following operation". The control unit 170 then recognizes the user's voice input and displays a pop-up window 355 as the result of the voice recognition. According to an exemplary embodiment of the present invention, the control unit 170 may display the pop-up window 355 and then wait for the user's voice input, where the pop-up window 355 prompts the user, with a notification message (for example, "What can I do?") together with the voice recognition result (for example, "[Subway] What can I do?"), to make a voice input for the action to be executed when the condition "subway" is met.

In the state of Fig. 3E, the user may say "Turn on Wi-Fi". As shown in Fig. 3E, the control unit 170 recognizes the user's voice input and displays a pop-up window 357, where the pop-up window 357 notifies the user of the voice recognition mode and the progress (for example, "Recognizing") of the operation of mapping the condition (for example, "subway") and the action (for example, turning on Wi-Fi). In various exemplary embodiments of the present invention, the screen display related to the recognition operation may be omitted.

Once the recognition and mapping operations are completed, the control unit 170 may provide the recognition and mapping result in the form of a pop-up window 359 as shown in Fig. 3F. For example, the control unit 170 may display the pop-up window 359 notifying the user of the information on the newly defined rule and the specified action associated with the rule. According to an exemplary embodiment of the present invention, the control unit 170 may notify the user that a new rule with the condition "subway" has been generated, the new rule being configured to turn on Wi-Fi when the condition is met, together with menu items (for example, "Confirm" and "Cancel") for prompting the user's operation. In the state of Fig. 3F, the user may select the "Confirm" menu item to apply the configured rule or select the "Cancel" menu item to cancel the configured rule.
In the state of Fig. 3F, if the user selects the "Confirm" menu item (or makes a corresponding voice input), the control unit 170 may, as shown in Fig. 3G, display a prompt for the user's next voice input (for example, "Say the next command") and wait for the user's voice input. In the state of Fig. 3G, the user may make the voice input "Change to vibration". As shown in Fig. 3H, the control unit 170 may then recognize the voice input and display a pop-up window 363, where the pop-up window 363 notifies the user of the voice recognition mode and the progress (for example, "Recognizing") of the operation of mapping the condition (for example, "subway") and the additional action (for example, configuring vibration). If the recognition and mapping operations are completed, the control unit 170 may provide the operation result in the form of a pop-up window 365 as shown in Fig. 3I. For example, the control unit 170 may display the pop-up window 365 notifying the user of the rule configured in response to the user's voice input and the action corresponding to that rule. According to an exemplary embodiment of the present invention, the control unit 170 may notify the user through the pop-up window 365 that the new rule with the condition "subway" has been generated, the new rule switching the device to the vibration mode when the condition is met, together with menu items (for example, "Confirm" and "Cancel"). In the state of Fig. 3I, the user may select the "Confirm" menu item to apply the configured rule or select the "Cancel" menu item to cancel the configured rule.

In the state of Fig. 3I, if the user selects the "Confirm" menu item, the control unit 170 may, as shown in Fig. 3J, display a pop-up window 367 prompting the user to make a voice input (for example, "Say the next command") and wait for the voice input. In the state of Fig. 3J, the user may make the voice input "Finish (or stop)". The control unit 170 then recognizes the voice input and provides the information on the condition specified in the rule defined through the steps of Figs. 3B to 3J and the at least one action corresponding to that condition, as shown in Fig. 3K.

For example, the control unit 170 may display the condition "subway" of the rule defined through the above operations together with the actions "Wi-Fi on configuration" and "vibration mode switching configuration" mapped to that condition, as shown in Fig. 3K. In this manner, the newly defined rule may be added to the list of rules as shown in Fig. 3A. For example, the newly added rule may be displayed in the list as a "Subway" item 360 together with the previously defined items (for example, "Home" 330 and "Taxi" 340), and the "Subway" item 360 may be provided with its details (for example, condition and actions). The screen interface may also display various settings of the device. For example, the screen interface may show a Wi-Fi setting 371, a sound setting 373, and the like. The device settings may be associated with the items (for example, item 330, item 340, and/or item 360).
As described above with reference to Figs. 3A to 3K, according to various exemplary embodiments of the present invention, at least one action may be mapped to one condition. In an exemplary embodiment of the present invention, the CAS method may support both a singular rule definition operation and a plural rule definition operation. This may be summarized as follows.

The singular-structure rule definition operation may be summarized as shown in Table 1.

Table 1

The multi-structure rule definition operation may be summarized as shown in Table 2.

Table 2

As shown in Table 1 and Table 2, a simple "if" statement such as <if "home" is said, switch to the ringtone mode> may be used, as may a complex "if" statement such as <if "home" is said, mute the TV sound when an incoming call is received>. According to an exemplary embodiment of the present invention, a plurality of actions corresponding to at least one condition may be configured through a simple or complex "if" statement (for example, terminal functions, application-accessory (App+Accessory) interoperation adapted to the situation, and use of a cloud service). Among the plurality of actions, the terminal functions may include Wi-Fi mode configuration, ring/vibration/silent mode switching, text message transmission (voice configuration of recipient and content), camera flash blinking, and the like; the use of a cloud service may include checking (for example, determining) the user's location (using GPS) and then sending a text message, and the like.

The types of conditions (or instructions) specifiable in a rule and the types of actions configurable for each condition may be summarized as shown in Table 3.

Table 3
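The contrast between the simple and complex "if" statements above can be illustrated informally. The dictionary layout and the `describe` helper below are assumptions for illustration only; they are not the representation used by the specification.

```python
# Singular-structure rule: one condition, one action (cf. Table 1).
single_rule = {
    "condition": 'say "home"',
    "actions": ["switch to ringtone mode"],
}

# Multi-structure rule: the same condition guarded by a sub-condition,
# with several actions (cf. Table 2).
multi_rule = {
    "condition": 'say "home"',
    "sub_condition": "incoming call received",
    "actions": ["mute TV sound", "send text message"],
}

def describe(rule: dict) -> str:
    """Render a rule in the if-statement form used in the description."""
    head = f'if {rule["condition"]}'
    guard = rule.get("sub_condition")
    if guard:
        head += f", when {guard}"
    return head + ", then " + " and ".join(rule["actions"])
```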
According to various exemplary embodiments of the present invention, when a specified condition is met, the user device 100 may interact with the user (for example, by question and answer) by providing voice or text feedback about the information required to execute the action the user intends. According to an exemplary embodiment of the present invention, the information about all the actions supported by the user device 100 that can be received from the user may be provided in the form of a database (DB). According to an exemplary embodiment of the present invention, in the case of a text message transmission function, the user device 100 may recognize the need for additional information about the recipient and the text message, prompt the user in voice or text form to input the additional information, give an alarm about an erroneous input, and request that the input be made again. This operation is described exemplarily hereinafter with reference to Figs. 4A to 4J.
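The prompting loop described above (ask for each missing piece of information until the action is fully specified) resembles slot filling. The following is an illustrative sketch under assumed names; the required-information table per action would in practice come from the database mentioned above.

```python
from typing import Dict, Optional

# Assumed per-action required information; the "send message" entries mirror
# the recipient/content prompts of Figs. 4C and 4D.
REQUIRED_SLOTS: Dict[str, list] = {
    "send message": ["recipient", "content"],
    "send my location": ["recipient", "interval"],
}

def next_prompt(action: str, collected: Dict[str, str]) -> Optional[str]:
    """Return the next missing piece of information the device should ask
    the user for, or None once the action is fully specified."""
    for slot in REQUIRED_SLOTS.get(action, []):
        if slot not in collected:
            return f"Please say the {slot}"
    return None
```

After each recognized answer is stored in `collected`, the device calls `next_prompt` again; a `None` result corresponds to the summary screen of Fig. 4F.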
Figs. 4A to 4J are diagrams of exemplary screens for explaining the rule generation operation of a user device according to an exemplary embodiment of the present invention.

Referring to Figs. 4A to 4J, in the state of Fig. 3A (or Fig. 3B) the control unit 170 may display, in response to entering the rule configuration mode, a pop-up window 451 prompting the user to make a voice input, and wait for the voice input. In the state of Fig. 4A, the user may make a natural-language voice input for configuring at least one condition and at least one action per condition according to the type of rule to be defined (for example, a singular structure or a plural structure).

For example, in the state of Fig. 4A, the user may make the voice input "If I say 'taxi', perform the following operation". The control unit 170 may then recognize the user's voice input and provide the recognition result in the form of a pop-up window 453, as shown in Fig. 4B. According to an exemplary embodiment of the present invention, the control unit 170 may display the pop-up window 453 and wait for a voice input, where the pop-up window 453 prompts the user to make a voice input for the action to be executed when the condition "taxi" is met, together with the voice recognition result (for example, "[Taxi] What can I do?").
Under the state of Fig. 4 B, user can carry out the phonetic entry of " transmission message ".Then, control module 170 can be identified phonetic entry and check for example, the information about action (, " transmission message ") that need to receive from user, to carry out feedback, as shown in Fig. 4 F.
For example, as shown in Figure 4 C, control module 170 can display reminding user input " recipient " as the pop-up window 455(of the required information of action " transmission message " for example, " order [taxi] please be said recipient "), and wait for user's phonetic entry.Under the state of Fig. 4 C, user can phonetic entry be sent out the particular recipient (for example, " boyfriend ") of message.Control module 170 can be identified the voice of being inputted by user, using the form prompting user input " message content " of pop-up window 457 as for moving the information of " transmission message ", (for example, " [boyfriend] said to content ") as shown in Figure 4 D, and wait for user's phonetic entry.
In the state of FIG. 4D, the user may input the message content to be sent to the recipient (e.g., "boyfriend"). According to an exemplary embodiment of the present invention, the user may make the voice input "send my location" as the message content to be sent to the "boyfriend". The control unit 170 may then recognize the user's voice input, check the necessity of additional information for executing the action "send location", display a pop-up window 459 prompting the user to input the time interval for executing the "send location" action (e.g., "How often shall it be sent to [boyfriend]?"), and wait to receive a voice input.
In the state of FIG. 4E, the user may input the transmission interval for executing the action "send my location". According to an exemplary embodiment of the present invention, the user may make the voice input "every 5 minutes" as the transmission interval for the action "send my location". The control unit 170 may then recognize the voice input and display the recognition result, as shown in FIG. 4F.
For example, as shown in FIG. 4F, the condition information 410 configured for the user-defined rule and the action information 411, 413, 415, and 417 corresponding to the condition may be provided in the form of a pop-up window 461. In particular, in the case of the action information 411, 413, 415, and 417, auxiliary actions configured in dependence on the initially configured action may be provided. According to an exemplary embodiment of the present invention, the first action may be set to "send a message" as indicated by reference numeral 411; the second action may be set to "boyfriend", indicated by reference numeral 413, as the target to which the message is sent according to the first action; the third action may be set to "send my location", indicated by reference numeral 415, as the content of the message to be sent to the "boyfriend" according to the first and/or second action; and the fourth action may be set to "5 minutes", indicated by reference numeral 417, as the transmission interval at which the message including "my location" is sent to the "boyfriend" according to the first, second, and/or third actions. For example, according to an exemplary embodiment of the present invention, the control unit 170 may interactively request, by voice or text, the information required to execute the previous action. At this time, when the control unit 170 no longer identifies any action requiring additional information, it may provide, in the form of the pop-up window 461, the information about the requested actions (e.g., "send a message" 411, "boyfriend" 413, "send my location" 415, and "5 minutes" 417 configured for the condition (e.g., "taxi" 410)), as shown in FIG. 4F. In this manner, the user may define a rule specifying the action of sending a message including "my location" to the "boyfriend" every 5 minutes when the condition "taxi" is fulfilled.
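The condition-to-actions structure assembled through the dialogue above can be sketched as follows. This is an illustrative sketch only; the class names and data layout are assumptions of this description, not elements of the disclosed embodiment:

```python
from dataclasses import dataclass, field

# Illustrative sketch (all names assumed) of the rule assembled in FIGS. 4A-4J:
# one condition ("taxi") mapped to a first action plus the auxiliary actions
# (recipient, content, interval) configured in dependence on it.

@dataclass
class Rule:
    condition: str                      # spoken keyword that fulfills the rule
    actions: list = field(default_factory=list)

def build_taxi_rule():
    rule = Rule(condition="taxi")                         # condition info 410
    rule.actions.append({"action": "send message"})       # first action (411)
    rule.actions.append({"recipient": "boyfriend"})       # second action (413)
    rule.actions.append({"content": "send my location"})  # third action (415)
    rule.actions.append({"interval_minutes": 5})          # fourth action (417)
    return rule

rule = build_taxi_rule()
```

Each interactive prompt in FIGS. 4B to 4E would, under this sketch, append one further entry to the action list until no action requiring additional information remains.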
Meanwhile, in the state of FIG. 4F, the user may select the "Confirm" menu item to apply the definition of the actions (e.g., "boyfriend", "send my location", and "5 minutes") corresponding to the condition (e.g., "taxi") of the rule configured through the above procedure, or select the "Cancel" menu item to cancel or reconfigure the rule.
In the state of FIG. 4F, if the user selects the "Confirm" menu item (or makes the voice input by saying "Confirm"), the control unit 170 may display a pop-up window 463 prompting the user to make a voice input (e.g., "Please speak the next command"), as shown in FIG. 4G, and wait for the voice input. In the state of FIG. 4G, the user may make the voice input "switch to vibration" for switching to the vibration mode, as an additional action corresponding to the condition (e.g., "taxi"). The control unit 170 may then recognize the user's voice input and display, in the form of a pop-up window 465, a voice recognition mode indicator, the previously input condition (e.g., "taxi"), and the additional action prompt, as shown in FIG. 4H. According to an exemplary embodiment of the present invention, as shown in the pop-up window 465, the control unit 170 may notify the creation of the new rule with the condition "taxi", along with menu items (e.g., "Confirm" and "Cancel"), wherein the new rule is for switching to the vibration mode when the condition is fulfilled.
In the state of FIG. 4H, if the user selects the "Confirm" menu item, the control unit 170 may display a pop-up window 465 prompting the user to make the next voice input (e.g., "Please speak the next command"), as shown in FIG. 4I, and wait for the user's voice input. In the state of FIG. 4I, the user may make the voice input "end (or stop)" to finish further rule configuration. The control unit 170 may then recognize the user's voice input and provide information about the condition specified in the rule configured through the steps of FIGS. 4A to 4I and the at least one action corresponding to the condition, as shown in FIG. 4J.
For example, as shown in FIG. 4J, the control unit 170 may provide a screen notifying that the rule has the condition "taxi" associated with the actions "send a message" and "sound setting", together with the details "send a message including my location to the boyfriend every 5 minutes" and "sound is set to vibration".
Meanwhile, according to various exemplary embodiments of the present invention, as shown in FIGS. 4A to 4F, the control unit 170 may request from the user, in the form of voice or text, the information additionally needed according to the action input by the user. According to various exemplary embodiments of the present invention, as shown in FIGS. 4G to 4I, the control unit 170 may recognize actions that require no additional information (e.g., the sound setting), and skip requesting further information from the user so as to proceed to the next step.
The operation of defining a rule according to various exemplary embodiments of the present invention has been described above. Hereinafter, exemplary operations of executing a rule defined as above are described. According to various exemplary embodiments of the present invention, a predefined rule may be executed immediately in response to the user's voice or text input as described above. In addition, a widget may be generated in the user device according to the user's definition of the predefined rule, and the corresponding rule may be executed through the widget, as will be described below. For example, according to various exemplary embodiments of the present invention, an instruction for a rule may be executed through the widget.
In various exemplary embodiments of the present invention, if a certain operation, such as the reception of an incoming call, interrupts the rule generation procedure, the rule generation operation proceeds until its state is saved (or temporarily stored), before the operation that caused the interruption is processed.
FIGS. 5A to 5E are diagrams for explaining the operation of executing a predefined rule in the user device according to an exemplary embodiment of the present invention.
FIGS. 5A to 5E illustrate exemplary operations of the following processing: the rule execution module 175 of the control unit 170 receives the natural-language-based voice input made by the user and executes a rule in response to the voice input; the condition checking module 177 of the control unit 170 checks the condition specified in the rule; and the action execution module 179 of the control unit 170 executes, when the condition is fulfilled (e.g., when the context is detected), at least one action corresponding to the condition.
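The three-stage division of labor among the rule execution module 175, the condition checking module 177, and the action execution module 179 can be sketched as below. The class and method names are assumptions made for illustration, not the disclosed implementation:

```python
# Minimal sketch (assumed names) of the pipeline attributed to control unit 170:
# execute() stands in for the rule execution module (175), which matches a
# natural-language input to a defined rule; on_context() combines the
# condition checking module (177) and action execution module (179).

class RuleEngine:
    def __init__(self):
        self.rules = {}      # condition keyword -> list of action callables
        self.active = set()  # conditions currently being monitored

    def define(self, condition, *actions):
        self.rules[condition] = list(actions)

    def execute(self, spoken_input):
        """Rule execution module (175): activate the rule named in the input."""
        for condition in self.rules:
            if condition in spoken_input:
                self.active.add(condition)
                return condition
        return None

    def on_context(self, detected):
        """Condition checking (177) plus action execution (179)."""
        results = []
        if detected in self.active:
            for action in self.rules[detected]:
                results.append(action())
        return results

engine = RuleEngine()
engine.define("subway", lambda: "Wi-Fi on", lambda: "switch to vibrate")
loaded = engine.execute("execute subway")
fired = engine.on_context("subway")
```

Under this sketch, the "subway" example of FIGS. 5B to 5E corresponds to one `execute` call followed by one `on_context` call when the context is detected.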
Referring to FIGS. 5A to 5E, FIG. 5A shows an exemplary screen of the user device 100 when a widget for the CAS according to an exemplary embodiment of the present invention is provided.
As shown in FIG. 5A, the CAS widget 500 may be presented on the home screen (or menu screen) of the user device 100. The widget 500 may be provided with an instruction input region (or rule execution button) 510, with which the user makes an input (e.g., a tap or touch) for executing a rule, and an execution information region 520 showing information about the currently running rules among the user-defined rules. The widget 500 may also be provided with a refresh function item 530 for updating the information about the currently running rules. According to an exemplary embodiment of the present invention, the instruction input region 510 may be provided with an image or text in consideration of intuitiveness for the user. FIG. 5A illustrates an exemplary case in which the execution information region 520 indicates that no rule is currently running.
In the state of FIG. 5A, the user may select (e.g., by a touch gesture or tap) the instruction input region 510 for executing a rule. As shown in FIG. 5B, the control unit 170 (e.g., the rule execution module 175) then executes the rule function, displays a pop-up window 551 prompting the user to input (or speak) information about the rule to be executed (e.g., "Speak a command"), and waits for the user's voice input.
In the state of FIG. 5B, the user may make a voice input of the rule to be executed (e.g., "subway"). As shown in FIG. 5C, the control unit 170 may then recognize the user's voice input and provide a pop-up window 553 notifying that the rule corresponding to "subway" is being loaded (or recognized). In various exemplary embodiments of the present invention, the display of the recognition progress screen may be omitted.
If the recognition and loading operations are completed, the control unit 170 may provide the recognition and loading result in the form of a pop-up window 555. For example, the control unit 170 may provide, through the pop-up window 555, information about the condition specified in the rule to be executed according to the user's voice input and the action corresponding to the condition. According to an exemplary embodiment of the present invention, the control unit 170 may provide a notification that the rule to be executed is "subway", configured with the condition "subway" (e.g., "Executing [subway]"), wherein the condition "subway" has the actions "Wi-Fi on" and "switch to vibration" (e.g., "Wi-Fi on", "Set to vibration"). In various exemplary embodiments of the present invention, the display of the rule information screen may be skipped, and the procedure may jump to the operation corresponding to FIG. 5E. The screen of FIG. 5E may be displayed for a predetermined duration, after which the corresponding operation is executed.
When the predetermined duration elapses in the state of FIG. 5D, the control unit 170 may provide, in the execution information region 520 of the widget, information (e.g., "subway") about the rule executed by the user (or the currently running rule) in the form of an image or text. For example, as shown in FIG. 5E, the execution information region 520 of the widget 500 may present an icon or text indicating "subway" in place of the message indicating that no rule is currently running. In addition, the control unit 170 may provide, in response to the execution of the rule, a notification item 550 for notifying the currently running rule in the indicator region showing the various operation states of the user device 100, as shown in FIG. 5E. The notification is described later.
Although FIG. 5E is directed to an exemplary case in which only one rule is running, a plurality of rules may run simultaneously, in which case the execution information region 520 may provide information about the plurality of currently running rules. Although the description is directed to an exemplary case in which the user is prompted to make a voice input upon selection of the instruction input region 510, a list of the user-defined rules may instead be provided upon selection of the instruction input region 510, so that the user selects at least one rule.
The control unit 170 (e.g., the condition checking module 177) may also operate to determine whether the condition specified in a currently running rule, among the various rules defined through the above procedure, is fulfilled. The control unit 170 (e.g., the action execution module 179) may operate to execute at least one action mapped to the fulfilled condition of the rule (e.g., at least one action mapped to the context corresponding to the condition of the rule).
As described above, according to various exemplary embodiments of the present invention, a rule may be executed through the widget 500. According to an exemplary embodiment of the present invention, the user may select the instruction input region (or rule execution button) 510 and speak the configured rule (or command). The control unit 170 may then provide pop-up text or voice feedback to the user, so as to provide information about the rule start time and the actions to be executed. The control unit 170 may also add the corresponding rule to the execution information region 520 and display, in the indicator region, the notification item 550 indicating that a rule is currently running. The notification item 550 is described later.
Hereinafter, detailed operations of the CAS control method according to various exemplary embodiments of the present invention are described with reference to the accompanying drawings (e.g., FIGS. 6 to 13).
FIG. 6 is a diagram illustrating an example scenario in which the user device provides the CAS according to an exemplary embodiment of the present invention.
FIG. 6 illustrates the following example scenario: the user configures and executes a rule through voice interaction, and the user device 100 checks the fulfillment of the condition specified in the rule and executes the action triggered when the condition is fulfilled. In particular, in FIG. 6, the user executes action 1 (e.g., sound setting) corresponding to condition 1 (e.g., home), and executes action 2 (e.g., blink the lamp) and action 3 (e.g., mute the TV sound) under a sub-condition (condition 2) (e.g., when answering an incoming call).
Referring to FIG. 6, the user may define a rule using the user device 100. For example, the user may activate the rule generation function by manipulating the user device 100 and define a rule for the user device 100 so as to change the bell sound at a specific location and, when an incoming call is detected at the specific location, blink the lamp and mute the TV sound. According to an exemplary embodiment of the present invention, the user may define, through stepwise interaction with the user device 100, the rule "Switch the bell mode to the ringtone mode at home, and, when an incoming call is received at home, blink the lamp and mute the TV sound". The operation of defining the rule may be performed through the rule generation procedure described with reference to FIGS. 3A to 3K and FIGS. 4A to 4K. The operation of defining the rule may be performed through natural-language-based interaction.
The user may command the execution of the defined rule through natural-language-based voice or text interaction. The situation to be detected (e.g., the condition) may be "saying [home], or receiving a call at home", and the action to be taken when the condition is fulfilled may be "set the lamp to blink and mute the TV sound". Although not separately defined in the case of FIG. 6, the user device 100 may control the execution of an additional action depending on the executed action. For example, after blinking the lamp and muting the TV sound, another action "restore the original state of the lamp and resume the TV sound when the call session established upon the call reception ends" may be executed.
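The nested structure of this rule, with a primary condition, a sub-condition, and a restoring action, can be sketched as follows. The class name and state keys are assumptions made for illustration only:

```python
# Sketch of the nested rule of FIG. 6 (names assumed): a primary condition
# ("home") with an immediate action, plus a sub-condition ("incoming call")
# whose actions fire only while the primary rule is active, and a restoring
# action when the call session ends.

class NestedRule:
    def __init__(self):
        self.active = False
        self.state = {"bell": "vibrate", "lamp": "steady", "tv_muted": False}

    def trigger(self, event):
        if event == "home":                              # condition 1
            self.active = True
            self.state["bell"] = "ringtone"              # action 1: sound setting
        elif event == "incoming call" and self.active:   # condition 2
            self.state["lamp"] = "blinking"              # action 2
            self.state["tv_muted"] = True                # action 3
        elif event == "call ended" and self.active:
            self.state["lamp"] = "steady"                # restore previous state
            self.state["tv_muted"] = False
        return dict(self.state)

rule = NestedRule()
rule.trigger("home")
after_call = rule.trigger("incoming call")
```

Note that the sub-condition's actions are gated on the primary rule being active, mirroring the scenario in which the incoming call only triggers the lamp and TV actions at home.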
In the state in which the rule is defined, the user may execute the defined rule when necessary. For example, when entering the home from outdoors, the user makes the voice input by saying "home" through the procedure described in FIGS. 5A to 5E. The user device 100 may then check (e.g., determine) whether a rule corresponding to "home" exists among the currently running rules. If the rule corresponding to "home" is running, the device 100 may check (e.g., determine) at least one condition specified in the rule "home" and the action corresponding to the condition.
" family " of the first condition defining in user's set 100 identification conduct rules " family " also identifies as " ring tone modes switching " with corresponding the first action of first condition.Therefore, as indicated in reference number 610, user's set 100 is switched to ring tone modes in response to user's phonetic entry " family " by the pointing-type of user's set 100.
The user device 100 may also recognize the second condition "when a call is received" specified in the rule "home" and monitor for an interrupt corresponding to this condition (e.g., the reception of an incoming call). Thereafter, if an incoming call is received, as indicated by reference numeral 620, the user device 100 recognizes, as a reaction to the interrupt, the second condition "when a call is received" specified in the rule "home". The user device 100 may operate to control the second action "blink the lamp" (as indicated by reference numeral 630) and the third action "mute the TV sound" (as indicated by reference numeral 640).
If the user accepts the incoming call (e.g., a call session is established) while the bell sound indicating the reception of an audio call (e.g., the telephone ringtone) is being played, the user device 100 may return the lamp to its previous state, and, if the call session is released, cancel the muting of the TV sound.
FIG. 7 is a diagram illustrating an example scenario in which the user device provides the CAS according to an exemplary embodiment of the present invention.
FIG. 7 illustrates the following example scenario: the user configures and executes a rule through voice interaction, and the user device 100 recognizes the condition specified in the rule and executes the action triggered when the condition is fulfilled. In particular, in FIG. 7, the user may define a rule for automatically sending the user's location (or the location of the user device 100) to at least one target user device at a predetermined time interval. The user device 100 may execute the rule in response to the user's input and execute the action of sending the location information to the at least one target user device at the time interval specified in the executed rule.
Referring to FIG. 7, the user may define a rule using the user device 100. For example, the user may activate the rule generation function (or application) by manipulating the user device 100 and define a rule for sending the location information to at least one target user device 200 at a predetermined time interval. According to an exemplary embodiment of the present invention, the user may generate the rule "If I take a taxi, send my location information to my father and younger brother (or younger sister) every 5 minutes". At this time, the rule may be generated through voice interaction using the microphone 143 or through text interaction using the input unit 120 or the touch screen 130, as described later. Preferably, the voice and text interactions are based on natural language, as described above. For example, the situation to be detected as specified in the defined rule (e.g., the condition) may be "when currently moving outdoors", and the action to be taken when the condition is fulfilled may be "send the location information to the father and the younger brother (sister) every 5 minutes".
The user device 100 may provide an interface for specifying the at least one target user device 200 to which the location information is to be sent, and map the information about the at least one target device 200 (e.g., phone number, name, and nickname) to the rule. The user may predefine and redefine the rule anytime and anywhere, as necessary.
The user may input, in the form of voice, text, or gesture, an instruction for executing the rule through the interface provided at the time of defining the rule. The user device 100 may then map the instruction to the defined rule and store the mapping information. In the case of using a voice input, the user device 100 may store the waveform of the voice, convert the voice into text and store the text, or store both the voice waveform and the converted text.
In the state in which the rule is defined, the user may identify, activate, and/or execute the rule when necessary using the predefined instruction (e.g., voice, text, or gesture). For example, as in the exemplary case of FIGS. 5A to 5E, the user may, just before boarding a taxi or while boarding a taxi, input the corresponding instruction as a voice input (e.g., "taxi", "taxi mode", or "I am taking a taxi"). Although the description is directed to the case in which the instruction is a voice instruction input through the microphone 143, the instruction may be input in the form of text or in the form of a gesture through the input unit 120 or the touch screen 130. The voice instruction and the text instruction may be natural-language-based instructions as described above.
When the user intends to make a voice input for executing a rule (or for condition or context recognition), the user may take a preliminary action in advance to notify the user device 100 of the use of the voice input for rule execution. In the exemplary case of executing a rule (or recognizing a condition or context) by voice, it may be necessary to activate the microphone 143. This is because, if the microphone 143 were always in the open state, unintended voice input could cause unnecessary operations or errors. Accordingly, it is preferable to define a specific action (e.g., a widget, gesture, or function key manipulation) for activating the microphone 143 in the voice input mode, so that the user takes this action to turn on the microphone 143 before the voice input. According to an exemplary embodiment of the present invention, the user may speak the instruction after a predetermined gesture, wherein the predetermined gesture is pressing a predetermined function key or selecting the rule execution button of the widget.
The user device 100 may recognize and parse the voice input so as to execute the rule indicated by the voice input. For example, the user device 100 may search the predefined rules (e.g., the voice waveforms mapped to the respective rules) for the voice waveform corresponding to the voice input. The user device 100 may also convert the input voice into text so as to retrieve the text from the predefined rules (e.g., the texts mapped to the respective rules). The user device 100 may search the predefined rules (e.g., the waveforms and texts mapped to the respective rules) for both the voice waveform and the text.
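The text branch of this matching can be sketched as follows; the waveform branch is omitted, and the command strings and matching policy are illustrative assumptions rather than the disclosed implementation:

```python
# Hedged sketch of instruction matching by text: the recognized speech is
# converted to text and searched against the command strings previously
# mapped to each rule (names and policy assumed).

command_map = {
    "taxi": "send location every 5 minutes",
    "taxi mode": "send location every 5 minutes",
    "driving": "driving rules",
}

def match_rule(recognized_text):
    text = recognized_text.strip().lower()
    # exact match first, then substring match over the stored commands
    if text in command_map:
        return command_map[text]
    for command, rule_name in command_map.items():
        if command in text:
            return rule_name
    return None

result = match_rule("I am just riding a taxi")
```

The substring pass allows a natural-language utterance such as "I am just riding a taxi" to reach the same rule as the bare command "taxi".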
The user device 100 may perform condition (context) recognition according to the executed rule. For example, the user device 100 may detect, based on the predefined rule, the fulfillment of condition 1 such as "I am taking a taxi", and check condition 2 such as a scheduled time (e.g., 5 minutes) when condition 1 is fulfilled. In this case, the user device 100 may operate to execute action 1 of checking the location of the user device 100 at every time interval specified by condition 2. The user device 100 may also operate to execute action 2 of sending the location information obtained according to action 1 to the at least one target user device 200.
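The interval-driven action can be sketched as below with a simulated clock in place of a real timer so that the logic remains self-contained; the function name and tuple layout are assumptions of this description:

```python
# Sketch of the interval-driven action of FIG. 7 (names assumed): while the
# "riding a taxi" condition holds, the device samples its position every
# `interval_minutes` and queues one send per target device. A simulated
# timeline replaces a real timer here.

def schedule_position_sends(duration_minutes, interval_minutes, targets):
    sends = []
    for t in range(interval_minutes, duration_minutes + 1, interval_minutes):
        for target in targets:
            sends.append((t, target, "my position"))
    return sends

sends = schedule_position_sends(15, 5, ["father", "brother"])
```

For a 15-minute ride with a 5-minute interval and two targets, this yields six sends, one per target at each of the 5-, 10-, and 15-minute marks.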
Meanwhile, the at least one target user device 200 may, upon receiving the location information from the user device 100, provide feedback on the location information of the user device 100 through a predetermined interface. For example, as shown in FIG. 7, the target user device 200 may display the location information about the user device 100 on a map image. At this time, the map image and the location information may be sent as processed by the user device 100, or the location information sent by the user device 100 may be presented on a map image provided by the target user device 200.
As described above, according to the exemplary embodiment of FIG. 7, the user may notify at least one other designated user of the user's location at a predetermined time interval. The at least one other user may obtain the user's location and movement route without any additional action.
FIG. 8 is a diagram illustrating an example scenario in which the user device provides the CAS according to an exemplary embodiment of the present invention.
FIG. 8 illustrates the following example scenario: the user configures and executes a rule using a composite application, and the user device 100 recognizes the condition specified in the rule and executes the action triggered when the condition is fulfilled. In particular, in FIG. 8, the user may define a rule for sending the user's location (or the location of the user device 100) to at least one target user device in response to an external event. The user device 100 executes the rule according to the user's input and executes the action of sending the location information to the target user device 200 when an event is received from (e.g., occurs at) the target user device 200 specified in the executed rule. FIG. 8 illustrates the exemplary case in which the user device 100 detects an input (event) from an external device (e.g., the target user device 200) and executes a predetermined action.
In various exemplary embodiments of the present invention, the composite application may be an application program that performs the following processing: causing screen modules to provide the end user with the different information received from various sources in the most preferred manner (e.g., as desired by the user), and switching the screen modes and screen configurations designed according to the user's authority and role so as to optimize the user experience.
Referring to FIG. 8, the user may define a rule using the user device 100. For example, the user may activate the rule generation function (or application) and define a rule for sending the location information to at least one target user device 200 when an event (e.g., the reception of a message) occurs. According to an exemplary embodiment of the present invention, the user may generate the rule "If I receive a call from my wife while driving, send my current location information". At this time, the rule may be generated through voice interaction using the microphone 143 or through text interaction using the input unit 120 or the touch screen 130, which will be described later. Preferably, the voice and text interactions are based on natural language, as described above. In the rule specified as shown in FIG. 8, the situation to be detected (e.g., the condition) may be "if a call is received from the wife while driving", and the action to be taken when the condition is fulfilled may be "send my current location information". In addition, a further condition may be configured, such as "if a message including a location-inquiring phrase (such as 'where') is received from the wife while driving".
The user device 100 may provide an interface for specifying the at least one target user device 200 generating the event and for mapping the information about the at least one target user device 200 (e.g., phone number, name, and nickname) to the rule. The user may predefine, input, or redefine the rule anytime and anywhere as necessary, and may input the instruction for executing the rule to the given interface in the form of voice, text, or gesture. The user device 100 may then map the instruction to the defined rule and store the mapping information.
In the state in which the rule is defined, the user may execute the rule when necessary using the predefined instruction (e.g., voice, text, or gesture). For example, as in the exemplary case of FIGS. 5A to 5E, the user may, just before getting into the car or just after getting into the car, input the corresponding instruction as a voice input (e.g., "driving", "driving mode", or "I will drive"). Although the description is directed to the case in which the instruction is a voice instruction input through the microphone 143, the instruction may be input in the form of text through the input unit 120 or the touch screen 130, or input in the form of a gesture. Preferably, the voice instruction and the text instruction are natural-language-based instructions, as described above. When the user intends to make a voice input, the user may take a preliminary action (e.g., turning on the microphone 143) in advance to notify the user device 100 of the use of the voice input for rule execution.
The user device 100 may recognize and parse the voice input so as to execute the rule indicated by the voice input. The user device 100 may also detect the fulfillment of the condition (context). For example, the user device 100 may detect the fulfillment of condition 1 (such as "I will drive") specified in the defined rule, and check condition 3 of receiving a text message containing a specific element (such as "where") in connection with condition 2 (such as the reception of a text message from the specified target device 200). If condition 3 is fulfilled according to the text message received upon the fulfillment of condition 2 (e.g., a message from the target user device 200), the user device 100 may execute action 1 of obtaining the location information about the user device 100. The user device 100 may also execute action 2 of sending the location information obtained according to action 1 to the target user device 200.
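The chained conditions of this scenario can be sketched as a single predicate; the function name, parameters, and the keyword check are assumptions made for illustration:

```python
# Sketch of the condition chain of FIG. 8 (names assumed): condition 1 is the
# driving state, condition 2 is that the message comes from the designated
# sender, condition 3 is that the message contains a location-query word
# such as "where". Only when all three hold is the location sent.

def should_send_location(driving, sender, designated_sender, message):
    if not driving:                       # condition 1 not fulfilled
        return False
    if sender != designated_sender:       # condition 2 not fulfilled
        return False
    return "where" in message.lower()     # condition 3

reply = should_send_location(True, "wife", "wife", "Where are you now?")
```

Each failed condition short-circuits the chain, mirroring the stepwise checking described above.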
Upon receiving the location information sent by the user device 100, the target user device 200 may provide feedback on the location information of the user device 100 through a predetermined interface. For example, as shown in FIG. 8, the target user device 200 may display the location of the user device on a map image.
As described above, according to the exemplary embodiment of FIG. 8, if an event is received from the specified target user, the user may notify the target user of the user's location by a text message without any additional action.
FIG. 9 is a diagram illustrating an example scenario in which the user device provides the CAS according to an exemplary embodiment of the present invention.
FIG. 9 illustrates an example scenario of handling similar rules, in which one of the similar rules is selected by the user or recommended programmatically, or the similar rules are executed simultaneously. In particular, in the example scenario of FIG. 9, the user may define rules for causing the user device 100 to provide alarm feedback or to control a specific function. The user device 100 may execute the rules in response to the user's input and, upon detecting a change in the situation specified in a rule, control the specific function of the user device 100 or perform the alarm action.
Referring to FIG. 9, the user may define rules using the user device 100. For example, the user may, by manipulating the user device 100, activate the rule generation function (or application) and generate the following rules: a rule for outputting an alarm upon detecting an event caused by a change in the environment (hereinafter, the first rule), and a rule for controlling a function of the user device 100 upon detecting an event caused by a change in the environment (the second rule). According to an exemplary embodiment, the first rule, such as "output an alarm when the driving speed is equal to or greater than 80 km/h", and the second rule, such as "increase the volume of the car or the user device 100 when the driving speed is equal to or greater than 60 km/h", may be generated. At this time, as described above, the rules may be defined through voice input via the microphone 143 or through text input via the input unit 120 or the touch screen 130. The voice input and the text input may be made in natural language. The situation to be detected (e.g., the condition) may be "when the change in the environment (e.g., the driving speed) is equal to or greater than a predetermined threshold", and the action to be taken when the condition is fulfilled may be "output an alarm or control the volume".
As described above, the user may predefine the first and second rules, or produce and redefine them in real time anytime and anywhere as needed; the instruction for executing the rules can be input through a given interface in the form of voice, text, or gesture. The user device 100 can then map the defined rules to the instruction and store the mapping as described above. Fig. 9 shows an exemplary case in which a plurality of rules, such as the first rule and the second rule, are mapped to the same instruction.
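The mapping of multiple rules to a single instruction described above can be sketched as a small data structure. This is a minimal illustration under assumed names (Rule, command_map, rules_for), not the patent's actual implementation; the thresholds and actions follow the Fig. 9 example.

```python
# Minimal sketch: two rules (Fig. 9's first and second rule) mapped
# to the same voice/text instruction. All names are illustrative
# assumptions, not the patent's implementation.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    threshold_kmh: int   # condition: driving speed >= threshold
    action: str          # action taken when the condition is met

# Both rules share the instruction "driving", as in Fig. 9.
command_map = {
    "driving": [
        Rule("first rule", 80, "output alarm"),
        Rule("second rule", 60, "increase volume"),
    ]
}

def rules_for(command: str) -> list:
    """Return all rules mapped to a recognized instruction."""
    return command_map.get(command, [])
```

When the recognized instruction matches, every mapped rule becomes a candidate for execution, which is why the device may later have to recommend among several conditions.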
With the rules (the first rule and the second rule) defined, the user can execute them as needed using the defined instruction (e.g., voice, text, or gesture). For example, as in the exemplary cases of Fig. 5A to Fig. 5E, the user may make the instruction (e.g., a voice input such as "driving", "driving mode", or "I will drive") before or while getting into the car. Although the description assumes a voice instruction input through the microphone 143, the instruction may also be input as text through the input unit 120 or touchscreen 130, or input as a gesture. As described above, when the user intends to make a voice input, the user may first take a preparatory action (e.g., turning on the microphone 143) to notify the user device 100 that voice input for rule execution will be used.
The user device 100 can recognize and parse the voice input to execute the rule indicated by it. The user device 100 can also detect whether the condition (situation) is met. For example, the user device 100 detects that condition 1 specified in the defined rule (such as "I will drive") is met, and then checks whether the speed conditions are met (e.g., whether the driving speed equals or exceeds 60 km/h or 80 km/h). If the second condition is met, the user device 100 can take action 1. For example, if the driving speed is equal to or greater than 60 km/h, the user device 100 can take the action of increasing its own volume or the car's volume. In addition, if the driving speed is equal to or greater than 80 km/h, the user device 100 can take the action of outputting an alarm sound according to the first rule.
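The threshold check just described can be sketched as a single function. This is a hedged illustration: the function name is an assumption, and only the two example thresholds from Fig. 9 (60 and 80 km/h) are modeled.

```python
# Sketch of the Fig. 9 speed check: given a measured driving speed,
# decide which example actions fire. The 60/80 km/h thresholds come
# from the example rules; the function name is an assumption.
def actions_for_speed(speed_kmh: float) -> list:
    actions = []
    if speed_kmh >= 60:              # second rule's condition
        actions.append("increase volume")
    if speed_kmh >= 80:              # first rule's condition
        actions.append("output alarm")
    return actions
```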
Meanwhile, when a plurality of conditions (e.g., those of the first rule and the second rule) match the instruction, the user device 100 can recommend conditions for the user to select. As shown in the example scenario of Fig. 9, the user device 100 can display a pop-up window presenting the first condition (>= 80 km/h) and the second condition (>= 60 km/h) to prompt the user to select one.
The preceding exemplary embodiment is an example scenario in which the rule is executed before driving starts. Accordingly, while a predefined rule with a plurality of conditions is running, the user device 100 can monitor the driving speed to determine whether the first and second conditions are met, and either execute the actions corresponding to the two conditions sequentially as each is met, or execute the action corresponding to the most recently met condition.
Meanwhile, the user may also execute a predefined rule while already driving (e.g., while driving at 110 km/h as shown in Fig. 9).
In this case, the user device 100 can recognize and parse the voice input to execute the rule indicated by it, and can also detect whether the condition (situation) is met. For example, the user device 100 detects that condition 1 specified in the defined rule (such as "I will drive") is met, and then checks (e.g., determines) whether the second condition is met (e.g., whether the driving speed is equal to or greater than 60 km/h or 80 km/h). Because the current driving speed of 110 km/h satisfies both the first and second conditions, the user device 100 executes the actions corresponding to both conditions simultaneously. According to an exemplary embodiment of the present invention, when the first and second conditions are both met (e.g., the current driving speed of 110 km/h is equal to or greater than both 60 km/h and 80 km/h), the user device 100 can increase its own volume or the car's volume and output an alarm sound at the same time.
When the instruction matches a rule specifying a plurality of conditions, the user device 100 can recommend conditions for the user to select. As shown in the example scenario of Fig. 9, when the current situation satisfies both the first and second conditions, the user device 100 can display a pop-up window presenting the first condition (>= 80 km/h) and the second condition (>= 60 km/h) to prompt a user selection. Considering that the user is driving, the condition selection is preferably made by voice input. Depending on the user's settings, the condition selection may be made by text input or gesture input as well as voice input.
Figure 10 is a diagram illustrating an example scenario in which a user device provides CAS according to an exemplary embodiment of the present invention.
Figure 10 shows an example scenario in which the user device 100 provides CAS together with an external device (or an object capable of communicating with the user device 100, or an object to which such a communication-capable device is attached). In the exemplary case of Figure 10, a rule may be defined to check (e.g., determine) changes in the external environment at a predetermined interval set by the user and to feed back an alarm according to the check result. The user device 100 can execute the predefined rule in response to a user instruction and take the alarm action when the environment change specified in the rule is detected.
Referring to Figure 10, the user can define a rule using the user device 100. For example, by manipulating the user device 100 to activate the rule-generation function (or application), the user may define a rule that checks at a predetermined interval for an event reflecting a change in the external environment and outputs the alarm triggered by that event. According to an exemplary embodiment of the present invention, the user can produce a rule such as "between 10:00 p.m. and 7:00 a.m., if the medicine has not been taken every 4 hours, display an alarm message and turn on the bathroom light". The rule may be produced by voice input via the microphone 143 or by text input via the input unit 120 or touchscreen 130; both may use natural language. In the case of Figure 10, the situation to be detected (e.g., the condition) may be "if the medicine has not been taken every 4 hours" and "between 10:00 p.m. and 7:00 a.m.", and the action to be taken when the condition is met may be "display an alarm message and turn on the bathroom light".
As described above, the user may predefine the rule, or produce and redefine it in real time anytime and anywhere as needed; the instruction for executing the rule can be input through the given interface in the form of voice, text, or gesture. The user device 100 can then map the defined rule to the instruction and store the mapping as described above.
With the rule defined, the user can execute it as needed using the defined instruction (e.g., voice, text, or gesture). For example, as in the exemplary cases of Fig. 5A to Fig. 5E, the user can make the instruction (e.g., a voice input such as "medicine", "check the medicine bottle", or "medication time") to execute the predefined rule. Although the description assumes a voice instruction input through the microphone 143, the instruction may also be input as text through the input unit 120 or touchscreen 130, or input as a gesture. As described above, when the user intends to make a voice input, the user may first take a preparatory action (e.g., turning on the microphone 143) to notify the user device 100 that voice input for rule execution will be used.
The user device 100 can recognize and parse the voice input to execute the rule indicated by it. The user device 100 can also monitor to detect that the conditions (situations) specified in the executed rule are met. For example, the user device 100 can monitor to detect the situation corresponding to the activated rule. As another example, the user device 100 can detect that condition 1 specified in the defined rule (such as "medication time") is met, then check condition 2 (e.g., the 4-hour interval), and after condition 2, check condition 3 (such as whether the external device (e.g., the medicine bottle) has been moved (e.g., by being shaken)). In various exemplary embodiments of the present invention, the user device 100 can check (e.g., determine) the movement of the external device wirelessly. To support this, the external device (e.g., the medicine bottle) may have a communication module capable of communicating with the user device 100 (e.g., a Bluetooth Low Energy (BLE) tag, an RF tag, an NFC tag, etc.).
For example, if no movement of the external device (e.g., the medicine bottle) is detected for the predetermined duration (e.g., 4 hours), the user device 100 can operate to perform action 1, such as outputting the reminder message "Take your medicine!". The user device 100 can also perform action 2 of controlling a target device (e.g., a lamp, refrigerator, electric kettle, etc.) together with outputting the reminder message as action 1.
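The Figure 10 check can be sketched as a pure decision function. This is a hedged illustration under assumed names and a simplified time representation (hour of day, hours since the tagged bottle last moved); the quiet-hours window and 4-hour interval come from the example rule.

```python
# Sketch of the Figure 10 medicine-bottle rule: if the bottle
# (tracked via a BLE/RF/NFC tag) has not moved within the interval
# and the time falls in the 10 p.m. - 7 a.m. window, fire both
# actions. All names are illustrative assumptions.
def medicine_actions(now_hour: int, hours_since_move: float) -> list:
    in_window = now_hour >= 22 or now_hour < 7   # 10 p.m. to 7 a.m.
    if in_window and hours_since_move >= 4:
        return ["display alarm message", "turn on bathroom light"]
    return []
```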
In exemplary embodiments of the present invention, the target device may be any intelligent device or everyday object that needs to take some action in a particular situation (such as a lamp, refrigerator, or electric kettle). If the target device can communicate with the user device 100 as in Figure 10, the user device 100 may control it directly; otherwise, it may control the target device indirectly via an auxiliary module (e.g., a communication-capable power controller). According to an exemplary embodiment of the present invention, the user device 100 can communicate with a communication-capable power controller so as to supply power to the bathroom light as the target device.
Figure 11 is a diagram illustrating an example scenario in which a user device provides CAS according to an exemplary embodiment of the present invention.
Figure 11 shows an example scenario according to an exemplary embodiment of the present invention in which the owner of the user device inputs the instruction for executing the rule, since a child user who lacks dexterity may be unable to execute the rule.
Figure 11 shows the following example scenario of providing CAS: a rule is configured and executed using a composite application, and the user device 100 detects that the conditions specified in the rule are met and executes the actions corresponding to those conditions. In particular, in the exemplary case of Figure 11, the user may define a rule that sends the position (e.g., the position of the user device) to a specific target user device 200 when an external event is detected. The user device 100 can execute the rule according to user input, and when an event is received from the target user device 200 specified in the running rule, the user device 100 performs the action of sending photograph information and position information. In Figure 11, the user device 100 detects an input (event) from an external device (e.g., the target user device 200) and performs the specified action.
Referring to Figure 11, the user can define a rule using the user device 100. For example, the user (or the user's parents) can, by manipulating the user device 100 to activate the rule-generation function (or application), define a rule that sends photograph information and position information when an event (e.g., a message) is received from at least one target user device 200. According to an exemplary embodiment of the present invention, the user (or the user's parents) can produce a rule such as "if a text is received from mom in protected mode, take a picture and send the photo and my position". The rule may be produced by voice interaction via the microphone 143 or by text interaction via the input unit 120 or touchscreen 130, as will be described later; the voice and text interactions are preferably based on natural language, as described above. In the rule defined as shown in Figure 11, the situation to be detected (e.g., the condition) specified in the rule may be "if a text is received from mom in protected mode", and the action to be taken when the condition is met may be "take a picture and send the photo and my position". In addition, a further condition may be configured, such as "if a message containing a position-inquiry phrase (such as 'where') is received from mom".
The user device 100 can provide an interface for specifying the at least one target user device 200 from which the event originates and for mapping information about the at least one target user device 200 (e.g., phone number, name, and nickname) to the rule. The user can predefine, input, or redefine rules anytime and anywhere as needed, and can input the instruction for executing the rule through the given interface in the form of voice, text, or gesture. The user device 100 can then map the instruction to the defined rule and store the mapping information.
With the rule defined, the user can execute it as needed using the predefined instruction (e.g., voice, text, or gesture). For example, as in the exemplary cases of Fig. 5A to Fig. 5E, the user can input the corresponding instruction on the way home from school (e.g., a voice input such as "protected mode", "after school", or "I will go home from school"). Although the description assumes a voice instruction input through the microphone 143, the instruction may also be input as text through the input unit 120 or touchscreen 130, or input as a gesture. Voice and text instructions are preferably based on natural language, as described above. When the user intends to make a voice input, the user may first take a preparatory action (e.g., turning on the microphone 143) to notify the user device 100 that voice input for rule execution will be used.
The user device 100 can recognize and parse the voice input to execute the rule indicated by it. The user device 100 can also detect whether the conditions (situations) are met.
For example, the user device 100 can detect that condition 1 specified in the defined rule (such as "protected mode") is met, and check condition 2 (such as receiving a text message from the target user device 200 specified according to condition 1) and condition 3 (such as the text message containing a specific phrase, such as "where"). The user device 100 can then perform action 1 of taking a picture automatically and action 2 of obtaining the photograph information. The user device 100 can also perform action 3 of activating the position locating module 117, action 4 of obtaining position information about the user device 100, and action 5 of sending the photograph information and position information obtained through actions 1 to 4 to the target user device 200.
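The three-condition chain and resulting action list can be sketched as follows. This is a hedged illustration: the dict-based event format, the sender label "mom", and the function name are assumptions for the sake of the example.

```python
# Sketch of the Figure 11 condition chain: in protected mode, a
# message from the designated target containing the phrase "where"
# triggers the photo + location actions. The event format and all
# names are illustrative assumptions.
def handle_event(protected_mode: bool, event: dict) -> list:
    if not protected_mode:                      # condition 1
        return []
    if event.get("sender") != "mom":            # condition 2
        return []
    if "where" not in event.get("text", ""):    # condition 3
        return []
    return ["take picture", "activate location module",
            "send photo and position"]
```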
Upon receiving the position information sent by the user device 100, the target user device 200 can present the surrounding photograph information and the position information about the user device 100 through a predetermined interface. For example, the target user device 200 can display the user device's position on a map image, as shown in Figure 11.
Although not shown in Figure 11, photograph information around the position of the user device 100 may be obtained from an external server. For example, if the user device 100 captured the photo inside the user's pocket, the photo may be a dark picture different from the intended scene photo (e.g., a photo of the surroundings). Accordingly, the user device 100 may analyze the captured photo and check (e.g., determine), based on the user device's environmental conditions (e.g., using the illuminance sensor), whether the user device 100 is in a pocket, so as to determine whether the photo was taken under normal conditions. The user device 100 can be configured with a condition and action that operate as follows: if the photo was taken in an abnormal state, obtain a photo of the surroundings of the user device's current position from the external server.
Figure 11 shows an exemplary case in which the owner of the user device 100 inputs the instruction for executing the rule. However, a child user may not be able to execute the rule deftly. In view of this, it is preferable to allow the parents to execute the rule remotely so as to continuously monitor the child's position. For example, it is possible to configure a rule such as "when a text message such as 'where' is received from mom, execute protected mode and notify her of my position". If a message containing "where" arrives from the target user device 200, the user device 100 checks (e.g., determines) whether it is operating in protected mode and, if so, performs the operations described above. Otherwise, if the user device 100 is not in protected mode, it enters protected mode and then performs the operations described above.
As described above, the CAS according to the exemplary embodiment of Figure 11 can, triggered by an event, effectively and promptly notify one user (a parent) of the position and surroundings of another user (a child). The user (parent) can also obtain information about the other user's (child's) position, movement path, and surroundings.
Figure 12 is a diagram illustrating an example scenario in which a user device provides CAS according to an exemplary embodiment of the present invention.
Figure 12 illustrates the following example scenario: the user configures and executes a rule by voice input, whereupon the user device 100 monitors to detect that the condition specified in the rule is met, and performs the corresponding actions when it is. As shown in Figure 12, the user may define a rule that controls specific functions of the user device 100 in a specific environment. The user device 100 executes the rule according to user input, and performs, as the action, the processing of the specific functions of the user device defined in the executed rule.
Referring to Figure 12, the user can define a rule using the user device 100. For example, by manipulating the user device 100 to activate the rule-generation function (or application), the user may define a rule for executing a plurality of functions (applications) in a specific environment. According to an exemplary embodiment of the present invention, the user can produce a rule such as "if I board the subway train, turn on Wi-Fi and run the music app". The rule may be produced by voice input via the microphone 143 or by text input via the input unit 120 or touchscreen 130; both may use natural language. In the case of Figure 12, the situation to be detected (e.g., the condition) may be "if I board the subway train", explicitly specified by the user, and the action to be taken when the condition is met may be "turn on Wi-Fi and run the music app".
The user device 100 can provide an interface for selecting the plurality of actions (e.g., functions and applications) when implementing the corresponding function, so that the user can choose the plurality of actions for the defined rule. For example, if the user inputs "subway" as the rule's condition, the user device 100 can display a list of executable actions associated with the condition "subway" (an action list), so as to receive the user's selection of the actions to perform from the action list (e.g., turning on Wi-Fi and running the music application). As described above, the user may predefine rules, or produce and redefine them in real time anytime and anywhere as needed.
With the rule defined, the user can use the defined instruction (e.g., voice, text, or gesture) to activate and/or execute the rule as needed. For example, the user can make the instruction (e.g., a voice input such as "boarded the subway train", "subway", or "subway mode") before or while boarding the subway train to execute the predefined rule, as in the exemplary cases of Fig. 5A to Fig. 5E. Although the description assumes a voice instruction input through the microphone 143, the instruction may also be input as text through the input unit 120 or touchscreen 130, or input as a gesture. Voice and text instructions are preferably based on natural language, as described above. As described above, when the user intends to make a voice input, the user may first take a preparatory action (e.g., turning on the microphone 143) to notify the user device 100 that voice input for rule execution will be used.
The user device 100 can recognize and parse the voice input to execute the rule indicated by it, and can also detect whether the condition (situation) is met. For example, the user device 100 can detect that the condition specified in the defined rule (such as "boarded the subway train") is met, and perform action 1 of turning on Wi-Fi and action 2 of running the music application. The user device 100 can process the signal exchange for establishing the Wi-Fi connection as action 1 and play music as action 2.
When the rule is executed and the condition specified in it is met (e.g., the user has boarded the subway train), the user device 100 turns on the Wi-Fi function, runs the music application, and feeds back the execution result. For example, as shown in Figure 12, the user device 100 can display an indication of the Wi-Fi-on state and output sound as a result of playing music.
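The "one condition, several actions, feedback of the result" flow of Figure 12 can be sketched generically. This is a hedged illustration: the callable-per-action shape and all names are assumptions, not the patent's implementation.

```python
# Sketch of the Figure 12 flow: when the condition is detected, run
# every action associated with it and collect the results for
# feedback. All names are illustrative assumptions.
def run_rule(condition_met: bool, actions: list) -> list:
    executed = []
    if condition_met:
        for act in actions:
            executed.append(act())   # e.g., enable Wi-Fi, start music
    return executed

# Stand-ins for the two Figure 12 actions.
subway_actions = [lambda: "Wi-Fi on", lambda: "music app running"]
feedback = run_rule(True, subway_actions)
```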
According to the exemplary embodiment of Figure 12, when the condition specified in the currently running rule is met, the user can perform voice input or action input to execute the plurality of actions associated with that condition.
Figure 13 is a diagram illustrating an example scenario in which a user device provides CAS according to an exemplary embodiment of the present invention.
Figure 13 illustrates the following example scenario: the user configures and executes a rule by natural-language voice or text input, whereupon the user device 100 monitors to detect that the condition specified in the rule is met, and performs the corresponding action when it is. For example, when a rule is activated, the user device 100 monitors to detect the situation corresponding to the activated rule's condition; if the situation is detected, the user device performs the action corresponding to the rule's condition. As shown in Figure 13, the user may define an abstract condition, so that the user device 100 executes the rule according to user input and, while processing the rule, controls the specified function and communication with an external device (or an object capable of communicating with the user device 100, or an object to which such a communication-capable device is attached).
Referring to Figure 13, the user can define a rule using the user device 100. For example, by manipulating the user device 100 to activate the rule-generation function (or application), the user may define a rule for executing a plurality of functions (a function of the user device and a function of an external device) specified for a specific environment.
According to an exemplary embodiment of the present invention, the user can produce a rule such as "if dim, adjust the lamp brightness to level 2 and play classical music". At this point, the rule may be produced by voice input via the microphone 143 or by text input via the input unit 120 or touchscreen 130; both may use natural language. In the case of Figure 13, the situation to be detected (e.g., the condition) may be "if dim", and the action to be taken when the condition is met may be "adjust the lamp brightness to level 2 and play classical music".
At this point, the user device 100 can provide an interface through which the user selects, upon the user's request during the rule-definition process, the plurality of actions to perform (e.g., control of a user device function and control of an external device function). For example, if the user inputs "dim" as the condition specified in the rule, the user device 100 can display a list of executable actions associated with the condition "dim", prompting the user to select actions from the action list (e.g., lamp brightness control, music application execution, and classical music playback). The user may predefine rules, or produce and redefine them in real time anytime and anywhere as needed.
With the rule defined, the user can execute it as needed using the defined instruction (e.g., voice, text, or gesture). For example, the user can make the instruction (e.g., a voice input such as "dim", "tired", or "dim mode") to execute the predefined rule, as in the exemplary cases of Fig. 5A to Fig. 5E. Although the description assumes a voice instruction input through the microphone 143, the instruction may also be input as text through the input unit 120 or touchscreen 130, or input as a gesture. Voice and text instructions are preferably based on natural language, as described above. As described above, when the user intends to make a voice input, the user may first take a preparatory action (e.g., turning on the microphone 143 using a function key or a widget's execution button) to notify the user device 100 that voice input for rule execution will be used.
The user device 100 can recognize and parse the voice input to execute the rule indicated by it, and can also detect whether the condition (situation) is met. For example, the user device 100 can detect that the condition specified in the defined rule (such as "dim") is met, and perform action 1 of adjusting the lamp brightness to level 2 by controlling the external device, action 2 of running the music application, and action 3 of playing classical music.
In the exemplary case of Figure 13, if the external device (e.g., the living-room lamp, or any other external device the user device 100 is authorized to communicate with and/or control) can communicate with the user device 100, the external device can be controlled directly by the user device 100; otherwise, it can be controlled indirectly using an auxiliary device (e.g., a communication-capable power controller). According to an exemplary embodiment of the present invention, the user device 100 can communicate with a communication-capable power controller to adjust the brightness of the living-room lamp.
When the instruction for executing the rule is given, the user device 100 can adjust the external lamp brightness and run the music application at the same time, and feed back the execution result. For example, as shown in Figure 13, the user device 100 can display a screen corresponding to adjusting the lamp brightness to level 2 and output sound as a result of playing classical music.
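The combination of an external-device action and internal actions described above can be sketched as follows. This is a hedged illustration: the controller class stands in for a communication-capable power controller, and every name is an assumption rather than the patent's interface.

```python
# Sketch of the Figure 13 flow: the abstract condition "dim"
# triggers an external-device action (lamp to level 2) plus internal
# actions (music app, classical playback). All names are
# illustrative assumptions.
class LampController:
    """Stand-in for a communication-capable power controller."""
    def __init__(self):
        self.level = 0
    def set_level(self, level: int):
        self.level = level

def execute_dim_rule(lamp: LampController) -> list:
    lamp.set_level(2)                   # action 1: lamp to level 2
    return ["music app running",        # action 2
            "playing classical"]        # action 3

living_room_lamp = LampController()
result = execute_dim_rule(living_room_lamp)
```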
As described above, according to the exemplary embodiment of Figure 13, the user can define an abstract condition and configure a rule to perform actions according to the situation. According to an exemplary embodiment of the present invention, the user can define a user condition (such as "dim (or tired)") based on natural speech, and receive feedback after the actions are executed when the condition is met.
In the above, Fig. 6 to Figure 13 concern cases in which the rule and the instruction are defined and configured separately. However, according to various exemplary embodiments of the present invention, the instruction for executing a rule can be extracted from the predefined rule without being defined separately.
Suppose to have defined the rule such as " if I have gone up taxi, my position being sent to father and younger brother (younger sister) every 5 minutes ", as the exemplary cases in Fig. 7.If " if I have gone up taxi " in user's set 100 recognition rules, user's set 100 can extract associated instructions such as " taking now taxi ", " taxi " and " taxi pattern " (or having highly associated related word or order with the word in the defined rule of execution for regular).For example, when having given a definition under regular situation in the situation of not specifying particular command, user only just can executing rule such as the Associate Command of " taking now taxi ", " taxi " and " taxi pattern " by input.
Suppose to have defined the rule such as " if receive the text from wife when driving, sending my current location information ", as the exemplary cases in Fig. 8." driving " in user's set 100 recognition rules also extracts Associate Command " I will drive ", " driving " and " driving model " with executing rule.For example, when having defined in the situation that not specifying particular command under regular situation, user can utilize any one input in Associate Command " I will drive ", " driving " and " driving model " to carry out executing rule.
If inputted any Associate Command, user's set 100 can be searched for predefined rule to find any one rule of the described order of coupling, to carry out respective rule.
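The extract-then-match behavior described above can be sketched with a small lookup. This is a hedged illustration: a real implementation would derive the associated commands from the rule text (e.g., by keyword extraction), whereas here the table is written out by hand, and all names are assumptions.

```python
# Sketch of associated-command matching: keywords taken from the
# rule text are expanded into associated commands, and an input
# command is matched against the predefined rules. The hand-written
# tables are illustrative assumptions, not an extraction algorithm.
associated = {
    "taxi": ["taking a taxi now", "taxi", "taxi mode"],
    "driving": ["I will drive", "driving", "driving mode"],
}

rules = {
    "taxi": "send my position to dad every 5 minutes",
    "driving": "send my current position to my wife",
}

def find_rule(command: str):
    """Return the rule whose associated commands match the input."""
    for keyword, commands in associated.items():
        if command in commands:
            return rules[keyword]
    return None
```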
The operations described with reference to Fig. 7 and Fig. 8 are also applicable to the exemplary cases of Fig. 6 and Fig. 9 to Figure 13. For example, associated commands such as "I will drive", "driving", and "driving mode" can be input in association with the rule defined in the case of Fig. 9; associated commands such as "medicine", "check the medicine bottle", and "medication time" in the case of Figure 10; associated commands such as "protected mode", "after school", and "I will go home" in the case of Figure 11; associated commands such as "boarded the subway train", "subway", or "subway mode" in the case of Figure 12; and associated commands such as "dim" and "tired" in the case of Figure 13. Accordingly, the user device 100 can execute the corresponding rule based on an associated command matching the predefined rule, without requiring an extra command to be defined.
Fig. 14A and Fig. 14B are diagrams illustrating exemplary screens for explaining an operation of temporarily stopping a currently running rule in a user device according to an exemplary embodiment of the present invention.
Fig. 14A shows an exemplary screen of the user device 100 providing a CAS widget 500 with an execution information area 520 listing the currently running rules.
Fig. 14A shows an exemplary case in which the "taxi" and "subway" rules are running. The user can select a specific rule in the execution information area 520 of the widget 500 to stop that rule. For example, in the state of Fig. 14A, the user can select (e.g., make a touch gesture on) the "taxi" rule. The user device 100 then temporarily stops the rule selected by the user (e.g., taxi) among the currently running rules (e.g., taxi and subway), as shown in Fig. 14B.
In this case, the user device 100 can change the mark of the temporarily stopped rule. According to an exemplary embodiment of the present invention, the user device 100 can change the state indication mark of the "taxi" rule from an enabled-state mark to a disabled-state mark, as shown in Fig. 14B. For example, each of the rules listed in the execution information area 520 of the widget 500 is provided with a state indication button, which indicates whether the corresponding rule is enabled or disabled.
According to an exemplary embodiment of the present invention, the widget 500 can indicate the currently running rules and can temporarily stop each rule according to a user input. Accordingly, it is possible to stop, when necessary according to the user's intention, the operation of a rule employing a repetitive action (e.g., an action of periodically checking a condition or sending a text message), thereby improving usability.
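A minimal sketch of this per-rule pause behavior, with hypothetical class names (the embodiment describes only the screens, not the code):

```python
class Rule:
    """One rule listed in the widget's execution information area."""
    def __init__(self, name):
        self.name = name
        self.enabled = True  # state indication: enabled or disabled

class ExecutionInfoArea:
    """Hypothetical model of the widget area listing currently running rules."""
    def __init__(self, rules):
        self.rules = {r.name: r for r in rules}

    def pause(self, name):
        # Temporarily stop the selected rule; its mark switches to disabled.
        self.rules[name].enabled = False

    def enabled_rules(self):
        return [n for n, r in self.rules.items() if r.enabled]

area = ExecutionInfoArea([Rule("taxi"), Rule("subway")])
area.pause("taxi")  # user taps the "taxi" item
print(area.enabled_rules())  # ['subway']
```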
Figs. 15A and 15B are diagrams illustrating exemplary screens for explaining an operation of temporarily stopping a currently running rule in a user device according to an exemplary embodiment of the present invention.
Figs. 15A and 15B show screens of the user device 100 when a CAS application (e.g., a rule list) is executed according to an exemplary embodiment of the present invention.
Referring to Figs. 15A and 15B, the rule "subway" is currently running, and the details specified in association with the selected rule are presented in a drop-down window. For example, the user can select a rule item 1510 to check the details (e.g., conditions and actions) 1520 and 1530 of the corresponding rule, for example, "Wi-Fi setting: on" and "Sound setting: vibration".
In the state of Fig. 15A, the user can select one of the plural action items 1520 and 1530 specified in a rule, in order to temporarily stop its operation. For example, the user can select the "Wi-Fi setting" item 1520 among the actions (e.g., the Wi-Fi setting item 1520 and the sound setting item 1530) of the rule "subway" 1510. Then, the user device 100 stops the action corresponding to the item (e.g., Wi-Fi setting) selected by the user among the actions of the currently running rule (e.g., "subway") 1510.
The user device 100 can change the state indication mark of the stopped action. For example, the user device 100 can change the state indication mark of the action "Wi-Fi setting" from an enabled-state mark to a disabled-state mark, as shown in Fig. 15B. For example, each of the actions 1520 and 1530 of a rule is provided with a state indication button, which indicates whether the corresponding action is enabled or disabled.
According to an exemplary embodiment of the present invention, when a plurality of actions run in association with one rule, it is possible to selectively stop each action listed in the list of currently running actions.
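To illustrate the idea under stated assumptions (the data layout below is hypothetical), selectively disabling one action of a running rule might look like:

```python
# Hypothetical representation of a rule whose actions can be toggled
# individually while the rule itself keeps running.
subway_rule = {
    "name": "subway",
    "actions": {"Wi-Fi setting": True, "Sound setting": True},
}

def toggle_action(rule, action_name, enabled):
    """Enable or disable a single action of a running rule."""
    rule["actions"][action_name] = enabled

toggle_action(subway_rule, "Wi-Fi setting", False)  # user taps the item
active = [a for a, on in subway_rule["actions"].items() if on]
print(active)  # ['Sound setting']
```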
Figs. 16A to 16C are diagrams illustrating exemplary screens for explaining an operation of stopping a currently running rule in a user device according to an exemplary embodiment of the present invention.
Fig. 16A shows an exemplary screen displayed by the user device 100 in a state in which the CAS widget 500 is executed and its execution information area 520 presents information about the currently running rules.
Fig. 16A is an exemplary case in which the rules "taxi" and "subway" are currently running. The user can select (e.g., make a tap touch gesture on) the instruction input area (or rule execution button) 510 of the widget 500. Then, the control unit 170 determines that the selection of the instruction input area 510 corresponds to the start of one of rule execution, termination of a currently running rule, and temporary stopping of a currently running rule. Figs. 16A to 16C are an exemplary case of temporarily stopping a currently running rule. As shown in Fig. 16B, the user device 100 can display a pop-up window 1651 that prompts the user, with a message (e.g., "Please say a command"), to input an instruction (e.g., a voice input) for temporarily stopping a rule, and waits for the user's voice input.
In the state of Fig. 16B, the user can make a voice input (e.g., "temporarily stop taxi") for stopping a target rule (e.g., taxi). Then, the control unit 170 can recognize the user's voice input and stop the rule (e.g., "taxi") corresponding to the voice among the currently running rules (e.g., "taxi" and "subway"). As shown in Fig. 16C, the control unit 170 can change the state indication mark of the rule (e.g., "taxi") selected by the voice input in the execution information area 520 of the widget 500, thereby temporarily stopping the rule.
According to an exemplary embodiment of the present invention, the control unit 170 can change the state indication mark of the "taxi" rule in the execution information area 520 from an enabled-state mark to a disabled-state mark, as shown in Fig. 16C. For example, Fig. 16C shows an exemplary screen of the user device 100 in a state in which the specified rule (e.g., taxi) is temporarily stopped in the execution information area 520 of the widget 500.
According to an exemplary embodiment of the present invention, the currently running rules are presented by the widget 500 and are temporarily stopped according to the user's voice input (e.g., "temporarily stop OOO", where OOO can be a rule or a condition).
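A voice pattern of the form "temporarily stop OOO" could be matched against the running rules roughly as follows; this is a sketch only, and the exact grammar and the speech recognition step are outside the illustration:

```python
import re

def parse_pause_command(utterance, running_rules):
    """Extract the rule named in 'temporarily stop <rule>' if it is running."""
    m = re.match(r"temporarily stop (.+)", utterance)
    if not m:
        return None
    target = m.group(1).strip()
    return target if target in running_rules else None

print(parse_pause_command("temporarily stop taxi", ["taxi", "subway"]))  # taxi
```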
Fig. 17 is a diagram illustrating an exemplary screen with an indication of an executing rule in a user device according to an exemplary embodiment of the present invention.
Referring to Fig. 17, when a rule is executed in response to a user's request, the user device 100 can display a notification item 700 on the screen of the display panel 131. The notification item 700 can be provided in the form of an icon or text representing the rule being executed. For example, if a driving-related rule is executed, the notification item 700 in the form of a car image (or text) can be arranged in a part of the screen. Likewise, if a medicine-related rule is executed, the notification item 700 in the form of a medicine or medicine-bottle image (or text) is arranged in a part of the screen. According to various exemplary embodiments of the present invention, the user is notified of the currently running rule by the notification item 700, and thus can intuitively perceive the currently running rule.
Fig. 17 is an exemplary case in which the notification item 700 is arranged in a part of the display panel 131. However, exemplary embodiments of the present invention are not limited thereto. The notification can be presented in the indicator area for providing various operation state information of the user device 100, as described with reference to Fig. 5E.
Figs. 18A and 18B are diagrams showing exemplary screens with an item notifying of an executing rule in a user device according to an exemplary embodiment of the present invention.
Referring to Figs. 18A and 18B, the user device 100 can arrange a notification item 550 at a part of the indicator area 1850 on the screen of the display panel 131. According to an exemplary embodiment of the present invention, the notification item 550 is presented at the left part of the indicator area 1850 to notify of the currently running rules.
Referring to Figs. 18A and 18B, in various exemplary embodiments of the present invention, the notification item 550 can be presented while at least one rule is currently running.
Referring to the exemplary case of Fig. 18A, as shown in the execution information area 520 of the widget 500, a plurality of rules (e.g., the two rules "taxi" and "subway") are running. Referring to Fig. 18B, one (e.g., "subway") of the rules listed in the execution information area 520 (e.g., the rules "taxi" and "subway") is enabled, and the other rule (e.g., "taxi") is disabled (e.g., temporarily stopped as described in the exemplary embodiments).
According to various exemplary embodiments of the present invention, the user device 100 can use the notification item 550 to notify the user of the existence of at least one currently running rule. However, exemplary embodiments of the present invention are not limited thereto. Instead, a notification can be arranged for each rule. This means that a plurality of notification items matching the number of currently running rules can be presented in the indicator area 1850.
Figs. 19A and 19B are diagrams illustrating exemplary screens with an item notifying of an executing rule in a user device according to an exemplary embodiment of the present invention.
Referring to Figs. 19A and 19B, when a rule is executed in response to a user's request, the user device 100 can display, at the indicator area 1850 on the screen of the display panel 131, the notification item 550 indicating the existence of the currently running rule. According to an exemplary embodiment of the present invention, if a currently running rule is temporarily stopped, the user device 100 can use a corresponding notification item 550 of the indicator area 1850 to notify of the existence of the temporarily stopped rule.
As shown in the exemplary case of Fig. 19A, if the user selects to temporarily stop a currently running rule (e.g., taxi), the user device 100 switches the state of the notification item 550 to a disabled state. The execution information area 520 of the widget 500 indicates that the "taxi" rule is currently not running.
According to an exemplary embodiment of the present invention, as shown in Fig. 19B, the notification item 550 can be arranged in the form of an icon representing the corresponding rule or an application execution icon. For visual representation, the icon can be presented as one of an enabled-state icon and a disabled-state icon.
Figs. 20A to 20C are diagrams illustrating exemplary screens with an item notifying of an executing rule in a user device according to an exemplary embodiment of the present invention.
Referring to Figs. 20A to 20C, the user device 100 can arrange, in the indicator area 1850, the notification item 550 of at least one executing rule. According to an exemplary embodiment of the present invention, the user can select (e.g., tap, make a touch gesture on, etc.) the notification item 550 of the indicator area 1850, or touch and drag down the indicator area 1850, to display a quick panel 2010.
In the state of Fig. 20B, the user can select (e.g., tap, make a touch gesture on, etc.) an information item 2050 of the quick panel 2010 to check details about the corresponding rule, as shown in Fig. 20C. In response to the selection of the information item 2050, the user device 100 can display details corresponding to the currently running rule represented by the information item selected by the user. In various exemplary embodiments of the present invention, while the quick panel 2010 is maintained, the details can be provided in the form of a rule list reached by the screen switching described above, a text pop-up window presented on the quick panel 2010, or a voice output.
According to various exemplary embodiments of the present invention, if the notification item 550 of the indicator area 1850 is touched, the user device 100 can feed back the conditions and actions specified in the rule in the form of voice or text output.
According to various exemplary embodiments, if the notification item 550 of the indicator area 1850 is touched, or touched and dragged, the quick panel 2010 is displayed to show the information item 2050 representing the corresponding rule. If the information item 2050 is touched, the details (e.g., conditions and actions) about the corresponding rule can be fed back in the form of voice output. According to an exemplary embodiment of the present invention, if the information item 2050 is selected, as shown in Fig. 20C, a voice such as "[Home] is running" can be output. Of course, various notifications such as "the ringtone is vibration" and "Wi-Fi is on" can be output in the form of at least one of voice and text.
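Composing the text/voice feedback for a rule's details can be sketched as follows; the setting names mirror the examples in this section, and the function itself is a hypothetical illustration:

```python
def rule_feedback(rule):
    """Build the notification text describing a rule's configured settings."""
    parts = ["[{}] is running".format(rule["name"])]
    parts += ["{} is {}".format(k, v) for k, v in rule["settings"].items()]
    return ", ".join(parts)

home = {"name": "Home", "settings": {"the ringtone": "vibration", "Wi-Fi": "on"}}
print(rule_feedback(home))
# [Home] is running, the ringtone is vibration, Wi-Fi is on
```

The same string could be spoken by a text-to-speech engine or shown in a pop-up window.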
Figs. 21A and 21B are diagrams illustrating exemplary screens associated with an operation of notifying of an executing rule in a user device according to an exemplary embodiment of the present invention.
Figs. 21A and 21B show an exemplary operation of notifying the user of the currently running rules using the widget 500 according to an exemplary embodiment of the present invention.
Referring to Fig. 21A, in a state in which a currently running rule (e.g., home) is displayed in the execution information area 520 of the widget 500, the user can select the corresponding rule. Then, the user device 100 can feed back details about the rule selected in the execution information area 520 in the form of text or voice feedback.
For example, as shown in Fig. 21B, the user device 100 can output, in the form of a text pop-up window 2150, the conditions and actions configured in association with the rule (e.g., home) selected by the user, such as "[Home] is running, the ringtone is vibration, and Wi-Fi is on". The user device 100 can also output the details of the user-configured settings in the form of voice feedback, or in the form of both voice and text feedback.
Figs. 22A to 22C are diagrams illustrating exemplary screens associated with an operation of stopping a currently running rule in a user device according to an exemplary embodiment of the present invention.
Fig. 22A shows an exemplary screen of the user device 100 in a state in which the CAS widget 500 is executed according to an exemplary embodiment of the present invention, wherein the CAS widget 500 presents information about the currently running rule in the execution information area 520 of the widget 500.
Fig. 22A is a case in which the currently running rule is "subway". In this case, information about "subway" is displayed in the execution information area of the widget 500, and the notification item 550 indicating that the rule "subway" is currently running is displayed at the indicator area 1850.
The user can select (e.g., make a tap touch gesture on) the instruction input area (rule execution button) 510. Then, the control unit 170 determines that the selection of the instruction input area 510 corresponds to the start of one of rule execution, termination of a currently running rule, and temporary stopping of a currently running rule. Figs. 22A to 22C are an exemplary case of terminating a currently running rule. As shown in Fig. 22B, the user device 100 can display a pop-up window 2251 that prompts the user, with a guide message (e.g., "Please say a command"), to input an instruction (e.g., a voice input) for terminating a rule, and waits for the user's voice.
In the state of Fig. 22B, the user can make a voice input (e.g., "end subway") for terminating the execution of a rule (e.g., "subway"). Then, the control unit 170 can recognize the voice input and terminate the rule (e.g., "subway") indicated by the voice input. The control unit 170 can change the display of the execution information area 520 of the widget 500 by reflecting the termination of the rule (e.g., "subway"), as shown in Fig. 22C. The control unit 170 can also control such that the notification item 550 disappears from the indicator area 1850 by reflecting the termination of the rule "subway".
According to an exemplary embodiment of the present invention, the control unit 170 can change the display of the state of the rule "subway" in the execution information area 520 from an enabled state to a disabled state, as shown in Fig. 22C. Fig. 22C shows an exemplary screen of the user device 100 in which, according to the user's voice input for terminating the corresponding rule, the rule (e.g., subway) disappears from the execution information area 520 of the widget 500. When the rule "subway" is terminated, the information item related to the rule "subway" is replaced by a notification stating that no rule is currently running. In a case in which one of several currently running rules is terminated, only the information item corresponding to the terminated rule disappears, and the information items corresponding to the other currently running rules are retained in the execution information area 520.
According to various exemplary embodiments of the present invention, a currently running rule can be terminated in response to the user's voice input commanding termination of the rule in the form "stop OOO". In various exemplary embodiments of the present invention, if a specific rule is terminated in response to a user input, at least one setting configured in association with the conditions and actions of the rule can be automatically restored to the state before the execution of the corresponding rule.
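The automatic restoration of settings on rule termination could, under stated assumptions, be implemented by snapshotting the settings before the rule's actions are applied; the class and setting names below are hypothetical:

```python
class SettingsManager:
    """Sketch: remember pre-rule settings and restore them when the
    rule is terminated."""
    def __init__(self, settings):
        self.settings = settings
        self._snapshot = None

    def apply_rule_actions(self, changes):
        self._snapshot = dict(self.settings)  # state before rule execution
        self.settings.update(changes)

    def terminate_rule(self):
        if self._snapshot is not None:
            self.settings = self._snapshot  # restore pre-execution state
            self._snapshot = None

mgr = SettingsManager({"Wi-Fi": "off", "Sound": "ring"})
mgr.apply_rule_actions({"Wi-Fi": "on", "Sound": "vibration"})
mgr.terminate_rule()
print(mgr.settings)  # {'Wi-Fi': 'off', 'Sound': 'ring'}
```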
Figs. 23A and 23B are diagrams showing exemplary screens associated with an operation of stopping a currently running rule in a user device according to an exemplary embodiment of the present invention.
Figs. 23A and 23B show an exemplary operation of providing an end button for terminating a rule represented by an item in the widget 500 or the rule list, and terminating the corresponding rule using the end button.
As shown in Fig. 23A, the widget 500 shows that the rule "subway" is currently running. As shown in Fig. 23A, the notification item 550 is arranged in the indicator area 1850 to indicate the existence of any currently running rule.
The user can terminate the operation of the rule "subway" using the rule end button 525 previously mapped to the currently running rule. For example, the user can select (e.g., by a touch or touch gesture) the information item representing the rule (e.g., "subway") in the execution information area 520 of the widget 500, as shown in Fig. 23A.
The control unit 170 recognizes the user input made on the end button 525 and terminates the rule (e.g., "subway") corresponding to the end button 525. As shown in Fig. 23B, the control unit 170 can control such that the item representing the rule (e.g., "subway") terminated by the user disappears from the execution information area 520. The control unit 170 can also control such that the notification item 550 disappears from the indicator area 1850 due to the termination of the rule.
According to an exemplary embodiment, the control unit 170 can change the execution state of the rule "subway" in the execution information area 520 from an enabled state to a disabled state, as shown in Fig. 23B. Fig. 23B shows an exemplary screen of the user device 100 in a state in which the item indicating the enabled state of the rule (e.g., subway) disappears from the execution information area of the widget in response to the selection of the end button 525. In response to the request for terminating the currently running rule "subway", the information item indicating the enabled state of the rule "subway" is replaced by a notification stating that no rule is currently running. Assuming that one of a plurality of running rules is terminated, only the information item of the terminated rule disappears, and the information items corresponding to the other currently running rules are retained in the execution information area 520.
Figs. 24 and 25 are diagrams illustrating situations in which the CAS service is terminated in a user device according to an exemplary embodiment of the present invention.
Referring to Fig. 24, the user can make a predefined instruction for terminating a rule (e.g., voice, text, selection of a button of the widget, or a gesture) in order to end one of the currently running rules. The rule termination instruction can be a voice instruction or a text instruction. A rule can be terminated by a function key designed in advance, a button designed for each rule in the widget 500, a text-scribing (text-writing) input, or a predefined gesture.
For example, as shown in Fig. 24, the user can make a natural-language-based voice input such as "end OOO (a command corresponding to a rule)", "arrived home", or "I will arrive home". Then, the user device can recognize and parse the voice input and, if the voice input matches a predetermined instruction, end the rule. In various exemplary embodiments of the present invention, a rule termination instruction can be set for each rule. A rule termination instruction can also be set as a universal command for all rules. In the case of such a universal command, it is possible to end all currently running rules with a single termination command.
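Dispatching a per-rule versus a universal termination command might look like the following sketch; the command strings ("end all", "end <rule>") are assumptions, not wording fixed by the embodiment:

```python
def handle_stop_command(utterance, running):
    """'end all' terminates every running rule; 'end <rule>' terminates one."""
    if utterance == "end all":
        running.clear()  # universal command: end all current rules at once
    elif utterance.startswith("end "):
        name = utterance[len("end "):]
        if name in running:
            running.remove(name)
    return running

print(handle_stop_command("end all", ["taxi", "subway"]))  # []
```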
Referring to Fig. 25, even when the user does not input any instruction for terminating a currently running rule, the user device 100 terminates the rule, or prompts the user to terminate the rule, when a specific condition is reached.
For example, as shown in Fig. 25, the user device 100 can monitor the situation of the currently running rule and determine whether the situation reaches a rule termination condition registered using the mapping table. In various exemplary embodiments of the present invention, the rule termination condition can be reached when no situation change is detected within a predetermined duration, or when a user-specified rule termination condition (e.g., a specific termination condition selected and saved through a voice input or a rule termination function key) is satisfied.
When the rule termination condition is reached, the user device 100 can display a pop-up window 900 presenting a message (e.g., "Stop the driving mode?") prompting the user to terminate the corresponding rule, and maintains or terminates the corresponding rule according to the user's interaction with the pop-up window 900.
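One plausible form of the timeout-based termination condition described above (no situation change within a predetermined duration) is sketched below; the function and its parameters are assumptions for illustration:

```python
def termination_condition_reached(last_change_time, now, timeout):
    """True when no situation change was detected within the timeout window,
    at which point the device may prompt the user to terminate the rule."""
    return now - last_change_time >= timeout

# E.g. last situation change at t=100, now t=500, 300-unit timeout elapsed.
print(termination_condition_reached(100, 500, 300))  # True
```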
Figs. 26A and 26B are diagrams illustrating exemplary screens associated with an operation of deleting a rule in a user device according to an exemplary embodiment of the present invention.
Figs. 26A and 26B show exemplary screens of the user device 100 when a CAS application (e.g., a rule list) is executed.
Figs. 26A and 26B are an exemplary case in which three user-defined rules, "home", "taxi", and "subway", are stored in the user device 100. In a state in which the three rules are presented in the rule list, the user can make a delete instruction by a menu manipulation, a voice input, or a gesture input.
For example, in the state of Fig. 26A, the user can request execution of the delete function by manipulating the menu of the user device 100, making a voice or text input of "delete rule", or making a gesture input. Then, the control unit 170 can activate the rule deletion function in response to the user input and display a screen interface on which rules can be deleted, as shown in Fig. 26B. According to an exemplary embodiment of the present invention, the list of predefined rules can be displayed together with a selection item 2600 (e.g., a check box) for each rule, as shown in Fig. 26B.
The user can select at least one rule by checking the corresponding selection item. At this time, the control unit 170 can use a mark on the checked selection item 2600 to indicate that the corresponding rule has been selected. According to an exemplary embodiment, an unchecked selection item 2600 is rendered as an empty box (e.g., a check box without a check mark), and a checked selection item 2600 is rendered as a box with a check mark. The user can delete the at least one selected rule by a menu manipulation or using a delete button.
Although Figs. 26A and 26B are an exemplary case of deleting a rule using the menu and the selection item 2600, the user can also delete a rule by inputting a voice or text instruction. For example, in the state of Figs. 26A and 26B, the user can delete the rule "home" by making a voice or writing input of "delete home".
Fig. 27 is a flowchart illustrating a procedure of generating a rule in a user device according to an exemplary embodiment of the present invention.
Referring to Fig. 27, in step 1001, the control unit 170 (e.g., the rule configuration module 173) receives a user input requesting the CAS. For example, the user can execute a configuration mode for defining a rule through one of the input unit 120, the microphone 143, and the touch screen 130. In the configuration mode, the user can make an input for defining a rule in the form of a voice input through the microphone or a text input through the input unit 120 or the touch screen 130. Then, the control unit 170 can recognize the user input for the CAS in the configuration mode. The user can input a rule, a condition (situation), and an instruction associated with an action by one of the above input modes.
For example, the user can define a rule "if I board a taxi, send a text", specify at least one target user device (e.g., "father") to which the text is to be sent, and define the rule execution instruction "I have boarded a taxi". As described in one of the exemplary embodiments of the present invention, it is possible to define a detailed rule such as "if I board a taxi, send my location to father and my younger brother (or younger sister) every 5 minutes" or "if a text message including 'where' is received from father, send a text message to father".
In step 1003, the control unit 170 recognizes the user input. For example, the control unit 170 can recognize the voice or text input through the corresponding input means in the rule configuration mode. For example, the control unit 170 executes a voice recognition function for recognizing the voice input through the microphone 143, or a text recognition function for recognizing the text input through the input unit 120 or the touch screen 130. The voice and text instructions can preferably be made based on natural language, as described above.
In step 1005, the control unit 170 parses the recognized user input (e.g., the natural-language-based voice or text). For example, the control unit 170 parses the natural-language-based voice instruction to extract and identify the rule, the condition, and the rule execution command intended by the user. The rule, condition (situation), action, and rule execution instruction are input successively according to a guide. The control unit 170 can also search, according to the identified situation, for the items needed to perform an action (e.g., the situation to be detected, the target, and the command) as a result of parsing the user input, in order to check for any missing part. According to an exemplary embodiment of the present invention, the control unit 170 can generate a rule by providing a guide for rule generation and interacting with the user based on the information input according to the guide.
For example, when the user defines a rule for situation recognition without specifying a target, such as "if I board a taxi, send a text", the control unit 170 can identify that the destination (e.g., target) of the text message is missing. When an action of a rule defined without a specified target is to be performed, the control unit 170 can guide the user to specify the target. For example, the control unit 170 can carry out a voice or text interaction with the user until the additional information required for the action is obtained. According to an exemplary embodiment of the present invention, the control unit 170 can display a pop-up text (or use a voice guide) such as "Please say the recipient" or "Where should it be sent?". Then, the user can make a voice input such as "specify later", "boyfriend", or "send to father". The target can be specified by a natural-language-based voice or text input, as described above. The rule can be supplemented with the additional information; according to the above procedure, the control unit 170 can recognize and parse the voice input and match the corresponding entries into one rule.
In step 1007, the control unit 170 manages the rule defined for the CAS based on the parsed user input. For example, the control unit 170 can map the rule, condition (situation), action, and rule execution instruction obtained by parsing the user input to one another, and store the mapping in the mapping table for management.
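The mapping table of step 1007 might associate an execution command with its parsed rule, condition, and actions as in this sketch; the field names and the helper function are assumptions for illustration:

```python
def register_rule(mapping_table, rule_name, condition, actions, command):
    """Store the parsed rule, condition, and actions, keyed by the rule
    execution command, as in the mapping table of step 1007."""
    mapping_table[command] = {
        "rule": rule_name,
        "condition": condition,
        "actions": actions,
    }
    return mapping_table

table = register_rule({}, "taxi", "the user boards a taxi",
                      ["send text to father"], "I have boarded a taxi")
print(table["I have boarded a taxi"]["rule"])  # taxi
```

Looking up a recognized execution command in this table then yields the rule to run together with its condition and actions.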
Although not described in Fig. 27, according to various exemplary embodiments of the present invention, a user wizard can be provided during the rule definition procedure. For example, the user can input an instruction for generating a rule by typing with a virtual keyboard or by writing on the touch screen with a specific input tool (e.g., an electronic pen or the user's finger). Then, the control unit 170 can provide a screen interface capable of recognizing the text input by writing or keyboard typing, and can define the rule based on the recognized text.
The screen interface can provide lists of conditions and actions (or functions or applications) (e.g., a condition list and an action list), and the user can selectively turn the conditions and actions (or functions or applications) on or off. For example, if the user intends to define a rule for turning off the GPS function at the office, the user can input "office" on the screen of the user device 100 with an electronic pen (by a writing input or a character selection input based on a touch keyboard).
According to various exemplary embodiments of the present invention, the user device 100 can provide a screen interface (e.g., a command pad or touch pad) capable of receiving a writing input or a typing input. The user device 100 can recognize the text "office" written or typed with an input tool (an electronic pen or the user's finger) on the screen interface such as the command pad or touch pad. Then, the control unit 170 controls to display a configuration list screen, so that the user can turn on or off at least one of the conditions and actions (functions or applications) associated with "office" on the configuration list screen.
According to an exemplary embodiment of the present invention, a rule can be defined by the voice or text input function. The text input can be performed by writing or typing a natural-language instruction as described above, or by inputting a keyword (e.g., "office") through at least one of the command pad and the touch pad and then turning on or off the actions presented in the list.
Fig. 28 is a flowchart illustrating a procedure of providing the CAS in a user device according to an exemplary embodiment of the present invention.
Fig. 28 shows an operation procedure of the user device 100 executing a rule in response to a user interaction and taking the specified action when the condition (situation) specified in the rule is reached.
Referring to Fig. 28, in step 1101, the control unit 170 receives a user instruction for the CAS. For example, the user can make a voice input through the microphone 143 for executing a rule defined for the CAS. Then, the control unit 170 can receive the user's voice instruction through the microphone 143. According to various exemplary embodiments of the present invention, the instruction for executing a rule can be input in the form of a voice input through the microphone 143, a text input through the input unit 120 or the touch screen 130, or a gesture input.
According to various exemplary embodiments of the present invention, it is possible to designate one of the function keys of the user device 100 as an instruction key (e.g., the rule execution button or a shortcut of the widget 500) for awaiting the user's voice input. In this case, when the instruction key is selected, the control unit 170 waits for the user's voice input for the CAS and attempts voice recognition on the voice input made in the standby state. The standby state can be configured to be maintained after a standby mode command, or only while the instruction key is pressed.
In step 1103, the control unit 170 recognizes the instruction input by the user. For example, the control unit 170 can extract, from the user's voice input, a command ordering the execution of a rule.
In step 1105, control module 170 is resolved the instruction of identification.For example, the user speech (for example, " I have gone up taxi ") that control module 170 can be resolved identification for example, with extracting rule fill order (, " taxi ").
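The parsing of steps 1103 and 1105 amounts to matching the recognized utterance against the execution commands of previously defined rules. A minimal sketch in Python follows; the function name `extract_command` and the sample command set are illustrative assumptions, not part of the patent disclosure:

```python
def extract_command(utterance, known_commands):
    """Return the first registered rule-execution command found in the
    recognized natural-language utterance, or None if nothing matches."""
    text = utterance.lower()
    for command in known_commands:
        if command in text:
            return command
    return None

# The user says "I got in a taxi"; "taxi" is a registered command.
print(extract_command("I got in a taxi", {"taxi", "subway", "driving"}))  # -> taxi
```

A real device would use full speech recognition and natural-language parsing; substring matching merely illustrates the command-extraction step.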
At step 1107, the control unit 170 determines whether any of the predefined rules matches the extracted execution command.
If no rule matches the execution command at step 1107, the control unit 170 controls display of a guide at step 1109. For example, the control unit 170 may display a pop-up guide notifying that the rule the user wants does not exist. The control unit 170 may also display, in the form of a pop-up guide, a query asking whether to define a rule associated with the corresponding command. The control unit 170 may also provide a list of the rules defined by the user.
After displaying the guide, at step 1111, the control unit 170 controls execution of an operation according to the user's request. For example, the control unit 170 may terminate the rule according to the user's selection, define a new rule associated with the command according to the user's selection, or process an operation of selecting a specific rule from the rule list.
If a rule matches the execution command at step 1107, at step 1113 the control unit 170 determines whether the number of rules matching the execution command is greater than 1. For example, the user may define one or more rules matching a single instruction. According to an exemplary embodiment, the user may define a plurality of rules (e.g., a first rule "if an event is received from a designated external device while driving, send my current location", a second rule "if the driving speed is equal to or greater than 100 Km/h, output an alarm", and a third rule "if the driving speed is equal to or greater than 60 Km/h, turn up the radio volume"). As an example, the user may define the first rule with the command "driving 1", the second rule with the command "driving 2", and the third rule with the command "driving 3".
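The command-to-rule relationship described above can be sketched as a registry in which one command may be bound to several rules; step 1113 then reduces to counting the matches. The class and method names below are illustrative assumptions:

```python
from collections import defaultdict

class RuleRegistry:
    """Maps rule-execution commands to the user-defined rules they trigger.
    A single command may be bound to several rules."""
    def __init__(self):
        self._rules = defaultdict(list)

    def define(self, command, rule):
        self._rules[command].append(rule)

    def match(self, command):
        return self._rules.get(command, [])

registry = RuleRegistry()
registry.define("driving", "send current location on external event")
registry.define("driving", "alarm when speed >= 100 km/h")

# Two rules bound to one command: step 1113 would branch to step 1117.
print(len(registry.match("driving")))  # -> 2
```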
If the number of rules matching the execution command is not greater than 1 (e.g., if only one rule matches the execution command), at step 1115 the control unit 170 may control execution of the action according to the single rule. For example, the control unit 170 monitors to detect whether the condition (situation) specified in the corresponding rule is fulfilled, and executes one or more actions when the condition (situation) is fulfilled.
If the number of rules matching the execution command is greater than 1 at step 1113, at step 1117 the control unit 170 controls execution of the operations corresponding to the plural rules matching the execution command. For example, the control unit 170 may monitor the conditions (situations) specified in the plural rules matching the execution command and, when at least one condition is fulfilled, execute the action of each rule whose condition is fulfilled.
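One pass of the condition check performed at steps 1115 and 1117 can be sketched as follows: every active rule whose condition matches the current situation fires its action. The rule representation (dictionaries of condition/action callables) is an illustrative assumption:

```python
def monitor_once(active_rules, situation):
    """Run the action of every active rule whose condition matches the
    current situation; return the list of action results."""
    results = []
    for rule in active_rules:
        if rule["condition"](situation):
            results.append(rule["action"](situation))
    return results

rules = [
    {"condition": lambda s: s["speed"] >= 100, "action": lambda s: "alarm"},
    {"condition": lambda s: s["speed"] >= 60,  "action": lambda s: "volume up"},
]
print(monitor_once(rules, {"speed": 70}))   # only the 60 km/h rule fires
```

In the device this check would run continuously or periodically, as described for step 1205 below.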
Although not depicted in FIG. 28, the control unit 170 may provide an interface for prompting (recommending) one of the plural rules matching a single instruction. The control unit 170 may control execution of plural actions according to plural rules selected by the user (by a selection input made with a voice, text, or gesture instruction), or execution of an individual action according to a single selected rule.
Although not depicted in FIG. 28, the control unit 170 may check, according to the user input, the items a rule requires for executing its actions (e.g., the condition (situation) to be recognized, or the target) so as to detect any missing part. The control unit 170 may check the user interaction in the course of executing the rule.
For example, when the user has defined a situation-recognition rule such as "if I get in a taxi, send a text" without specifying a target, the control unit 170 may recognize that the destination (e.g., target) of the text message is missing. In this case, the control unit 170 may recognize the missing destination while preparing to execute the action of sending the text message upon detecting that the corresponding condition is fulfilled. When the action of a rule defined without a specified target is to be executed, the control unit 170 may guide the user to specify the target. According to an exemplary embodiment of the present invention, the control unit 170 may display a pop-up text such as "To whom do you want to send the text?" (with a voice guide). The user may then specify the target to which the text message is addressed by a natural-language-based voice or text input (such as "send it to my father").
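The missing-item check described above can be sketched as a scan over the fields a rule needs before its action can run; the field names and the `missing_items` helper are illustrative assumptions:

```python
def missing_items(rule, required=("condition", "action", "target")):
    """Return the rule fields that still need user input before the action
    can run (e.g., a 'send text' rule defined without a recipient)."""
    return [field for field in required if not rule.get(field)]

rule = {"condition": "got in a taxi", "action": "send text", "target": None}
gaps = missing_items(rule)
print(gaps)  # -> ['target']; the device would then prompt "To whom...?"
if gaps:
    rule["target"] = "father"   # filled in from the user's follow-up input
```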
FIG. 29 is a flowchart illustrating a procedure of providing the CAS in the user device according to an exemplary embodiment of the present invention.
In particular, FIG. 29 shows an exemplary procedure of executing a rule, checking one or more conditions (situations) specified in the rule so as to execute an action, or releasing the rule from the current operation.
Referring to FIG. 29, at step 1201, the control unit 170 executes a rule according to a user request. Next, at step 1203, the control unit 170 feeds back execution information (e.g., a notification) as the result of the rule execution. For example, the control unit 170 may control display of an item (an icon or text) associated with the executed rule on the screen of the display panel 131, as described above.
At step 1205, the control unit 170 checks the condition specified in the rule. For example, the control unit 170 monitors continuously or periodically to detect whether the condition (situation) specified in the currently running rule is fulfilled.
At step 1207, the control unit 170 determines, based on the check result, whether the action execution condition is fulfilled. For example, the control unit 170 monitors at least one condition (situation) specified in the currently running rule so as to determine, with reference to a mapping table, whether the current situation matches the specific condition for action execution specified in the rule.
If the condition (situation) of the user device 100 matches the action execution condition at step 1207, at step 1209 the control unit 170 controls execution of the action triggered by fulfillment of the condition. For example, the control unit 170 monitors the condition or situation and, if the condition or situation matches the action execution condition, executes the corresponding action. The action may be executing a function (or application) specified in the rule, producing an execution result (e.g., situation information), and outputting the execution result to the user or to another person.
If the condition (situation) of the user device 100 does not match the action execution condition at step 1207, at step 1211 the control unit 170 determines whether the condition (situation) of the user device 100 matches the release condition of the rule. For example, the control unit 170 monitors a certain condition specified in the currently running rule and determines, with reference to a mapping table, whether the current situation matches the release condition specified in the rule. The release condition of the rule may be fulfilled when there is no change in the situation for a user-configured duration, or when a rule release condition designated by the user (e.g., a rule release voice instruction, a function key input, a text input, a gesture input, or persistence of a specific condition) is met.
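The two release paths evaluated at step 1211 (an explicit user release input, or no situation change for the configured duration) can be sketched as a single predicate; the function name and parameters are illustrative assumptions:

```python
import time

def release_due(last_change_ts, timeout_s, explicit_release=False, now=None):
    """True when the rule should be released: either the user issued an
    explicit release (voice/key/text/gesture) or the situation has not
    changed for the user-configured duration."""
    now = time.time() if now is None else now
    return explicit_release or (now - last_change_ts) >= timeout_s

# No situation change for 700 s against a 600 s timeout -> release the rule.
print(release_due(last_change_ts=0, timeout_s=600, now=700))  # -> True
print(release_due(last_change_ts=0, timeout_s=600, now=100))  # -> False
```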
If the condition (situation) of the user device 100 does not match the release condition at step 1211, the control unit 170 returns the procedure to step 1205 to continue checking the condition of the user device.
If the condition (situation) of the user device 100 matches the release condition at step 1211, the control unit 170 releases the currently running rule at step 1213 and feeds back the rule release information at step 1215. For example, the control unit 170 may control output of at least one of audio, video, and tactile feedback. According to an exemplary embodiment of the present invention, the control unit 170 may control output of the rule release feedback in the form of at least one of an audio alert (e.g., voice and sound effect), a pop-up window, and a vibration.
Although not depicted in FIG. 29, when the release condition of the rule is fulfilled, the control unit 170 may present a pop-up message prompting the user to stop the execution of the rule, so as to maintain or release the currently running rule according to the user's selection.
FIG. 30 is a flowchart illustrating a procedure of providing the CAS in the user device according to an exemplary embodiment of the present invention.
Referring to FIG. 30, at step 1301, the control unit 170 monitors to detect an event in the state where a rule is running in response to a user request, and at step 1303 determines whether an event is detected. For example, the control unit 170 may monitor to detect an internal or external event (condition) specified in association with the currently running rule.
An event (condition) may include an internal event occurring inside the user device 100 as a result of a change of an internal condition, and an external event received from outside. According to an exemplary embodiment of the present invention, internal events may include an event in which the moving speed of the user device 100 becomes faster than a predetermined threshold, an event occurring repeatedly at a predetermined time interval, an event responding to the user's voice or text input, an event occurring as a result of a change of operation (e.g., motion and illumination), and the like. External events may include an event of receiving a message from outside (in particular, from a target user device specified in the currently running rule).
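The internal/external event taxonomy above can be sketched as a small data type plus the step 1303 membership check; the `Event` structure and field names are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A monitored event: internal (speed over threshold, periodic timer,
    user input, motion/illumination change) or external (message received
    from a designated target device)."""
    kind: str            # "internal" or "external"
    name: str
    payload: dict = field(default_factory=dict)

def is_rule_event(event, rule):
    """Step 1303: does the detected event match one the running rule
    is listening for?"""
    return event.name in rule["events"]

rule = {"events": {"speed_over_threshold", "message_from_target"}}
e = Event("internal", "speed_over_threshold", {"speed_kmh": 110})
print(is_rule_event(e, rule))  # -> True
```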
If an event is detected at step 1303, at step 1305 the control unit 170 checks the function to be executed as specified in the running rule. For example, the control unit 170 may check the function (or application) specified in the currently running rule as the action to be executed when the corresponding event (condition) is fulfilled.
At step 1307, the control unit 170 activates the checked function, and at step 1309 executes the action through the activated function. For example, if the action corresponding to the event (condition) specified in the running rule is to control the volume of the user device 100, the control unit 170 controls activation of the volume control function. If the action corresponding to the event (condition) specified in the running rule is to send the current location of the user device 100, the control unit 170 activates a location information transmission function (or application) such as a GPS function (navigation function) and a message transmission function, so that the user device 100 sends location information about the user device 100 to a target device. For example, the control unit 170 may execute one function (application) or at least two interoperating functions (applications) according to the type of the action to be executed.
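The check-activate-execute sequence of steps 1305 to 1309 can be sketched as a dispatch table from action names to device functions; the table and function names below are hypothetical stand-ins for the device's volume control and location/messaging functions:

```python
def control_volume(level):
    return f"volume set to {level}"

def send_location(target):
    # On the device this would chain a GPS lookup with message transmission.
    return f"location sent to {target}"

# Hypothetical mapping from rule action names to device functions.
ACTIONS = {"volume": control_volume, "send_location": send_location}

def execute_action(name, *args):
    """Look up the function specified by the rule's action and run it."""
    return ACTIONS[name](*args)

print(execute_action("send_location", "father"))  # -> location sent to father
```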
At step 1311, the control unit 170 controls feedback of information as the result of the action execution (e.g., information produced from the execution of the action). For example, the control unit 170 may control display of a screen interface presenting the volume level while adjusting the volume in response to the user's manipulation. The control unit 170 may also control output of a screen interface and/or a sound (audio) notifying that the location information of the user device 100 has been sent to the designated target device.
If no event (condition) is detected at step 1303, at step 1315 the control unit 170 determines whether the current situation matches the release condition of the rule. For example, the control unit 170 may monitor the condition specified in the currently running rule and determine, with reference to a mapping table, whether the current situation matches the release condition specified in the rule.
If the current situation does not match the release condition at step 1315, the control unit 170 returns the procedure to step 1301 to continue monitoring for events.
If the current situation matches the release condition at step 1315, the control unit 170 releases the currently running rule at step 1317 and at step 1319 feeds back the release information as the result of the rule release (e.g., information produced from the rule release). For example, if the release condition of the rule is fulfilled, the control unit 170 may display a pop-up message prompting the user to stop the currently running rule, so as to maintain or release the currently running rule according to the user's selection. The control unit 170 may notify the user of the release of the currently running rule in the form of at least one of audio, video, and tactile feedback.
FIGS. 31A to 31N are diagrams illustrating exemplary screens associated with the operation of generating a rule in the user device according to an exemplary embodiment of the present invention.
FIGS. 31A to 31N show the procedure of defining a rule in such a way that the control unit 170 (e.g., the rule configuration module 173) recognizes a natural-language-based text input written by the user and defines a rule (condition and action) in response to the recognized text input.
FIGS. 31A to 31N may correspond to the rule generation procedure described above with reference to FIGS. 3A to 3K or FIGS. 4A to 4J. For example, FIG. 31A may correspond to FIG. 4A, FIG. 31C to FIG. 4B, FIG. 31E to FIG. 4C, FIG. 31G to FIG. 4D, FIG. 31I to FIG. 4E, FIG. 31K to FIG. 4F, FIG. 31L to FIG. 4I, and FIG. 31N to FIG. 4J. In the following description with reference to FIGS. 31A to 31N, operations identical or corresponding to those described with reference to FIGS. 4A to 4J are omitted or mentioned briefly.
For example, FIGS. 31A to 31N show the operation of generating a rule correspondingly to the operations of FIGS. 4A to 4J according to an exemplary embodiment of the present invention. However, unlike the voice-input-based rule generation procedure of FIGS. 4A to 4J, FIGS. 31A to 31N show the operation of generating a rule based on the user's written text input rather than a voice input.
In the exemplary case of FIGS. 31A to 31N, the user input for generating a rule is text-based, and therefore the user device 100 may provide a "text input" rather than a "voice input" pop-up message for the user interaction. According to an exemplary embodiment of the present invention, the pop-up window 451 of FIG. 4A uses a pictogram of a talking person and text such as "Please say 000" to prompt the user to make a voice input. In FIGS. 31A, 31E, 31G, and 31L, however, the pop-up windows 3151, 3155, 3157, and 3167 use a pictogram of a writing pen and text such as "Please write 000" to prompt the user to make a text input.
In the exemplary case of FIGS. 31A to 31N, the user input is made by writing text rather than speaking it. As shown in FIGS. 31B, 31D, 31F, 31H, 31J, and 31M, the user device 100 may provide a writing window in response to a request for information so as to receive the user's response (e.g., text input). According to an exemplary embodiment of the present invention, the user device 100 may display a guide (query) such as "Please write a command" and then display the writing window 3100 to receive the text written by the user (e.g., taxi).
According to various exemplary embodiments of the present invention, the rule generation procedure may be performed through interaction between the user device 100 and the user, and the user may configure a rule by making natural-language-based text inputs of the action and condition of the rule according to the guidance of the user device 100. For example, the user device 100 may receive the natural-language-based text inputs made by the user for configuring the condition and action constituting a rule, and configure the corresponding rule according to the user instructions input through the steps of FIGS. 31A to 31N.
FIGS. 32A to 32E are diagrams illustrating exemplary screens associated with the operation of executing a rule in the user device according to an exemplary embodiment of the present invention.
FIGS. 32A to 32E show exemplary screens of the user device 100 displayed when the rule execution module 175 of the control unit 170 receives the user's natural-language-based written text input and executes a rule in response to the input, the condition check module of the control unit 170 monitors to detect fulfillment of the condition specified in the rule, and the action execution module 179 executes at least one action corresponding to the fulfilled condition.
FIGS. 32A to 32E may correspond to the rule execution procedure described above with reference to FIGS. 5A to 5E. For example, FIG. 32A may correspond to FIG. 5A, FIG. 32B to FIG. 5B, FIG. 32D to FIG. 5C, and FIG. 32E to FIG. 5D. In the following description with reference to FIGS. 32A to 32E, operations identical or corresponding to those described with reference to FIGS. 5A to 5E are omitted or mentioned briefly.
For example, FIGS. 32A to 32E show the operation of executing a rule correspondingly to the operations of FIGS. 5A to 5E according to an exemplary embodiment of the present invention. However, unlike the voice-input-based rule execution procedure of FIGS. 5A to 5E, FIGS. 32A to 32E show the operation of executing a rule based on the user's written text input rather than a voice input.
In the exemplary case of FIGS. 32A to 32E, the user input for executing a rule is text-based, and therefore the user device 100 may provide a "text input" rather than a "voice input" pop-up message for the user interaction. According to an exemplary embodiment of the present invention, the pop-up window 551 of FIG. 5B uses a pictogram of a talking person and text such as "Please say 000" to prompt the user to make a voice input. In FIG. 32B, however, the pop-up window 3251 uses a pictogram of a writing pen and text such as "Please write 000" to prompt the user to make a text input.
Referring to FIGS. 32A to 32E, the user input is made by writing text rather than speaking it. As shown in FIG. 32C, the user device 100 may provide a writing window in response to a request for information so as to receive the user's response (e.g., text input). According to an exemplary embodiment of the present invention, the user device 100 may display a guide (query) such as "Please write a command" and then display the writing window 3200 to receive the text written by the user (e.g., subway).
According to various exemplary embodiments of the present invention, the rule execution procedure may be performed through interaction between the user device 100 and the user, and the user may execute a rule by making a natural-language-based text input according to the guidance of the user device 100. For example, the user device 100 may receive the natural-language-based text input made by the user for executing a rule according to the user instructions input through the steps of FIGS. 32A to 32E.
According to various exemplary embodiments of the present invention, the user may execute rules using the widget 500. According to an exemplary embodiment of the present invention, the user may select the instruction input region (or rule execution button) 510 of the widget 500 to input the text configuring a rule (or a command). The control unit 170 then feeds back to the user, in the form of a text pop-up or an audio announcement (e.g., Text-To-Speech (TTS)), information about the action to be executed together with a notification of the start of the rule. The control unit 170 adds the executed rule to the execution information region 520 and displays, in the indicator region, a notification item notifying of the existence of the currently running rule.
As shown in FIG. 32A, the text-input-based widget 500 may be provided separately from the voice-input-based widget 500. According to an exemplary embodiment, the widget 500 of FIG. 5A may be configured to use, in the instruction input region (or rule execution button) 510, a pictogram of a talking person to indicate that the speech recognition function is running. The widget 500 of FIG. 32A may be configured to use, in the instruction input region (or rule execution button) 510, a pictogram (e.g., icon) of a writing pen to indicate that the text recognition function is running.
FIGS. 33A to 33D are diagrams illustrating exemplary screens associated with the operation of pausing a currently running rule in the user device according to an exemplary embodiment of the present invention.
Referring to FIGS. 33A to 33D, the control unit 170 recognizes a natural-language-based written text input made by the user and temporarily stops the currently running rule in response to the text input.
FIGS. 33A to 33D may correspond to the rule pause procedure described above with reference to FIGS. 16A to 16C. For example, FIG. 33A may correspond to FIG. 16A, FIG. 33B to FIG. 16B, and FIG. 33D to FIG. 16C. In the following description with reference to FIGS. 33A to 33D, operations identical or corresponding to those described with reference to FIGS. 16A to 16C are omitted or mentioned briefly.
For example, FIGS. 33A to 33D show the operation of pausing a rule correspondingly to the operations of FIGS. 16A to 16C according to an exemplary embodiment of the present invention. However, unlike the voice-input-based procedure of FIGS. 16A to 16C, FIGS. 33A to 33D show the operation of pausing a rule based on the user's written text input rather than a voice input.
In the exemplary case of FIGS. 33A to 33D, the user input for pausing a rule is text-based, and therefore the user device 100 may provide a "text input" rather than a "voice input" pop-up message for the user interaction. According to an exemplary embodiment of the present invention, the pop-up window 3351 of FIG. 33B uses a pictogram (e.g., icon) of a writing pen and text such as "Please write 000" to prompt the user to make a text input.
In the exemplary case of FIGS. 33A to 33D, the user input is made by writing text rather than speaking it. As shown in FIG. 33C, the user device 100 may provide the writing window 3300 in response to a request for information so as to receive the user's response (e.g., written input). According to an exemplary embodiment of the present invention, the user device 100 may display a guide (query) such as "Please write a command" and then display the writing window 3300 to receive the text written by the user (e.g., pause taxi).
According to various exemplary embodiments of the present invention, the rule pause procedure may be performed through interaction between the user device 100 and the user, and the user may pause a rule by making a natural-language-based text input according to the guidance of the user device 100. For example, the user device 100 may receive the natural-language-based text input made by the user for pausing a rule according to the user instructions input through the steps of FIGS. 33A to 33D.
According to various exemplary embodiments of the present invention, the user may execute rules using the widget 500. According to an exemplary embodiment of the present invention, the user may select the instruction input region (or instruction execution button) 510 of the widget 500 to input a text instruction for pausing a currently running rule. In this way, the user can temporarily stop the operation of at least one running rule. For example, the control unit 170 may pause the corresponding rule in response to the user's text input (such as "pause 000").
FIGS. 34A to 34D are diagrams illustrating exemplary screens associated with the operation of terminating a currently running rule in the user device according to an exemplary embodiment of the present invention.
FIGS. 34A to 34D may correspond to the rule termination procedure described above with reference to FIGS. 22A to 22C. For example, FIG. 34A may correspond to FIG. 22A, FIG. 34B to FIG. 22B, and FIG. 34D to FIG. 22C. In the following description with reference to FIGS. 34A to 34D, operations identical or corresponding to those described with reference to FIGS. 22A to 22C are omitted or mentioned briefly.
For example, FIGS. 34A to 34D show the operation of terminating a rule correspondingly to the operations of FIGS. 22A to 22C according to an exemplary embodiment of the present invention. However, unlike the voice-input-based rule termination procedure of FIGS. 22A to 22C, FIGS. 34A to 34D show the operation of terminating a rule based on the user's written text input rather than a voice input.
In the exemplary case of FIGS. 34A to 34D, the user input for terminating a rule is text-based, and therefore the user device 100 may provide a "text input" rather than a "voice input" pop-up message for the user interaction. According to an exemplary embodiment of the present invention, the pop-up window 3451 of FIG. 34B uses a pictogram of a writing pen and text such as "Please write 000" to prompt the user to make a text input.
In the exemplary case of FIGS. 34A to 34D, the user input is made by writing text rather than speaking it. As shown in FIG. 34C, the user device 100 may provide the writing window 3400 in response to a request for information so as to receive the user's response (e.g., text input). According to an exemplary embodiment of the present invention, the user device 100 may display a guide (query) such as "Please write a command" and then display the writing window 3400 to receive the text written by the user (e.g., stop subway), as shown in FIG. 34C.
According to various exemplary embodiments of the present invention, the rule termination procedure may be performed through interaction between the user device 100 and the user, and the user may terminate a rule by making a natural-language-based text input according to the guidance of the user device 100. For example, the user device 100 may receive the natural-language-based text input made by the user for terminating a rule according to the user instructions input through the steps of FIGS. 34A to 34D.
According to various exemplary embodiments of the present invention, the user device may execute rules using the widget 500. According to an exemplary embodiment of the present invention, the user may select the instruction input region (or rule execution button) 510 of the widget 500 to input a text instruction for terminating a currently running rule. In addition, according to various exemplary embodiments of the present invention, the control unit may perform control such that, when the corresponding rule is terminated according to the user's text input, the user device 100 automatically restores its configuration to the state before the execution of the corresponding rule.
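The restore-on-termination behavior described above can be sketched as snapshotting the device configuration when a rule starts and rolling back when the rule terminates; the `RuleSession` class and the sample settings are illustrative assumptions:

```python
import copy

class RuleSession:
    """Snapshot the device configuration at rule start and restore it
    when the rule is terminated."""
    def __init__(self, config):
        self.config = config
        self._snapshot = None

    def start_rule(self):
        self._snapshot = copy.deepcopy(self.config)
        self.config["volume"] = 10          # action taken while the rule runs

    def terminate_rule(self):
        self.config = self._snapshot        # roll back to the pre-rule state

session = RuleSession({"volume": 3, "wifi": True})
session.start_rule()
session.terminate_rule()
print(session.config["volume"])  # -> 3, the pre-execution setting
```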
According to the CAS provision method and apparatus of exemplary embodiments of the present invention, the rules (or conditions) of the CAS can be configured in various ways according to the user's definitions. The user device 100 recognizes the conditions specified in at least one rule defined by the user and, when a corresponding condition is fulfilled, executes at least one action. According to the CAS provision method and apparatus of exemplary embodiments of the present invention, internal and/or external situation information can be fed back to the user as the result of the action execution.
The CAS provision method and apparatus of exemplary embodiments of the present invention allow the user device 100 to define, by natural-language-based text or voice input, the rules (or situations), the instructions for executing the corresponding rules, and the actions to be executed according to the rules. Accordingly, the CAS provision method and apparatus of the present invention allow the user to define various rules having user-specified conditions and actions, in addition to the rules defined at the manufacturing stage of the user device 100. The CAS provision method and apparatus according to exemplary embodiments of the present invention can define rules and instructions through natural-language-based text or voice inputs, and execute rules in response to detection of a natural-language-based text or voice instruction or of a movement of the user device 100. Accordingly, exemplary embodiments of the present invention can extend the scope of the CAS and provide user-specific usability.
According to the CAS provision method and apparatus of exemplary embodiments of the present invention, a plurality of conditions can be configured per rule, and a multi-context-aware scenario corresponding to the plurality of conditions can be supported. Accordingly, the CAS provision method and apparatus of exemplary embodiments of the present invention can configure various conditions according to the user's preference, and can execute a plurality of actions simultaneously according to the multi-context-aware scenario. In comparison with the statistics-based context-aware technologies of the related art, the CAS provision method and apparatus of exemplary embodiments of the present invention can improve the context-awareness functionality by adopting a recommendation function along with context awareness, thereby improving situation recognition accuracy.
The CAS provision method and apparatus of exemplary embodiments of the present invention can optimize the CAS environment, thereby improving user convenience as well as device usability, utility, and competitiveness. The CAS provision method and apparatus of exemplary embodiments of the present invention are applicable to various types of devices capable of providing the CAS, including cellular communication terminals, smartphones, tablet PCs, PDAs, and the like.
According to exemplary embodiments of the present invention, a module may be implemented as any one of, or any combination of, software, firmware, and hardware. Some or all of the modules may be implemented as a single entity capable of performing the functions of the individual modules equally. According to various exemplary embodiments of the present invention, a plurality of operations may be executed sequentially, repeatedly, or in parallel. Some of the operations may be omitted or substituted by other operations.
The above-described exemplary embodiments of the present invention can be implemented in the form of computer-executable program commands and stored in a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium may store the program commands, data files, and data structures individually or in combination. The program commands recorded in the storage medium may be designed and implemented for the various exemplary embodiments of the present invention, or may be those known to and used by those skilled in the computer software field.
Non-transitory computer-readable storage media include magnetic media such as floppy disks and magnetic tapes, optical media including compact disc (CD) ROMs and digital video disc (DVD) ROMs, magneto-optical media such as floptical disks, and hardware devices designed for storing and executing program commands, such as ROMs, RAMs, and flash memories. The program commands include language code executable by a computer using an interpreter as well as machine language code created by a compiler. The aforementioned hardware devices can be implemented with one or more software modules for executing the operations of the various exemplary embodiments of the present invention.
While the present invention has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the appended claims and their equivalents.
Claims (23)
1. A method for providing a context-aware service of a user device, the method comprising:
receiving a user input, the user input being at least one of a text input and a speech input;
recognizing, based on the received user input, a rule comprising a condition and an action corresponding to the condition;
activating the rule to detect a situation corresponding to the condition of the rule; and
executing, when the situation is detected, the action corresponding to the condition.
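As an informal illustration of the four steps of claim 1 (the "if … then …" phrasing and the function names are assumptions made for this sketch, not the claimed recognition method):

```python
def recognize(user_input):
    """Toy recognizer for an assumed 'if <condition> then <action>' phrasing."""
    cond, _, act = user_input.partition(" then ")
    cond = cond.strip()
    if cond.startswith("if "):
        cond = cond[len("if "):]
    return {"condition": cond, "action": act.strip()}

def provide_cas(user_input, observed_situations):
    """Claim 1 as a pipeline: receive -> recognize -> activate -> execute."""
    rule = recognize(user_input)             # recognize condition and action
    for situation in observed_situations:    # rule is active: watch for the condition
        if situation == rule["condition"]:
            return "executed: " + rule["action"]   # execute on detection
    return None

print(provide_cas("if home then turn on wifi", ["office", "home"]))
# executed: turn on wifi
```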
2. The method of claim 1, wherein the user input corresponds to at least one of a natural-language-based text input and a natural-language-based speech input.
3. The method of claim 1, wherein activating the rule comprises activating the rule in response to a command corresponding to one of a natural-language-based text input and a natural-language-based speech input that includes at least one of a word and a phrase included in the condition.
4. The method of claim 1, wherein the situation comprises a change of the user device,
wherein the change of the user device corresponds to at least one of a change in posture and a change in position of the user device.
5. The method of claim 1, wherein the situation comprises reception of at least one of an incoming message and an incoming call sound.
6. The method of claim 1, wherein the action comprises an operation executed by the user device when the situation is detected.
7. The method of claim 6, wherein the action comprises at least one of feeding back situation information corresponding to the rule via the user device by controlling an internal component of the user device, and feeding back the situation information via an external device by controlling the external device.
8. The method of claim 1, wherein recognizing the rule comprises recognizing one or more rules, and
wherein activating the rule comprises activating the one or more rules in response to a command.
9. The method of claim 1, wherein the rule is configured, through interaction between the user and the user device receiving the user input, in one of a single structure comprising one action and a multi-structure comprising a plurality of actions.
10. The method of claim 1, wherein receiving the user input comprises providing at least one of a user interface allowing the user to specify at least one of the actions supported by the user device and a guide about the actions supported by the user device.
11. The method of claim 1, wherein recognizing the rule comprises:
feeding back, when a first user input is received, a query prompting the user to provide supplementary information required for the rule;
receiving a second user input in response to the query; and
skipping the request for the supplementary information when the user input does not require supplementary information.
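The prompting flow of claim 11 can be sketched as follows (the `ask` callback, the keyword check, and the field names are assumptions for illustration; the disclosure does not specify how the need for supplementary information is decided):

```python
def configure_rule(first_input, ask):
    """Prompt for supplementary information only when the rule needs it.

    `ask` stands in for the query fed back to the user (an assumption);
    whether supplementary information is needed is decided here by a toy
    keyword check, not by the disclosed recognition method.
    """
    rule = {"condition": first_input, "recipient": None}
    if "forward" in first_input:               # rule needs a recipient
        rule["recipient"] = ask("Whom should messages be forwarded to?")
    # otherwise the request for supplementary information is skipped
    return rule

print(configure_rule("forward messages while driving", lambda q: "Alice")["recipient"])  # Alice
print(configure_rule("mute ringtone at night", lambda q: "unused")["recipient"])         # None
```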
12. A method for providing a context-aware service of a user device, the method comprising:
providing a user interface for configuring a rule;
receiving, through the user interface, at least one of a natural-language-based speech input and a natural-language-based text input;
configuring the rule with a condition and an action recognized from the user input;
activating the rule to detect an event corresponding to the condition of the rule; and
executing, when the event is detected, the action corresponding to the condition.
13. The method of claim 12, further comprising receiving a command for activating the rule.
14. The method of claim 13, wherein at least one of the condition and the command corresponds to at least one of a natural-language-based speech, a natural-language-based text, a motion detection event of the user device, reception of an incoming sound, and reception of an incoming message.
15. The method of claim 13, wherein activating the rule comprises:
determining, when the command is received, whether a rule matching the command exists;
outputting a guide when no rule matches the command;
determining, when a rule matching the command exists, whether the number of matching rules is greater than 1; and
activating at least one rule according to the number of matching rules.
16. The method of claim 15, wherein activating the rule further comprises:
prompting, when the number of matching rules is greater than 1, a selection of one of the rules.
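The command-matching branches of claims 15 and 16 might look as follows (substring matching and the `choose` callback are toy assumptions standing in for the claimed matching and selection prompt):

```python
def activate_by_command(command, rules, choose):
    """Match a command against rule names (a toy substring match).

    no match      -> output a guide
    one match     -> activate it
    several match -> prompt a selection via `choose`
    """
    matches = [r for r in rules if command in r]
    if not matches:
        return "guide: no matching rule; available: " + ", ".join(rules)
    if len(matches) > 1:
        return "activated: " + choose(matches)   # user picks one of the rules
    return "activated: " + matches[0]

rules = ["home arrival", "home leaving", "driving"]
print(activate_by_command("office", rules, min))   # guide: ...
print(activate_by_command("driving", rules, min))  # activated: driving
print(activate_by_command("home", rules, min))     # activated: home arrival
```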
17. The method of claim 12, wherein activating the rule further comprises:
releasing the activated rule when a release condition of the rule is satisfied in a state where the rule is activated; and
feeding back release information notifying of the release of the rule.
18. The method of claim 17, wherein feeding back the release information comprises at least one of an audio feedback, a video feedback, and a tactile feedback.
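The release-and-feedback behavior of claims 17 and 18 can be sketched as follows (the release condition and the string-based feedback channels are assumptions for illustration):

```python
class ActiveRule:
    """An activated rule that is released when its release condition is met."""

    def __init__(self, name, release_condition):
        self.name = name
        self.release_condition = release_condition
        self.active = True

    def on_event(self, event):
        """Release the rule and return feedback messages notifying the release."""
        if self.active and event == self.release_condition:
            self.active = False
            return ["audio: rule '%s' released" % self.name,
                    "video: banner for '%s'" % self.name]
        return []

rule = ActiveRule("driving", release_condition="arrived home")
print(rule.on_event("speeding"))       # [] (rule stays active)
print(rule.on_event("arrived home"))   # release feedback messages
print(rule.active)                     # False
```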
19. A user device comprising:
a storage unit which stores a rule comprising a condition and an action corresponding to the condition;
a display unit which displays a user interface for receiving a user input in a state where the rule is activated, execution information, and an execution result of the action; and
a control unit which controls recognizing, based on the user input, the rule comprising the condition and the action, controls activating the rule to detect a situation corresponding to the condition of the rule, and controls executing, when the situation is detected, the action corresponding to the condition, wherein the user input is at least one of a text input and a speech input.
20. The user device of claim 19, wherein the control unit comprises:
a rule configuration module which receives the user input for configuring the rule by recognizing at least one condition and at least one action from at least one of a natural-language-based speech input and a natural-language-based text input;
a rule execution module which receives a command for activating the rule and executes the rule corresponding to the command, the command being one of a natural-language-based speech, a natural-language-based text, a motion detection event of the user device, reception of an incoming sound, and reception of an incoming message;
a condition check module which detects at least one situation corresponding to the at least one condition specified in the rule; and
an action execution module which executes, when the at least one situation is detected, at least one action corresponding to the detected situation.
21. The user device of claim 19, wherein the control unit controls executing at least one action corresponding to the condition when an event detected in the user device satisfies the condition specified in the activated rule, and controls feeding back an execution result.
22. The user device of claim 19, wherein the control unit releases the activated rule when a release condition of the rule is satisfied in a state where at least one rule is activated, and controls feeding back release information notifying of the release of the at least one rule.
23. A user device comprising:
a rule configuration module, implemented by a computer, for receiving a user input and for recognizing, based on the user input, a rule comprising a condition and an action corresponding to the condition, the user input being at least one of a natural-language-based speech input and a natural-language-based text input;
a rule execution module, implemented by a computer, for receiving a command for activating the rule and for executing the rule corresponding to the command, wherein the command is one of a natural-language-based speech, a natural-language-based text, a motion detection event of the user device, reception of an incoming sound, and reception of an incoming message;
a condition check module, implemented by a computer, for detecting a situation corresponding to the condition specified in the rule; and
an action execution module, implemented by a computer, for executing, when the situation is detected, the action corresponding to the condition.
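Hypothetically, the four modules of claim 23 could be wired together as follows (class names, the "if … then …" toy grammar, and the substring command match are assumptions for illustration, not the claimed implementation):

```python
class RuleConfigModule:
    """Recognizes a rule (condition + action) from a natural-language input."""
    def recognize(self, text):
        cond, _, act = text.partition(" then ")
        cond = cond.strip()
        if cond.startswith("if "):
            cond = cond[len("if "):]
        return {"condition": cond, "action": act.strip()}

class RuleExecModule:
    """Returns the rule activated by a received command (toy substring match)."""
    def activate(self, command, rules):
        return next((r for r in rules if command in r["condition"]), None)

class ConditionCheckModule:
    """Detects whether an observed situation matches the rule's condition."""
    def check(self, rule, situation):
        return situation == rule["condition"]

class ActionExecModule:
    """Executes the rule's action when the situation is detected."""
    def execute(self, rule):
        return "executed: " + rule["action"]

config, runner = RuleConfigModule(), RuleExecModule()
checker, actor = ConditionCheckModule(), ActionExecModule()

rules = [config.recognize("if home then turn on wifi")]
active = runner.activate("home", rules)
if checker.check(active, "home"):
    print(actor.execute(active))   # executed: turn on wifi
```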
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910012868.6A CN109739469B (en) | 2012-09-20 | 2013-09-22 | Context-aware service providing method and apparatus for user device |
CN201910012570.5A CN109683848B (en) | 2012-09-20 | 2013-09-22 | Context awareness service providing method and apparatus of user device |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2012-0104357 | 2012-09-20 | ||
KR20120104357 | 2012-09-20 | ||
KR1020130048755A KR102070196B1 (en) | 2012-09-20 | 2013-04-30 | Method and apparatus for providing context aware service in a user device |
KR10-2013-0048755 | 2013-04-30 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910012868.6A Division CN109739469B (en) | 2012-09-20 | 2013-09-22 | Context-aware service providing method and apparatus for user device |
CN201910012570.5A Division CN109683848B (en) | 2012-09-20 | 2013-09-22 | Context awareness service providing method and apparatus of user device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103677261A true CN103677261A (en) | 2014-03-26 |
CN103677261B CN103677261B (en) | 2019-02-01 |
Family
ID=49231281
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310432058.9A Active CN103677261B (en) | 2012-09-20 | 2013-09-22 | The context aware service provision method and equipment of user apparatus |
CN201910012868.6A Active CN109739469B (en) | 2012-09-20 | 2013-09-22 | Context-aware service providing method and apparatus for user device |
CN201910012570.5A Active CN109683848B (en) | 2012-09-20 | 2013-09-22 | Context awareness service providing method and apparatus of user device |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910012868.6A Active CN109739469B (en) | 2012-09-20 | 2013-09-22 | Context-aware service providing method and apparatus for user device |
CN201910012570.5A Active CN109683848B (en) | 2012-09-20 | 2013-09-22 | Context awareness service providing method and apparatus of user device |
Country Status (6)
Country | Link |
---|---|
US (2) | US10042603B2 (en) |
EP (2) | EP3435645A1 (en) |
JP (1) | JP6475908B2 (en) |
CN (3) | CN103677261B (en) |
AU (2) | AU2013231030B2 (en) |
WO (1) | WO2014046475A1 (en) |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
CN108762712B (en) * | 2018-05-30 | 2021-10-08 | Oppo广东移动通信有限公司 | Electronic device control method, electronic device control device, storage medium and electronic device |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
DK201870355A1 (en) | 2018-06-01 | 2019-12-16 | Apple Inc. | Virtual assistant operation in multi-device environments |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
DK179822B1 (en) | 2018-06-01 | 2019-07-12 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US10504518B1 (en) | 2018-06-03 | 2019-12-10 | Apple Inc. | Accelerated task performance |
US11929160B2 (en) | 2018-07-16 | 2024-03-12 | Kaleo, Inc. | Medicament delivery devices with wireless connectivity and compliance detection |
WO2020046269A1 (en) | 2018-08-27 | 2020-03-05 | Google Llc | Algorithmic determination of a story reader's discontinuation of reading |
TW202009743A (en) * | 2018-08-29 | 2020-03-01 | 香港商阿里巴巴集團服務有限公司 | Method, system, and device for interfacing with a terminal with a plurality of response modes |
CN112889022A (en) * | 2018-08-31 | 2021-06-01 | 谷歌有限责任公司 | Dynamic adjustment of story time special effects based on contextual data |
US11526671B2 (en) | 2018-09-04 | 2022-12-13 | Google Llc | Reading progress estimation based on phonetic fuzzy matching and confidence interval |
WO2020050822A1 (en) | 2018-09-04 | 2020-03-12 | Google Llc | Detection of story reader progress for pre-caching special effects |
CN111226194A (en) * | 2018-09-27 | 2020-06-02 | 三星电子株式会社 | Method and system for providing interactive interface |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
GB2582910A (en) * | 2019-04-02 | 2020-10-14 | Nokia Technologies Oy | Audio codec extension |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
DK201970511A1 (en) | 2019-05-31 | 2021-02-15 | Apple Inc | Voice identification in digital assistant systems |
US11363071B2 (en) | 2019-05-31 | 2022-06-14 | Apple Inc. | User interfaces for managing a local network |
DK180129B1 (en) | 2019-05-31 | 2020-06-02 | Apple Inc. | User activity shortcut suggestions |
US10904029B2 (en) | 2019-05-31 | 2021-01-26 | Apple Inc. | User interfaces for managing controllable external devices |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
KR20210072471A (en) * | 2019-12-09 | 2021-06-17 | 현대자동차주식회사 | Apparatus for recognizing voice command, system having the same and method thereof |
WO2021171475A1 (en) * | 2020-02-27 | 2021-09-02 | 三菱電機株式会社 | Joining assistance device, joining assistance system, and joining assistance method |
JP7248615B2 (en) * | 2020-03-19 | 2023-03-29 | ヤフー株式会社 | Output device, output method and output program |
US11513667B2 (en) | 2020-05-11 | 2022-11-29 | Apple Inc. | User interface for audio message |
US11755276B2 (en) | 2020-05-12 | 2023-09-12 | Apple Inc. | Reducing description length based on confidence |
CN111857457A (en) * | 2020-06-22 | 2020-10-30 | 北京百度网讯科技有限公司 | Cloud mobile phone control method and device, electronic equipment and readable storage medium |
IT202000021253A1 (en) * | 2020-09-14 | 2022-03-14 | Sistem Evo S R L | IT platform based on artificial intelligence systems to support IT security |
USD1016082S1 (en) * | 2021-06-04 | 2024-02-27 | Samsung Electronics Co., Ltd. | Display screen or portion thereof with graphical user interface |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1270360A (en) * | 1999-04-13 | 2000-10-18 | 索尼国际(欧洲)股份有限公司 | Speech interface for simultaneous use of facility and application |
US6622119B1 (en) * | 1999-10-30 | 2003-09-16 | International Business Machines Corporation | Adaptive command predictor and method for a natural language dialog system |
US20050054290A1 (en) * | 2000-08-29 | 2005-03-10 | Logan James D. | Rules based methods and apparatus for generating notification messages based on the proximity of electronic devices to one another |
US20050283532A1 (en) * | 2003-11-14 | 2005-12-22 | Kim Doo H | System and method for multi-modal context-sensitive applications in home network environment |
US20060208872A1 (en) * | 2005-03-02 | 2006-09-21 | Matsushita Electric Industrial Co., Ltd. | Rule based intelligent alarm management system for digital surveillance system |
US20070073870A1 (en) * | 2005-09-23 | 2007-03-29 | Jun-Hee Park | User interface apparatus for context-aware environments, device controlling apparatus and method thereof |
CN101002175A (en) * | 2004-07-01 | 2007-07-18 | 诺基亚公司 | Method, apparatus and computer program product to utilize context ontology in mobile device application personalization |
CN101267600A (en) * | 2007-03-07 | 2008-09-17 | 艾格瑞系统有限公司 | Communications server for handling parallel voice and data connections and method of using the same |
CN101557432A (en) * | 2008-04-08 | 2009-10-14 | Lg电子株式会社 | Mobile terminal and menu control method thereof |
US20110038367A1 (en) * | 2009-08-11 | 2011-02-17 | Eolas Technologies Incorporated | Automated communications response system |
CN102640480A (en) * | 2009-12-04 | 2012-08-15 | 高通股份有限公司 | Creating and utilizing a context |
Family Cites Families (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5917489A (en) * | 1997-01-31 | 1999-06-29 | Microsoft Corporation | System and method for creating, editing, and distributing rules for processing electronic messages |
US6775658B1 (en) * | 1999-12-21 | 2004-08-10 | Mci, Inc. | Notification by business rule trigger control |
US8938256B2 (en) * | 2000-08-29 | 2015-01-20 | Intel Corporation | Communication and control system using location aware devices for producing notification messages operating under rule-based control |
JP2002283259A (en) | 2001-03-27 | 2002-10-03 | Sony Corp | Operation teaching device and operation teaching method for robot device and storage medium |
US20020144259A1 (en) * | 2001-03-29 | 2002-10-03 | Philips Electronics North America Corp. | Method and apparatus for controlling a media player based on user activity |
US7324947B2 (en) * | 2001-10-03 | 2008-01-29 | Promptu Systems Corporation | Global speech user interface |
US20030147624A1 (en) * | 2002-02-06 | 2003-08-07 | Koninklijke Philips Electronics N.V. | Method and apparatus for controlling a media player based on a non-user event |
KR100434545B1 (en) | 2002-03-15 | 2004-06-05 | 삼성전자주식회사 | Method and apparatus for controlling devices connected with home network |
JP2006221270A (en) | 2005-02-08 | 2006-08-24 | Nec Saitama Ltd | Multitask system and method of mobile terminal device with voice recognition function |
US8554599B2 (en) | 2005-03-25 | 2013-10-08 | Microsoft Corporation | Work item rules for a work item tracking system |
US7640160B2 (en) | 2005-08-05 | 2009-12-29 | Voicebox Technologies, Inc. | Systems and methods for responding to natural language speech utterance |
JP2007220045A (en) * | 2006-02-20 | 2007-08-30 | Toshiba Corp | Communication support device, method, and program |
US8311836B2 (en) * | 2006-03-13 | 2012-11-13 | Nuance Communications, Inc. | Dynamic help including available speech commands from content contained within speech grammars |
JP4786384B2 (en) * | 2006-03-27 | 2011-10-05 | 株式会社東芝 | Audio processing apparatus, audio processing method, and audio processing program |
JP2007305039A (en) | 2006-05-15 | 2007-11-22 | Sony Corp | Information processing apparatus and method, and program |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US8150699B2 (en) | 2007-05-17 | 2012-04-03 | Redstart Systems, Inc. | Systems and methods of a structured grammar for a speech recognition command system |
US8620652B2 (en) * | 2007-05-17 | 2013-12-31 | Microsoft Corporation | Speech recognition macro runtime |
KR20090053179A (en) | 2007-11-22 | 2009-05-27 | 주식회사 케이티 | Service controlling apparatus and method for context-aware knowledge service |
EP2220880B1 (en) * | 2007-12-14 | 2013-11-20 | BlackBerry Limited | Method, computer-readable medium and system for a context aware mechanism for use in presence and location |
US8958848B2 (en) | 2008-04-08 | 2015-02-17 | Lg Electronics Inc. | Mobile terminal and menu control method thereof |
KR20100001928A (en) * | 2008-06-27 | 2010-01-06 | 중앙대학교 산학협력단 | Service apparatus and method based on emotional recognition |
US8489599B2 (en) | 2008-12-02 | 2013-07-16 | Palo Alto Research Center Incorporated | Context and activity-driven content delivery and interaction |
KR20100100175A (en) * | 2009-03-05 | 2010-09-15 | 중앙대학교 산학협력단 | Context-aware reasoning system for personalized u-city services |
KR101566379B1 (en) * | 2009-05-07 | 2015-11-13 | 삼성전자주식회사 | Method for activating user function based on a type of input signal and portable device using the same |
US10255566B2 (en) | 2011-06-03 | 2019-04-09 | Apple Inc. | Generating and processing task items that represent tasks to perform |
KR20110023977A (en) | 2009-09-01 | 2011-03-09 | 삼성전자주식회사 | Method and apparatus for managing widget in mobile terminal |
US20110099507A1 (en) | 2009-10-28 | 2011-04-28 | Google Inc. | Displaying a collection of interactive elements that trigger actions directed to an item |
TWI438675B (en) | 2010-04-30 | 2014-05-21 | Ibm | Method, device and computer program product for providing a context-aware help content |
US8359020B2 (en) * | 2010-08-06 | 2013-01-22 | Google Inc. | Automatically monitoring for voice input based on context |
US10042603B2 (en) | 2012-09-20 | 2018-08-07 | Samsung Electronics Co., Ltd. | Context aware service provision method and apparatus of user device |
- 2013
- 2013-09-16 US US14/028,021 patent/US10042603B2/en active Active
- 2013-09-17 WO PCT/KR2013/008429 patent/WO2014046475A1/en active Application Filing
- 2013-09-17 AU AU2013231030A patent/AU2013231030B2/en active Active
- 2013-09-18 EP EP18188989.0A patent/EP3435645A1/en active Pending
- 2013-09-18 EP EP13185006.7A patent/EP2723049B1/en active Active
- 2013-09-20 JP JP2013195782A patent/JP6475908B2/en active Active
- 2013-09-22 CN CN201310432058.9A patent/CN103677261B/en active Active
- 2013-09-22 CN CN201910012868.6A patent/CN109739469B/en active Active
- 2013-09-22 CN CN201910012570.5A patent/CN109683848B/en active Active
- 2018
- 2018-08-03 US US16/054,336 patent/US10684821B2/en active Active
- 2018-11-09 AU AU2018260953A patent/AU2018260953B2/en active Active
Cited By (66)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11671920B2 (en) | 2007-04-03 | 2023-06-06 | Apple Inc. | Method and system for operating a multifunction portable electronic device using voice-activation |
US11900936B2 (en) | 2008-10-02 | 2024-02-13 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US11321116B2 (en) | 2012-05-15 | 2022-05-03 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US11862186B2 (en) | 2013-02-07 | 2024-01-02 | Apple Inc. | Voice trigger for a digital assistant |
US11557310B2 (en) | 2013-02-07 | 2023-01-17 | Apple Inc. | Voice trigger for a digital assistant |
US11699448B2 (en) | 2014-05-30 | 2023-07-11 | Apple Inc. | Intelligent assistant for home automation |
US11810562B2 (en) | 2014-05-30 | 2023-11-07 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US11838579B2 (en) | 2014-06-30 | 2023-12-05 | Apple Inc. | Intelligent automated assistant for TV user interactions |
CN104346451A (en) * | 2014-10-29 | 2015-02-11 | 山东大学 | Situation awareness system based on user feedback, as well as operating method and application thereof |
CN104407702A (en) * | 2014-11-26 | 2015-03-11 | 三星电子(中国)研发中心 | Method, device and system for performing actions based on context awareness |
CN104468814A (en) * | 2014-12-22 | 2015-03-25 | 齐玉田 | Wireless control system and method of internet of things |
CN104468814B (en) * | 2014-12-22 | 2018-04-13 | 齐玉田 | Wireless control system for Internet of things and method |
CN104505091A (en) * | 2014-12-26 | 2015-04-08 | 湖南华凯文化创意股份有限公司 | Human-machine voice interaction method and human-machine voice interaction system |
CN104505091B (en) * | 2014-12-26 | 2018-08-21 | 湖南华凯文化创意股份有限公司 | Human-machine voice interaction method and system |
US11842734B2 (en) | 2015-03-08 | 2023-12-12 | Apple Inc. | Virtual assistant activation |
CN106161054A (en) * | 2015-03-31 | 2016-11-23 | 腾讯科技(深圳)有限公司 | Equipment configuration method, device and system |
CN104902072B (en) * | 2015-04-14 | 2017-11-07 | 广东欧珀移动通信有限公司 | Terminal reminding method and device |
CN104902072A (en) * | 2015-04-14 | 2015-09-09 | 深圳市欧珀通信软件有限公司 | Terminal prompting method and device |
US11947873B2 (en) | 2015-06-29 | 2024-04-02 | Apple Inc. | Virtual assistant for media playback |
CN105224278B (en) * | 2015-08-21 | 2019-02-22 | 百度在线网络技术(北京)有限公司 | Interactive voice service processing method and device |
CN105224278A (en) * | 2015-08-21 | 2016-01-06 | 百度在线网络技术(北京)有限公司 | Interactive voice service processing method and device |
US11853536B2 (en) | 2015-09-08 | 2023-12-26 | Apple Inc. | Intelligent automated assistant in a media environment |
US11954405B2 (en) | 2015-09-08 | 2024-04-09 | Apple Inc. | Zero latency digital assistant |
US11809483B2 (en) | 2015-09-08 | 2023-11-07 | Apple Inc. | Intelligent automated assistant for media search and playback |
US11550542B2 (en) | 2015-09-08 | 2023-01-10 | Apple Inc. | Zero latency digital assistant |
US11809886B2 (en) | 2015-11-06 | 2023-11-07 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11853647B2 (en) | 2015-12-23 | 2023-12-26 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US11657820B2 (en) | 2016-06-10 | 2023-05-23 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US11749275B2 (en) | 2016-06-11 | 2023-09-05 | Apple Inc. | Application integration with a digital assistant |
US11809783B2 (en) | 2016-06-11 | 2023-11-07 | Apple Inc. | Intelligent device arbitration and control |
US11467802B2 (en) | 2017-05-11 | 2022-10-11 | Apple Inc. | Maintaining privacy of personal information |
US11862151B2 (en) | 2017-05-12 | 2024-01-02 | Apple Inc. | Low-latency intelligent automated assistant |
US11580990B2 (en) | 2017-05-12 | 2023-02-14 | Apple Inc. | User-specific acoustic models |
US11837237B2 (en) | 2017-05-12 | 2023-12-05 | Apple Inc. | User-specific acoustic models |
US11538469B2 (en) | 2017-05-12 | 2022-12-27 | Apple Inc. | Low-latency intelligent automated assistant |
US11675829B2 (en) | 2017-05-16 | 2023-06-13 | Apple Inc. | Intelligent automated assistant for media exploration |
CN111295888B (en) * | 2017-11-01 | 2021-09-10 | 松下知识产权经营株式会社 | Action guide system, action guide method and recording medium |
CN111295888A (en) * | 2017-11-01 | 2020-06-16 | 松下知识产权经营株式会社 | Action guidance system, action guidance method, and program |
US11900923B2 (en) | 2018-05-07 | 2024-02-13 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US11487364B2 (en) | 2018-05-07 | 2022-11-01 | Apple Inc. | Raise to speak |
US11907436B2 (en) | 2018-05-07 | 2024-02-20 | Apple Inc. | Raise to speak |
US11360577B2 (en) | 2018-06-01 | 2022-06-14 | Apple Inc. | Attention aware virtual assistant dismissal |
US11630525B2 (en) | 2018-06-01 | 2023-04-18 | Apple Inc. | Attention aware virtual assistant dismissal |
CN112534799A (en) * | 2018-08-08 | 2021-03-19 | 三星电子株式会社 | Method for executing function based on voice and electronic device supporting the same |
US11615788B2 (en) | 2018-08-08 | 2023-03-28 | Samsung Electronics Co., Ltd. | Method for executing function based on voice and electronic device supporting the same |
US11893992B2 (en) | 2018-09-28 | 2024-02-06 | Apple Inc. | Multi-modal inputs for voice commands |
CN109829108A (en) * | 2019-01-28 | 2019-05-31 | 北京三快在线科技有限公司 | Information recommendation method, device, electronic equipment and readable storage medium |
CN109829108B (en) * | 2019-01-28 | 2020-12-04 | 北京三快在线科技有限公司 | Information recommendation method and device, electronic equipment and readable storage medium |
US11783815B2 (en) | 2019-03-18 | 2023-10-10 | Apple Inc. | Multimodality in digital assistant systems |
CN113678133A (en) * | 2019-04-05 | 2021-11-19 | 三星电子株式会社 | System and method for context-rich attention memory network with global and local encoding for dialog break detection |
CN111901481A (en) * | 2019-05-06 | 2020-11-06 | 苹果公司 | Computer-implemented method, electronic device, and storage medium |
US11705130B2 (en) | 2019-05-06 | 2023-07-18 | Apple Inc. | Spoken notifications |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11675491B2 (en) | 2019-05-06 | 2023-06-13 | Apple Inc. | User configurable task triggers |
CN111901481B (en) * | 2019-05-06 | 2022-04-05 | 苹果公司 | Computer-implemented method, electronic device, and storage medium |
US11888791B2 (en) | 2019-05-21 | 2024-01-30 | Apple Inc. | Providing message response suggestions |
US11790914B2 (en) | 2019-06-01 | 2023-10-17 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11914848B2 (en) | 2020-05-11 | 2024-02-27 | Apple Inc. | Providing relevant data items based on context |
US11765209B2 (en) | 2020-05-11 | 2023-09-19 | Apple Inc. | Digital assistant hardware abstraction |
US11924254B2 (en) | 2020-05-11 | 2024-03-05 | Apple Inc. | Digital assistant hardware abstraction |
US11838734B2 (en) | 2020-07-20 | 2023-12-05 | Apple Inc. | Multi-device audio adjustment coordination |
US11696060B2 (en) | 2020-07-21 | 2023-07-04 | Apple Inc. | User identification using headphones |
US11750962B2 (en) | 2020-07-21 | 2023-09-05 | Apple Inc. | User identification using headphones |
CN113377426A (en) * | 2021-07-01 | 2021-09-10 | 中煤航测遥感集团有限公司 | Vehicle supervision rule configuration method and device, computer equipment and storage medium |
CN114285930A (en) * | 2021-12-10 | 2022-04-05 | 杭州逗酷软件科技有限公司 | Interaction method, interaction device, electronic equipment and storage medium |
CN114285930B (en) * | 2021-12-10 | 2024-02-23 | 杭州逗酷软件科技有限公司 | Interaction method, device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109739469A (en) | 2019-05-10 |
US10684821B2 (en) | 2020-06-16 |
US10042603B2 (en) | 2018-08-07 |
CN109739469B (en) | 2022-07-01 |
WO2014046475A1 (en) | 2014-03-27 |
EP2723049B1 (en) | 2018-08-15 |
AU2018260953B2 (en) | 2020-06-04 |
US20140082501A1 (en) | 2014-03-20 |
AU2013231030B2 (en) | 2018-08-09 |
EP3435645A1 (en) | 2019-01-30 |
CN109683848A (en) | 2019-04-26 |
JP6475908B2 (en) | 2019-02-27 |
AU2013231030A1 (en) | 2014-04-03 |
EP2723049A1 (en) | 2014-04-23 |
CN103677261B (en) | 2019-02-01 |
AU2018260953A1 (en) | 2018-11-29 |
US20180341458A1 (en) | 2018-11-29 |
JP2014064278A (en) | 2014-04-10 |
CN109683848B (en) | 2023-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103677261A (en) | Context aware service provision method and apparatus of user equipment | |
US11907615B2 (en) | Context aware service provision method and apparatus of user device | |
US20200304445A1 (en) | Terminal device and control method therefor | |
KR20180060328A (en) | Electronic apparatus for processing multi-modal input, method for processing multi-modal input and sever for processing multi-modal input | |
CN105320425A (en) | Context-based presentation of user interface | |
CN103282957A (en) | Automatically monitoring for voice input based on context | |
CN106104677A (en) | Visually indicating of the action that the voice being identified is initiated | |
CN105103457A (en) | Portable terminal, hearing aid, and method of indicating positions of sound sources in the portable terminal | |
CN103404118A (en) | Self-aware profile switching on a mobile computing device | |
CN103765787A (en) | Method and apparatus for managing schedules in a portable terminal | |
KR20180109465A (en) | Electronic device and method for screen controlling for processing user input using the same | |
CN104137130A (en) | Task performing method, system and computer-readable recording medium | |
KR101993368B1 (en) | Electronic apparatus for processing multi-modal input, method for processing multi-modal input and sever for processing multi-modal input | |
KR102084963B1 (en) | Electro device for decreasing consumption power and method for controlling thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||