WO2017209571A1 - Method and electronic device for predicting response - Google Patents

Method and electronic device for predicting response

Info

Publication number
WO2017209571A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
message
response
contextual
context
Prior art date
Application number
PCT/KR2017/005812
Other languages
French (fr)
Inventor
Barath Raj KANDURRAJA
Balaji Vijayanagaram Ramalingam
Harshavardhana POOJARI
Raju Suresh DIXIT
Sreevatsa Dwaraka BHAMIDIPATI
Srinivasa Rao SIDDI
Kalyan KAKANI
Vibhav AGARWAL
Yashwant SAINI
Original Assignee
Samsung Electronics Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd.
Priority to EP17807064.5A
Publication of WO2017209571A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/274Converting codes to words; Guess-ahead of partial word inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis

Definitions

  • the present disclosure relates to electronic devices. More particularly, the present disclosure relates to a method and an electronic device for predicting response.
  • a word prediction technique involves an n-gram language model.
  • the goal of the n-gram language model is to compute the probability of a sentence or sequence of words and to use it to compute the probability of a suggested word (i.e., an upcoming word).
  • bigram or trigram models are used considering the speed and size of language models.
  • the trigram model includes unigram, bigram and trigram features.
  • the unigram feature is based on the current word being typed.
  • the bigram feature is based on the current word being typed and the previous word that is typed.
  • the trigram feature is based on the current word being typed and the previous two words that are typed.
  • however, these models have a long-distance dependency problem. Hence, it is difficult to track longer sentences, and predictions may not be accurate.
  • if the n-gram language model is extended to 7-grams (for example), it may be able to track the previous 7 words and thus a longer sentence. However, the number of parameters grows exponentially with the length of the n-gram language model, which increases the complexity of training the language model, increases the training time of the language model, and makes storage and retrieval operations expensive in existing systems. Hence, it is not recommended to increase the size of the n-gram language model in the existing systems. Further, these models are based only on the current sentence being typed; the intention behind typing the sentence is ignored.
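  • For illustration, a minimal trigram-style lookup is sketched below; this is a toy example with a made-up corpus and counts, not the patent's implementation.

```python
# Toy trigram next-word prediction: P(w3 | w1, w2) estimated from raw counts.
# The corpus and counts are made up purely for illustration.
from collections import Counter, defaultdict

corpus = "how are you doing today . how are you feeling now . how are things".split()

trigram_counts = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    trigram_counts[(w1, w2)][w3] += 1

def predict_next(w1, w2, top_k=3):
    """Return the most probable next words given the previous two typed words."""
    counts = trigram_counts[(w1, w2)]
    total = sum(counts.values())
    return [(word, count / total) for word, count in counts.most_common(top_k)]

print(predict_next("how", "are"))  # e.g. [('you', 0.666...), ('things', 0.333...)]
```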
  • an aspect of the present disclosure is to provide a method and electronic device for predicting a response.
  • Another object of the embodiments herein is to provide a method for receiving at least one message, and identifying at least one contextual category of the at least one message.
  • Another object of the embodiments herein is to provide a method for predicting at least one response for the at least one message from a language model (e.g., local language model) based on the at least one contextual category.
  • Another object of the embodiments herein is to provide a method for causing to display the at least one predicted response on a screen of the electronic device.
  • Yet another object of the embodiments herein is to provide a method for predicting at least one response for the at least one input topic from the first application based on at least one contextual event.
  • Yet another object of the embodiments herein is to provide a method for causing to display the at least one predicted response on a screen of the electronic device.
  • Yet another object of the embodiments herein is to provide a method for receiving an input topic and identifying at least one contextual category of the input topic.
  • Yet another object of the embodiments herein is to provide a method for predicting at least one response for the input topic from a language model based on the at least one contextual category, and causing to display the at least one predicted response on a screen of the electronic device.
  • a method for predicting response at an electronic device is provided.
  • a controlling method of an electronic apparatus includes receiving at least one message at the electronic device. Further, the method includes identifying at least one contextual category of the at least one message. Further, the method includes predicting at least one response for the at least one message from a language model based on the at least one contextual category. Furthermore, the method includes causing to display the at least one predicted response on a screen of the electronic device.
  • the contextual category of the at least one message is automatically identified based on at least one context indicative.
  • the at least one context indicative is dynamically determined based on at least one of content available in the at least one message, user activities, events defined in the electronic device, a user associated with the at least one message, a user context, and a context of the electronic device.
  • the at least one message is displayed within a notification area of the electronic device.
  • the at least one predicted response is displayed within the notification area of the electronic device.
  • the at least one response for the at least one message is predicted in response to an input on the at least one received message, wherein the at least one predicted response corresponds to the at least one received message on which the input is received.
  • embodiments herein provide a method for predicting a response at an electronic device.
  • the method includes receiving an input topic from a first application. Further, the method includes identifying at least one contextual event associated with a second application. Further, the method includes predicting at least one response for the at least one input topic from the first application based on at least one contextual event. Furthermore, the method includes causing to display the at least one predicted response on a screen of the electronic device.
  • the at least one contextual event is a time bound event.
  • the at least one contextual event associated with the second application is dynamically determined based on at least one context indicative associated with the input topic of the first application.
  • the at least one context indicative is determined based on at least one of content available in the input topic, context of the first application, user activities, and events defined in the electronic device.
  • embodiments herein provide a method for predicting a response at an electronic device.
  • the method includes receiving an input topic. Further, the method includes identifying at least one contextual category of the input topic. Further, the method includes predicting at least one response for the input topic from a language model based on the at least one contextual category. Furthermore, the method includes causing to display the at least one predicted response on a screen of the electronic device.
  • the input topic is one of a topic selected from a written communication, and a topic formed based on at least one input field available in an application.
  • the at least one contextual category of the input topic is automatically identified based on at least one context indicative.
  • the at least one context indicative is dynamically determined based on at least one of content associated with the input topic, user activities, events defined in the electronic device, a user context, and a context of the electronic device.
  • the electronic device includes a context identifier configured to receive at least one message. Further, the electronic device includes a contextual category detector configured to identify at least one contextual category of the at least one message. Furthermore, the electronic device includes a response predictor configured to: predict at least one response for the at least one message from a language model based on the at least one contextual category, and cause to display the at least one predicted response on a screen.
  • the electronic device includes a context identifier configured to receive an input topic from a first application. Further, the electronic device includes a contextual category detector configured to identify at least one contextual event associated with a second application. Furthermore, the electronic device includes a response predictor configured to: predict at least one response for the at least one input topic from the first application based on at least one contextual event, and cause to display the at least one predicted response on a screen of the electronic device.
  • the electronic device includes a context identifier configured to receive an input topic. Further, the electronic device includes a contextual category detector configured to identify at least one contextual category of the input topic. Furthermore, the electronic device includes a response predictor configured to: predict at least one response for the input topic from a language model based on the at least one contextual category, and cause to display the at least one predicted response on a screen.
  • FIGS. 1A to 1D illustrate various types of N-Gram language models according to the related art
  • FIG. 2 illustrates a User Interface (UI) for responding to the message using at least one predicted response, according to the related art
  • FIG. 3A illustrates a schematic view of an (N+X) gram language model/ (NN+X) language model, according to an embodiment of the present disclosure
  • FIG. 3B illustrates the UI for responding to the message using at least one predicted response, according to an embodiment of the present disclosure
  • FIG. 4 is a block diagram illustrating various hardware elements of an electronic device, according to an embodiment of the present disclosure
  • FIG. 5 is an overview illustrating communication among various hardware elements of the electronic device for automatically predicting the response, according to an embodiment of the present disclosure
  • FIGS. 6A and 6B illustrate a UI for predicting subsequent meaningful prediction during composing of a response to the received message, according to embodiments disclosed herein;
  • FIG. 7 is a flow diagram illustrating a method for predicting the response, according to an embodiment of the present disclosure.
  • FIG. 8 illustrates a UI for responding to the message using the at least one predicted response, according to an embodiment of the present disclosure
  • FIG. 9A is a step-by-step illustration of predicting a response for a selected message from a plurality of messages, according to an embodiment of the present disclosure
  • FIG. 9B is a step-by-step illustration of predicting a response for a selected input topic, according to an embodiment of the present disclosure.
  • FIG. 10 is a flow diagram illustrating a method for predicting the response based on the statistical modelling manager, according to embodiments as disclosed herein;
  • FIG. 11 is a graph illustrating computation of dynamic interpolation weights with time bound, according to an embodiment of the present disclosure
  • FIGS. 12A and 12B illustrate a UI in which the contextual event from the received message is identified and extended from first application to second application, according to an embodiment of the present disclosure
  • FIGS. 13A and 13B illustrate another UI in which the contextual event from the received message is identified and extended from first application to second application, according to an embodiment of the present disclosure
  • FIG. 14A illustrates a UI in which a contextual related application based on the received message is predicted and displayed on the screen of the electronic device, according to an embodiment of the present disclosure
  • FIG. 14B illustrates a UI in which the predicted response for the message is displayed within the notification area of the electronic device, according to an embodiment of the present disclosure
  • FIG.15 illustrates a UI in which multiple response messages are predicted based on contextual grouping of the related messages, according to an embodiment of the present disclosure
  • FIGS. 16A and 16B illustrate longer pattern scenarios in which the meaningful response (i.e., next suggested word) is predicted in a longer pattern sentence, according to an embodiment of the present disclosure
  • FIG. 17 is a flow diagram illustrating a method for predicting the response by understanding input views rendered on the screen of the electronic device, according to an embodiment of the present disclosure
  • FIGS. 18A to 18C illustrate the UI displaying at least one predicted response by understanding input views rendered on the screen of the electronic device, according to an embodiment of the present disclosure
  • FIGS. 19A to 19C illustrate the UI displaying multiple predicted responses based on at least one event associated with at least one participant, according to an embodiment of the present disclosure
  • FIGS. 20A to 20C illustrate the UI displaying predicted response based on the context associated with the user and the electronic device, according to an embodiment of the present disclosure.
  • FIGS. 21A to 21D illustrate various tables tabulating the response predictions and next suggested words for different samples of inputs, according to an embodiment of the present disclosure.
  • circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports, such as printed circuit boards and the like.
  • circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block.
  • Each block of the various embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the disclosure.
  • the blocks of the various embodiments may be physically combined into more complex blocks without departing from the scope of the disclosure.
  • the method includes receiving at least one message at the electronic device. Further, the method includes identifying at least one contextual category of the at least one message. Further, the method includes predicting at least one response for the at least one message from a language model based on the at least one contextual category. Furthermore, the method includes causing to display the at least one predicted response on a screen of the electronic device.
  • the predictions of word(s), graphical elements, or the like are determined and updated only during typing. For example, consider a scenario in which a user of the electronic device receives a message from an "X" source. Once the electronic device detects an input in the event of responding to the message, a keyboard may automatically launch, aiding the user in composing the response to the message. Further, as the user starts composing the response for the message, the existing systems may determine the text/word during typing and predict the related text/words, graphical elements, or the like. Thus, the predicted words are determined and updated only during typing. Unlike the conventional methods and systems, the proposed method can be used to provide the prediction of words even before the user starts composing the response to the message.
  • the proposed method can be used to predict the words which are relevant to the context of the conversation/the received message. This improves the user experience by providing relevant and related graphical elements while composing.
  • the proposed method of the present disclosure can be used to provide predictions for longer sentences or providing predictions for the sentences being typed.
  • the proposed method can be used to perform the word prediction based on the contextual input class.
  • when the user has to respond to multiple messages, relevant predictions are provided upon selecting a thread.
  • the message or thread selection can also be done automatically.
  • FIGS. 1A to 1D illustrate various types of N-Gram language models, according to the related art.
  • the N-Gram language model is derived from equation (1) (shown below).
  • the N-Gram language model includes a general n-gram language model that predicts/suggests the current word based on the previous set of words.
  • the N-Gram language model can determine the set of words associated with input-1 and input-2, and provide the predictions/suggestions based on the determined set of words associated with the input-1 and the input-2.
  • the predicted/suggested words can include, for example, "is", "are", "so", or the like, which are not meaningful.
  • the N-Gram language model for a class is derived from equation (2) (shown below).
  • the N-Gram language model for a class includes predicting/suggesting the current word based on the previous set of words and their respective classes.
  • the N-Gram language model for a phrase is derived from equation (3) (shown below).
  • the N-Gram language model for a phrase includes predicting one or more words (a phrase) based on the previous set of words.
  • the N-Gram language model for a phrase class is derived from equation (4) (shown below).
  • the N-Gram language model for a phrase class includes predicting one or more words (a phrase) based on the previous set of words and their respective classes.
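  • Equations (1) to (4) are not reproduced in this text. A standard textbook formulation of these four model types, which the description appears to follow, is sketched below; the notation (word w, class c, phrase p) is an assumption rather than the patent's own.

```latex
% Standard n-gram formulations (sketch only; notation is assumed, not quoted from the patent)
% (1) word n-gram: the next word given the previous n-1 words
P(w_i \mid w_1 \dots w_{i-1}) \approx P(w_i \mid w_{i-n+1} \dots w_{i-1})
% (2) class-based n-gram: the word probability factored through word classes c
P(w_i \mid w_{i-n+1} \dots w_{i-1}) \approx P(w_i \mid c_i)\, P(c_i \mid c_{i-n+1} \dots c_{i-1})
% (3) phrase n-gram: a multi-word phrase p_k given the previous words
P(p_k \mid w_{i-n+1} \dots w_{i-1})
% (4) phrase-class n-gram: the phrase given the previous words and their classes
P(p_k \mid w_{i-n+1} \dots w_{i-1}, c_{i-n+1} \dots c_{i-1})
```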
  • FIG. 2 illustrates a User Interface (UI) for responding to the message using at least one predicted response, according to the related art.
  • the electronic device 100 may have a message transcript 200 showing a conversation between the user of the electronic device 100 and one or more participants, such as participant 204.
  • the message transcript 200 may include a message 202 received from (an electronic device used by) the participant 204.
  • the user of the electronic device 100 may intend to respond to the message 202. According to the existing mechanisms, only default graphical elements 206, i.e., "Ok", "I", or the like, are predicted and displayed on the screen of the electronic device 100. Alternately, the default graphical elements 206 may be prone to change (i.e., update) as the user starts typing (i.e., responding) to the message 202.
  • the language model can be an n-gram or a neural net (NN) based language model.
  • FIG. 3A illustrates a schematic view of a (N+X) gram language model/ (NN+X) language model, according to an embodiment of the present disclosure.
  • the electronic device 100 can utilize the contextual category of the input(s), i.e., input-1 and input-2, along with the bigram or trigram features of the input(s); this can be derived using equations (5) and (6), sketched below.
  • the contextual category of the input(s) can be identified by parsing the screen, i.e., by the parts of speech associated with the contents available on the screen, sentence classification, dependency parsing, or the like.
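  • Equations (5) and (6) are likewise not reproduced here. A plausible reading, treating the contextual category X as an additional conditioning variable on top of the trigram history, is sketched below; the exact form is an assumption, not the patent's formula.

```latex
% (5) n-gram prediction conditioned on the trigram history plus the contextual category X
P(w_i \mid w_{i-2}, w_{i-1}, X)
% (6) a neural-net counterpart: the category X supplied as an additional input feature
P_{NN}(w_i \mid w_{i-2}, w_{i-1}, X) = \mathrm{softmax}\!\left(f_\theta(w_{i-2}, w_{i-1}, X)\right)_{w_i}
```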
  • FIG. 3B illustrates the UI for responding to the message using at least one predicted response, according to an embodiment of the present disclosure.
  • the electronic device 100 can provide the meaningful predictions before the user of the electronic device 100 starts responding (i.e., typing) to the message.
  • the meaningful predictions are based on the context indicative of the received message.
  • the proposed method can be used to provide at least one predicted response from a language model based on the at least one contextual category of the message.
  • the language model (e.g., an "N" gram language model/NN language model) utilizes the contextual category ("X") of the message, e.g., (N+X)/(NN+X), as illustrated in FIG. 3A.
  • the electronic device 100 may have a message transcript 300 showing a conversation between the user of the electronic device 100 and one or more participants, such as participant 304.
  • the message transcript 300 may include a message 302, received from (an electronic device used by) the participant 304.
  • the proposed method can be used to determine at least one contextual category of the message 302 i.e., the contextual category of the received message 302 can be for example, "Appreciation”. Further, the proposed method can be used to predict at least one response 306 i.e., "Wow”, “congrats”, “Awesome”, or the like from the language model based on the at least one contextual category.
  • the proposed method can be used to predict and display the at least one response, even before the user starts composing the response message.
  • FIG. 4 is a block diagram illustrating various hardware elements of the electronic device, according to an embodiment of the present disclosure.
  • the electronic device 100 can include, for example, a mobile phone, a smart phone, Personal Digital Assistants (PDAs), a tablet, a wearable device, a computer, a laptop, etc.
  • the electronic device 100 can include a display and a touch-sensitive surface.
  • the electronic device 100 may support a variety of applications, such as a messaging application, a calendar application, a browser application, a word processing application, a telephone application, an e-mail application, an instant messaging application, a Short Message Service (SMS) message, a Multimedia Message Service (MMS) message, or the like.
  • applications may optionally require at least one of a keypad, a keyboard, a touch sensitive surface, or the like, for interacting with at least one feature of the at least one application. For example, adding a reminder is a feature of the calendar application, message composing is a feature of the messaging application, or the like.
  • the electronic device 100 may include a communicator 110, an information manager 120, a contextual category detector 130, and a response predictor 140. Further, the electronic device 100 may include a processor 160, (for example; a hardware unit, an apparatus, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), etc.,) communicatively coupled to a storage (memory) 150 (e.g., a volatile memory and/or a non-volatile memory). The storage 150 may include storage locations configured to be addressable through the processor 160.
  • the information manager 120, the contextual category detector 130, and the response predictor 140 may be coupled with the processor 160.
  • the information manager 120, the contextual category detector 130, and the response predictor 140 may be implemented by the processor 160.
  • the storage 150 can be coupled (or, communicatively coupled) with the processor 160, the communicator 110, the information manager 120, the contextual category detector 130, and the response predictor 140. In another embodiment, the storage 150 can be remotely located from the processor 160, the communicator 110, the information manager 120, the contextual category detector 130, and the response predictor 140.
  • the electronic device 100 includes a display 170 capable of being utilized to display on the screen of the electronic device 100.
  • the display 170 can be, for example, a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), Organic Light-Emitting Diode (OLED), a Light-emitting diode (LED), Electroluminescent Displays (ELDs), field emission display (FED), LPD (light emitting polymer display), etc.
  • the display 170 can be configured to display the one or more UI of the variety of applications.
  • the display 170 can be coupled (or, communicatively coupled) with the processor 160 and the storage 150. Further, the display 170 can be coupled (or, communicatively coupled) with the information manager 120, the contextual category detector 130, and the response predictor 140.
  • the communicator 110 facilitates communication with other devices over one or more external ports (e.g., Universal Serial Bus (USB), FIREWIRE, etc.).
  • the external port is adapted for coupling directly to other electronic devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). Further, the communicator 110 facilitates communication with the internal hardware elements of the electronic device 100.
  • the information manager 120 coupled with the communicator 110, can be configured to receive at least one message.
  • the at least one message can be at least one SMS message, SNS message, and the like that are associated with at least one application from the aforementioned variety of applications.
  • the at least one message can include at least one content.
  • the electronic device 100 can include a text input module (not shown) which may be a GUI component displayed on the screen of the display 170.
  • the GUI component can be, for example, virtual keypad, virtual keyboard, soft keyboards, and the like for entering the text in the variety of applications.
  • the electronic device 100 can include a Global positioning system (GPS) module (not shown) for determining the location of the electronic device 100 and provide this information to the variety of applications running in the electronic device 100.
  • the electronic device 100 can include one or more sensors e.g., accelerometer sensor, proximity sensor, temperature sensor, or the like.
  • the electronic device 100 can be configured to determine the context (e.g. weather related information, traffic related information, and the like) of the electronic device 100 using the one or more sensors in combination with the GPS module.
  • the contextual category detector 130 can be configured to identify the at least one contextual category of the at least one message.
  • the at least one contextual category of the at least one message is automatically identified based on at least one context indicative.
  • the at least one context indicative is dynamically determined based on the content available in the at least one message, user activities (e.g., user tracker information), events (e.g., time event, location event, relation event, and the like) defined in the electronic device 100, sensing data (e.g., temperature, humidity, location, and the like) sensed by sensors of the electronic device 100, received data from a server (e.g., weather, news, advertisement, and the like), a user (e.g., the participant) associated with the at least one message, a user context (e.g., appointment, health, user tone or the like), the context of the electronic device 100, or the like.
  • the response predictor 140 can be configured to predict the at least one response for the at least one message from the language model based on the at least one contextual category.
  • the language model can be, for example, N gram language model. Based on the predicted response, the response predictor 140 can be configured to display the at least one predicted response on the screen of the display 170.
  • the information manager 120 can be configured to receive the input topic from a first application.
  • the input topic can be, for example, written text, at least one received message, or the like.
  • the first application can be any one of the application from the aforementioned variety of the applications.
  • the contextual category detector 130 can be configured to identify at least one contextual event associated with a second application.
  • the contextual event can include, for example, the time event, the location event, the relation event, or the like.
  • the second application can be any of the application from the aforementioned variety of the applications.
  • the contextual event associated with the second application is dynamically determined based on at least one context indicative associated with the input topic of the first application.
  • the at least one context indicative is determined based on, for example, content available in the input topic, a context (weather application, calendar application, shopping application, etc.,) of the first application, the user activities, and the events defined in the electronic device 100.
  • the information manager 120 can be configured to receive the input topic.
  • the input topic can include, for example, a topic selected from a written communication, a topic formed based on at least one input field available in the application, the current editor, user selected content, text on the screen of the electronic device 100, and the like.
  • the contextual category detector 130 can be configured to identify at least one contextual category of the input topic.
  • the at least one contextual category of the input topic is automatically identified based on at least one context indicative.
  • the at least one context indicative is dynamically determined based on at least one of content available in the input topic, the user activities, the events defined in the electronic device 100, the user context, and a context of the electronic device 100.
  • the conversation can be sent to a server when the electronic device 100 is idle.
  • the operations of the contextual category detector 130 and the response predictor 140 can be performed in the server (remotely located). Further, the LM training is performed when the predicted responses are shared with the electronic device 100.
  • the electronic device 100 may be in communication with a remote computing device (not shown) via one or more communication networks.
  • a communication network may be a local area network (LAN), a wide area network (WAN), a mobile or cellular communication network, an extranet, an intranet, the Internet and/or the like.
  • the communication network may provide communication capability between the remote computing device and the electronic device 100.
  • the remote computing device may be a cloud computing device or a networked server located remotely from the electronic device 100.
  • the remote computing device may include similar or substantially similar hardware elements to that of the electronic device 100.
  • FIG. 4 shows exemplary hardware elements of the electronic device, but it is to be understood that other embodiments are not limited thereto.
  • the electronic device 100 may include a smaller or larger number of hardware elements.
  • the labels or names of the hardware elements are used only for illustrative purposes and do not limit the scope of the invention.
  • One or more hardware elements can be combined together to perform same or substantially similar function in the electronic device 100.
  • FIG. 5 is an overview illustrating communication among various hardware elements of the electronic device for automatically predicting the response, according to an embodiment of the present disclosure.
  • the information manager 120 can be configured to receive the at least one input, for example, the at least one message, the written communication, the topic selected from the written communication, the topic formed based on at least one input field available in an application, a received mail, a complete conversation/chat, or the like.
  • the information manager 120 can be configured to communicate the received input with the contextual category detector 130.
  • the contextual category detector 130 can include a statistical modelling manager 132, a semantic modelling manager 134, and a contextual modelling manager 136.
  • the statistical modelling manager 132 can be configured to identify one or more statistical features associated with the received input.
  • the one or more statistical features can include time bound, location, etc.
  • the semantic modeling manager 134 can be configured to identify one or more words associated with the received input. Further, the semantic modelling manager 134 can be configured to identify one or more categories of the received input. The one or more categories can be identified by selecting one or more features associated with the received input. For example, the one or more features may include the context of the electronic device 100 and user of the electronic device 100, a domain identification, a Dialog Act (DA) identification, a subject identification, a topic identification, a sentiment analysis, a point of view (PoV), a user tracker information, and the like.
  • the domain identification can include a time, a distance, a time-duration, a time-tense, a quantity, a relation, a location, and an object/person, and the like.
  • the DA identification can include statement non-opinion, an acknowledgement, apology, agree/accept, appreciation, Yes-No-question, Yes-No-answers, conventional closing, WH-questions, No answers, reject, OR clause, down player, thanking, or the like.
  • the sentiment analysis can include positive, neutral, and negative.
  • the user tracker information can include user context, user tone (formal/informal).
  • the subject identification can include subject of the at least one message.
  • the contextual modelling manager 136 can be configured to identify the user personalization information with respect to any of the application of the electronic device 100, and application context in order to extend the context of the first application in the second application.
  • the contextual modelling manager 136 can be configured to create a time bound event from the at least one received message "meet suzzane" from the messaging application (i.e., the first application).
  • the time bound event is extended to a "calendar application" (i.e., the second application). If the electronic device 100 detects the input "meet" in the UI of the calendar application, then the text "Suzzane" is automatically predicted and displayed to the user of the electronic device 100.
  • the contextual category detector 130 can be configured to communicate with the response predictor 140.
  • the response predictor 140 includes a language model 142 (hereinafter referred to as LM 142).
  • the LM 142 can be configured to include Language Model (LM) entries defined based on the contextual category of the received input. For example, if the contextual category of the received input is of type "PoV", i.e., "How do I look in a blue color shirt", then the predicted response can include, for example, text/words related to the PoV such as "this color suits you", "blue color is too dark", "look good in blue color", and the like.
  • the response predictor 140 can communicate with one or more language model (LM) databases i.e., a preload LM 502, a user LM 504, and a time bound LM 506.
  • the one or more LM databases can be configured to store the one or more LM entries.
  • the response predictor 140 can be configured to retrieve the stored LM entries from the one or more LM databases.
  • the one or more LM databases can be communicatively coupled to the storage 150 illustrated in FIG. 4.
  • the one or more LM databases can be remotely located to the electronic device 100 and can be accessed through the one or more communication networks.
  • the preload LM 502 can include the statistical LM entries trained with a plethora of corpora (user inputs) along with the semantic understanding.
  • the user LM 504 can be dynamically created on the electronic device 100 and is trained based on the user activities (i.e., text and graphical element(s) frequently used/accessed by the user in the event of responding/composing the message).
  • the LM 142 can include a separate contextual category i.e., "X" component (shown below in Table.1) along with each unigram, bigram and trigram entries.
  • the "X" component can include "X1 - domain identification component", "X2 - DA identification component", "X3 - sentiment analysis component", "X4 - PoV component", "X5 - user tracker component", ..., Xn.
  • the "X" component can be updated by training the corpus for the preload LM 502. Further, the "X" component can be updated by learning the user's activities (sentence(s) being typed, chats, conversations, emails, and the like).
  • the user-1 of the electronic device 100 receives the message "I am really sorry” from the user-2 of the electronic device (not shown).
  • when the user-1 starts typing "No need to a...", according to conventional methods and systems, all the features based on the user-typed text are extracted, such as, for example, a unigram feature (e.g., "a"), a bigram feature ("to a"), and a trigram feature ("need to a").
  • Table 2 below includes additional features extracted based on the text typed by the user.
  • the proposed method can be used to provide the predictions based on the contextual category of the received message from the user-2.
  • the proposed contextual category detector 130 can be configured to identify the contextual category of the received message i.e., the message "I am really sorry” is of type "APOLOGY".
  • LM entries corresponding to the contextual category "APOLOGY", along with the unigram, bigram, and trigram features, are retrieved and displayed to the user of the electronic device 100 (as shown in Table 3).
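  • A minimal sketch of this lookup is given below, assuming a toy in-memory LM keyed by (contextual category, n-gram history); the entry structure, scores, and back-off weighting are illustrative assumptions, not the patent's actual data format.

```python
# Toy (N+X) lookup: n-gram entries carry an extra contextual-category component "X".
# The structure, scores and weighting below are illustrative assumptions only.
from collections import defaultdict

# LM entries: (category, history) -> {candidate_word: score}
LM_ENTRIES = {
    ("APOLOGY", ("need", "to")): {"apologize": 0.6, "be": 0.2, "worry": 0.2},
    ("APOLOGY", ("to",)):        {"apologize": 0.5, "say": 0.3},
    ("APOLOGY", ()):             {"apologize": 0.4, "apology": 0.3, "be apologetic": 0.3},
}

def predict(category, typed_words, prefix="", top_k=3):
    """Merge trigram, bigram and unigram entries for the category and rank the candidates."""
    scores = defaultdict(float)
    # back off from trigram history (last 2 words) to bigram (last 1) to unigram (none)
    for n in (2, 1, 0):
        history = tuple(typed_words[-n:]) if n else ()
        for word, score in LM_ENTRIES.get((category, history), {}).items():
            if word.startswith(prefix):
                scores[word] += score * (n + 1)   # longer histories weighted higher
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Received message "I am really sorry" -> category "APOLOGY"; user has typed "No need to a"
print(predict("APOLOGY", ["no", "need", "to"], prefix="a"))  # e.g. ['apologize', 'apology']
```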
  • the contextual category detector 130 can be configured to track the user-1 activities (response to the message sent by the user of the electronic device 100, predictions selected by the user of the electronic device 100, or the like), and alter (e.g., train, update, modify, etc.) the LM 142 based on the user activities.
  • FIGS. 6A and 6B illustrate the UI for predicting a subsequent meaningful prediction during composing of the response to the received message, according to embodiments disclosed herein.
  • the proposed method can be used to automatically predict the subsequent text/word in the sentence being composed (i.e., the next meaningful word).
  • the information manager 120 can be configured to communicate the received message ("I am really sorry") with the contextual category detector 130 illustrated in FIG. 5.
  • the contextual category detector 130 can be configured to identify the at least one contextual category of the received message i.e., the contextual category is of type "Apology”. Further, the contextual category detector 130 can communicate with the response predictor 140 to retrieve the LM entries based on the contextual category "Apology”.
  • the LM 142 can be configured to identify the at least one feature (i.e., unigram, bigram and trigram) from the text input provided, by the user of the electronic device 100, during the response.
  • the LM 142 can be configured to retrieve the LM entries based on the at least one feature along with the contextual category "Apology" of the received message 600.
  • the response predictor 140 can be configured to retrieve and display at least one subsequent response 604 i.e., "apologize", "apology", “be apologetic", or the like, from the LM 142 based on the contextual category of the received message 600.
  • the sentence 606 being composed can be "No need to apologize" (as illustrated in FIG. 6B).
  • FIG. 7 is a flow diagram illustrating a method for predicting the response, according to an embodiment of the present disclosure.
  • the electronic device 100 may receive the at least one message.
  • the information manager 120 can be configured to receive the at least one message.
  • the electronic device 100 identifies the at least one contextual category of the at least one message.
  • the contextual category detector 130 can be configured to identify the at least one contextual category of the at least one message.
  • the electronic device 100 predicts the at least one response for the at least one message from the LM 142 based on the at least one contextual category.
  • the response predictor 140 can be configured to predict the at least one response for the at least one message from the LM 142 based on the at least one contextual category.
  • the electronic device 100 prioritizes the at least one predicted response.
  • the response predictor 140 can be configured to prioritize the at least one predicted response.
  • the electronic device 100 causes to display the at least one predicted response on the screen.
  • the response predictor 140 can be configured to cause to display the at least one predicted response on the screen.
  • the electronic device 100 tracks the user activities.
  • the contextual category detector 130 can be configured to track the user activities.
  • the electronic device 100 trains the LM 142.
  • the response predictor 140 can be configured to train the LM 142.
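  • The flow of FIG. 7 can be summarized by the sketch below; the rule-based detector, the toy category-keyed LM, and the function names are illustrative assumptions standing in for the contextual category detector 130, the response predictor 140, and the LM 142.

```python
# Illustrative sketch of the FIG. 7 flow; the components below are assumptions,
# not the patent's implementation.

def identify_categories(message):
    # toy stand-in for the contextual category detector 130
    text = message.lower()
    if "sorry" in text:
        return ["APOLOGY"]
    if "topper" in text or "won" in text:
        return ["APPRECIATION"]
    return ["STATEMENT"]

CATEGORY_LM = {  # toy LM entries keyed by contextual category (stand-in for LM 142)
    "APOLOGY":      [("No worries", 0.9), ("It's okay", 0.7)],
    "APPRECIATION": [("Congrats!", 0.9), ("Wow", 0.8), ("Awesome", 0.7)],
    "STATEMENT":    [("Ok", 0.5)],
}

USAGE_COUNTS = {}  # toy record of user activity used to update the LM

def predict_responses(message):
    categories = identify_categories(message)                 # identify contextual category
    candidates = [c for cat in categories for c in CATEGORY_LM.get(cat, [])]
    candidates.sort(key=lambda pair: pair[1], reverse=True)   # prioritize the predictions
    return [text for text, _ in candidates]                   # these are displayed on screen

def on_user_selection(selected):
    # track the user activity and use it to train/update the LM (here: a simple counter)
    USAGE_COUNTS[selected] = USAGE_COUNTS.get(selected, 0) + 1

print(predict_responses("Hey I got my results. I am the topper!"))  # ['Congrats!', 'Wow', 'Awesome']
```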
  • FIG. 8 illustrates a UI for responding to the message using the at least one predicted response, according to an embodiment of the present disclosure.
  • the electronic device 100 may have a message transcript 800 showing the conversation between the user of the electronic device 100 and one or more participants, such as participant 804.
  • the message transcript 800 may include a message 802, received from (an electronic device used by) the participant 804.
  • the content of the message 802 includes "Hey I got my results. I am the topper! "
  • the proposed method can be used to determine at least one contextual category of the message 802, i.e., the contextual category of the received message 802 can be, for example, "Appreciation". Further, the proposed method can be used to predict at least one response 806, i.e., "guessed it", "am happy for you", "congrats", or the like, from the LM 142 based on the contextual category LM 156.
  • FIG. 9A is a step by step illustration for predicting response for a selected message from the plurality of messages, according to an embodiment of the present disclosure.
  • the electronic device 100 may receive the at least one message.
  • the information manager 120 can be configured to receive the at least one message.
  • the display 170 can be configured to detect an input 902a (i.e., tap, gesture, or the like) on at least one message 904a ("You had an exam yesterday") from the plurality of messages.
  • the electronic device 100 may recapture one or more words.
  • the contextual category detector 130 can be configured to recapture one or more words.
  • the electronic device 100 may identify at least one contextual category of the selected words.
  • the contextual category detector 130 can be configured to identify the at least one contextual category of the selected words.
  • the electronic device 100 may use the contextual category along with the LM 142.
  • the response predictor 140 can be configured to use the contextual category along with the LM 142.
  • the electronic device 100 may compute values (i.e., LM entries) from the LM 142.
  • the response predictor 140 can be configured to compute values from the LM 142.
  • the electronic device 100 may retrieve the response predictions and next word predictions.
  • the response predictor 140 can be configured to retrieve the response predictions and next word predictions.
  • the response predictor 140 can be configured to dynamically update and display the response predictions and next word predictions 906a i.e., "It was", “Exam was”, or the like.
  • FIG. 9B is a step-by-step illustration of predicting a response for a selected input topic, according to an embodiment of the present disclosure.
  • the electronic device 100 may receive the at least one input topic.
  • the information manager 120 can be configured to receive the at least one input topic.
  • the display 170 can be configured to detect the input 902b (i.e., tap, gesture, or the like) on the input topic 904b (i.e., at least one word/text selected from the composing text).
  • the electronic device 100 may recapture the one or more words.
  • the contextual category detector 130 can be configured to recapture the one or more words.
  • the electronic device 100 may identify the at least one contextual category of the selected words.
  • the contextual category detector 130 can be configured to identify the at least one contextual category of the selected words.
  • the electronic device 100 may use the contextual category along with the LM 142.
  • the response predictor 140 can be configured to use the contextual category along with the LM 142.
  • the electronic device 100 may compute values from the LM 142.
  • the response predictor 140 can be configured to compute the values from the LM 142.
  • the electronic device 100 may retrieve the meaningful predictions based on the composing text selection.
  • the response predictor 140 can be configured to retrieve (or, predict) the meaningful predictions based on the composing text selection.
  • the response predictor 140 can be configured to dynamically update and display the meaningful predictions 906b (i.e., "It papers”, “Questions”, “answers”, or the like) based on the composing text selection.
  • FIG. 10 is a flow diagram illustrating a method for predicting the response based on the statistical modelling manager, according to embodiments as disclosed herein.
  • the electronic device 100 may receive the input topic from the first application.
  • the information manager 120 can be configured to receive the input topic from the first application.
  • the electronic device 100 may identify the at least one contextual event associated with the second application.
  • the contextual category detector 130 can be configured to identify the at least one contextual event associated with the second application.
  • the electronic device 100 may predict the at least one response for the at least one input topic from the first application based on the at least one contextual event.
  • the response predictor 140 can be configured to predict the at least one response for the at least one input topic from the first application based on the at least one contextual event.
  • the electronic device 100 may compute dynamic interpolation weights (λ1, λ2, λ3) for each of the LM databases (i.e., the preload LM 502, the user LM 504, and the time bound LM 506).
  • the dynamic-interpolation-weights can be used to prioritize words among the LM databases.
  • the electronic device 100 may find the probabilities (P_PLM, P_ULM, P_TLM) of a "WORD" from each of the LM databases (i.e., the preload LM 502, the user LM 504, and the time bound LM 506, respectively).
  • the electronic device 100 may calculate Pc (the combined probability) for each of the word(s) retrieved from each of the LM databases (i.e., LM models) and prioritize the predictions based on Pc (or based on parameters such as relevancy, sort by recent, and so on), as sketched below.
  • the response predictor 140 can be configured to calculate the Pc (combined probability) for each of the word(s) retrieved from each of the LM databases (i.e., LM models).
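  • A minimal sketch of this combination step is given below, assuming the combined probability Pc is a standard linear interpolation of the three LM probabilities; the exact formula is not spelled out here, so this form and the toy values are assumptions.

```python
# Linear interpolation of the three LM probabilities into Pc (assumed form of the combination).
def combined_probability(word, lambdas, lms):
    """lambdas = (l1, l2, l3) for the preload/user/time-bound LMs; lms = three {word: prob} dicts."""
    l1, l2, l3 = lambdas
    p_plm, p_ulm, p_tlm = (lm.get(word, 0.0) for lm in lms)
    return l1 * p_plm + l2 * p_ulm + l3 * p_tlm

preload_lm    = {"meet": 0.02, "suzzane": 0.001}
user_lm       = {"meet": 0.05, "suzzane": 0.010}
time_bound_lm = {"suzzane": 0.200}   # toy entry created from the "meet Suzzane" message

candidates = ["meet", "suzzane"]
lambdas = (0.5, 0.3, 0.2)            # dynamic interpolation weights (illustrative values)
ranked = sorted(candidates,
                key=lambda w: combined_probability(w, lambdas, (preload_lm, user_lm, time_bound_lm)),
                reverse=True)
print(ranked)  # predictions prioritized by Pc: ['suzzane', 'meet']
```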
  • the electronic device 100 may cause to display the at least one predicted response on the screen.
  • the response predictor 140 can be configured to cause to display the at least one predicted response on the screen.
  • the electronic device 100 may track the user activities.
  • the response predictor 140 can be configured to track the user activities.
  • the electronic device 100 may train the LM 142.
  • the response predictor 140 can be configured to train the LM 142 based on the user activities and the LM entries retrieved from each of the LM databases.
  • FIG. 11 is a graph illustrating computation of dynamic interpolation weights with time bound, according to an embodiment of the present disclosure.
  • Table 4 (shown below) tabulates the dynamic interpolation weights for each of the LM databases, with the time bound LM and without the time bound LM.
  • the electronic device 100 can be configured to estimate the interpolation weight (λ3) with the time bound LM using equation (7).
  • γ_TBmax: maximum interpolation weight for the time bound LM
  • T_TB: time limit for the time bound LM
  • T_O: a value that lies between 0 and T_TB
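  • Equation (7) itself is not reproduced in this text. One plausible shape, consistent with the variables above and with the decay behaviour described below (the weight starts at its maximum and falls to zero at the time limit), is a simple linear ramp-down; this is an assumption, not the patent's actual formula.

```python
# Assumed linear form of the time-bound interpolation weight; equation (7) is not
# reproduced in the source, so this is only one plausible reading of it.
def time_bound_weight(t, gamma_tb_max, t_tb):
    """Weight equals gamma_tb_max at t = 0 and decays linearly to 0 at the time limit t_tb."""
    if t >= t_tb:
        return 0.0
    return gamma_tb_max * (1.0 - t / t_tb)

# T_O = 2 hours into an 8-hour time bound, with a maximum weight of 0.4
print(time_bound_weight(t=2.0, gamma_tb_max=0.4, t_tb=8.0))  # 0.3
```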
  • FIGS. 12A and 12B illustrate a UI in which the contextual event from the received message is identified and extended from first application to second application, according to an embodiment of the present disclosure.
  • the electronic device 100 may have a message transcript 1200 showing the conversation between the user of the electronic device 100 and one or more participants, such as participant 1206.
  • the message transcript 1200 may include a message 1202 received from the participant 1206 and message 1204 sent by the user of the electronic device 100.
  • the contextual category detector 130 illustrated in FIG. 4 can be configured to identify the contextual event (i.e., fixed time bound, semantic time bound, and contextual time bound) associated with the message 1202 and the message 1204.
  • the message 1204 includes "Great! Try to Meet Suzanne!"
  • the LM entries during the fixed time bound are managed via a parabolic/linear function of time, e.g., reducing priority/frequency over time.
  • the LM entries during the semantic time bound may not be useful after the trip, and thereby the LM 142 may delete the entry by understanding the message.
  • the contextual time bound is more useful in communication-related applications and prioritizes entries based on the application context.
  • the user of the electronic device 100 may launch the calendar application for setting a reminder.
  • if the user of the electronic device 100 composes, using the keypad, a text 1208 "meet" in an input tab of the calendar application 1210, then the next response 1212 "Suzzane" can be automatically predicted and displayed on the screen (e.g., in the text prediction tab of the keypad, a default area defined by the OEM, a default area defined by the user, etc.) of the electronic device 100.
  • the proposed method of the present disclosure can be used to provide the meaningful predictions.
  • the proposed method can be used to extend the contextual event of the messaging application and provide predictions in another application.
  • FIGS. 13A and 13B illustrate another UI in which the contextual event from the received message is identified and extended from first application to second application, according to an embodiment of the present disclosure.
  • the electronic device 100 may have a message transcript 1300 showing the message received from one or more participants.
  • the message transcript 1300 may include the message 1302 received from the participant.
  • the contextual category detector 130 can be configured to identify the contextual event (i.e., fixed time bound, semantic time bound, and contextual time bound) associated with the message 1302.
  • the message 1302 includes "Buy Tropicana orange, cut mango and milk when you come home”.
  • the user of the electronic device 100 may launch (access/open) a shopping application (i.e., related application to that of the contextual event).
  • if the user of the electronic device 100 composes, using the keypad, at least one text 1304 ("Tropicana") from the message 1302 in the input tab of the shopping application, then the next word(s) 1306, "Orange", "cut mango", "Milk", or the like, can be automatically predicted and displayed on the screen (e.g., in the text prediction tab of the keypad, a default area defined by the OEM, a default area defined by the user, etc.) of the electronic device 100.
  • FIG. 14A illustrates an exemplary UI in which a contextual related application based on the received message is predicted and displayed on the screen of the electronic device, according to an embodiment of the present disclosure.
  • the user of the electronic device 100 may receive a message 1402a from one or more participants.
  • the contextual category detector 130 can be configured to detect the at least one contextual event (i.e., a contextual time bound event) associated with the message 1402a.
  • the response predictor 140 can be configured to predict and display the at least one contextual related application.
  • a related application i.e., a graphical icon 1404a of the calendar application can be predicted and displayed on the screen of the electronic device 100.
  • FIG. 14B illustrates a UI in which the predicted response for the message is displayed within the notification area of the electronic device, according to an embodiment of the present disclosure.
  • the electronic device 100 may receive a message 1402b from one or more participants.
  • the at least one predicted response 1404b for the message 1402b is automatically predicted and displayed within the notification area of the electronic device 100.
  • the proposed method can be used to provide the response predictions for the message(s) received without launching the message application.
  • FIG. 15 illustrates a UI in which multiple response messages are predicted based on contextual grouping of the related messages, according to an embodiment of the present disclosure.
  • the user of the electronic device 100 may receive messages 1502 and 1504 from a participant 1506, and a message 1508 from a participant 1510.
  • the contextual category detector 130 can be configured to identify one or more contextual categories of the messages 1502 (i.e., "You had an exam yesterday") and 1504 ("how was it"). Further, based on the one or more contextual categories (i.e., both the messages 1502 and 1504 are received from the same participant 1506, the content available in both the messages 1502 and 1504 is contextually related, and the like), the response predictor 140 can be configured to predict one or more response messages and group 1512 the one or more predicted responses. Similarly, based on the one or more contextual categories (i.e., of the message 1508, the content available in the message 1508, or the like), the response predictor 140 can be configured to predict one or more response messages and group 1514 the one or more predicted responses.
  • the proposed method can be used to provide the response prediction by considering individual or group conversations, in which one or more queries from one or more participants, one or more queries from the user of the electronic device 100, and the like are addressed.
  • FIGS. 16A and 16B illustrate a longer pattern scenario in which the meaningful response (next suggested word) is predicted in the longer pattern sentence, according to an embodiment of the present disclosure.
  • the electronic device 100 detects the input topic, i.e., the composed text of a longer sentence pattern such as "the sky above our head is."
  • the proposed contextual category detector 130 can be configured to analyze the received input topic and identify the at least one contextual category of the received input topic.
  • the response predictor 140 can be configured to predict and display the response (next word) "Blue”.
  • the LM 142 utilizes the contextual input class (to include longer patterns) along with the N-gram (trigram) language model. Unlike conventional methods and systems, the proposed method can provide the response predictions by considering only selective inputs ("the sky") and not the whole longer pattern.
  • the selective input ("party") is considered and accordingly the response "Friday night" is predicted and displayed on the screen of the electronic device 100.
  • FIG. 17 is a flow diagram illustrating a method for predicting the response by understanding input views rendered on the screen of the electronic device, according to an embodiment of the present disclosure.
  • the electronic device 100 may parse the information rendered on the screen (screen reading).
  • the contextual category detector 130 can be configured to parse the information rendered on the screen (screen reading).
  • the electronic device 100 may extract the text (i.e., hint, label, or the like) in response to parsing the screen.
  • the contextual category detector 130 can be configured to extract the text in response to parsing the screen.
  • the electronic device 100 may map the extracted text with the input views.
  • the contextual category detector 130 can be configured to map the extracted text with the input views.
  • the electronic device 100 may perform a semantic based modelling. Further, at operation 1710, the electronic device 100 prioritizes the predictions (a minimal code sketch of this screen-reading flow is given after this list).
  • FIGS. 18A to 18C illustrate a UI displaying at least one predicted response by understanding input views rendered on the screen of the electronic device, according to an embodiment of the present disclosure.
  • the electronic device 100 can be configured to parse the information rendered on the screen i.e., identifying the text rendered on the screen.
  • the texts (i.e., input views, hints, labels, or the like) on the screen such as "your name”, “Your email address”, “password”, “enter password”, “enter email”, or the like, are parsed and provided to the contextual category detector 130 as illustrated in FIG. 4.
  • the contextual category detector 130 can be configured to identify the contextual category of the parsed text, i.e., "Your name" is of category "subject", "your phone number" is of category "contacts", etc., identified from the contextual LM database.
  • the response predictor 140 illustrated in FIG. 4 can be configured to display the response predictions based on the input views/input text field in accordance with the at least one category determined.
  • the response predictions for the input text field "Your name" can be "steph", "curry", or the like.
  • FIGS. 19A to 19C illustrate a UI displaying multiple predicted responses based on at least one event associated with at least one participant, according to an embodiment of the present disclosure.
  • the proposed contextual category detector 130 can be used to identify at least one event (e.g., birthday event, anniversary event, etc.) associated with the participant 1904.
  • the at least one event can be automatically retrieved from the at least one application (e.g., calendar application, SNS application, etc.,) associated with the electronic device 100.
  • the response predictor 140 can be configured to predict, prioritize and display multiple responses. For example, if the content of the message 1902 includes "shall we go for movie?", then the response predictor 140 can be configured to provide the response predictions 1906, i.e., "Sure, we should definitely go", "movie will be good", "Sure". Further, the response predictions 1906 can include the response predicted based on the event detection, i.e., "Happy birthday buddy".
  • FIGS. 20A to 20C illustrate a UI displaying predicted responses based on the context associated with the user and the electronic device, according to an embodiment of the present disclosure.
  • the proposed contextual category detector 130 can be used to identify the context (i.e., location, weather condition, etc.) of the electronic device 100 and a user context (i.e., appointment, user tone, reminder, etc.).
  • the response predictor 140 can be configured to predict, prioritize and display multiple responses. For example, if the content of the message 2002 includes "How about trip to Goa this December?", then the response predictor 140 can be configured to provide the response predictions 2006 i.e., "Wow! Let's do it". Further, the response predictions 2006 can include the response predicted based on the context (weather forecast provided by weather application, or weather forecast provided by any other means) of the electronic device 100 i.e., "it will be completely raining.”
  • the response predictor 140 can be configured to predict, prioritize and display multiple responses 2010, i.e., "will reach in one hour", "In another", "2:45 PM", or the like.
  • the response predictor 140 can be configured to predict, prioritize and display multiple responses 2016 i.e., "How are you feeling now?", "Did you visit the doctor", or the like.
  • FIGS. 21A to 21D illustrate various tables tabulating the response predictions and next suggestive words for different samples of inputs, according to an embodiment of the present disclosure.
  • the method or the operations of the electronic device 100 may be performed by at least one computer (for example, a processor 160) which executes instructions included in at least one program from among programs maintained in a computer-readable storage medium.
  • the at least one computer may perform a function corresponding to the instructions.
  • the computer-readable storage medium may be the memory, for example.
  • a non-transitory computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system.
  • Examples of the non-transitory computer readable recording medium include a Read-Only Memory (ROM), a Random-Access Memory (RAM), Compact Disc-ROMs (CD-ROMs), magnetic tapes, floppy disks, and optical data storage devices.
  • the non-transitory computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
  • functional programs, code, and code segments for accomplishing the present disclosure can be easily construed by programmers skilled in the art to which the present disclosure pertains.
  • the various embodiments of the present disclosure as described above typically involve the processing of input data and the generation of output data to some extent.
  • This input data processing and output data generation may be implemented in hardware or software in combination with hardware.
  • specific electronic components may be employed in a mobile device or similar or related circuitry for implementing the functions associated with the various embodiments of the present disclosure as described above.
  • one or more processors operating in accordance with stored instructions may implement the functions associated with the various embodiments of the present disclosure as described above. If such is the case, it is within the scope of the present disclosure that such instructions may be stored on one or more non-transitory processor readable mediums.
  • examples of processor readable mediums include a ROM, a RAM, CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices.
  • the processor readable mediums can also be distributed over network coupled computer systems so that the instructions are stored and executed in a distributed fashion.
  • functional computer programs, instructions, and instruction segments for accomplishing the present disclosure can be easily construed by programmers skilled in the art to which the present disclosure pertains.
  • the instructions may include machine language codes created by a compiler, and high-level language codes that can be executed by a computer by using an interpreter.
  • the above-described hardware device may be configured to operate as one or more software modules to perform the operations according to various embodiments of the present disclosure, and vice versa.
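Referring back to the FIG. 17 operations listed above (parsing the information rendered on the screen, extracting the hint/label text, mapping it to the input views, semantic based modelling, and prioritizing the predictions), a minimal sketch of that screen-reading flow is given below. The hint-to-category table, the sample values and the operation numbering in the comments (apart from operation 1710 named above) follow the FIGS. 18A to 18C example and are assumptions, not the disclosed implementation.

```python
# Minimal sketch of the FIG. 17 screen-reading flow: hint/label texts parsed
# from the rendered input views are mapped to contextual LM categories, and
# predictions are prioritized per input field. The hint-to-category table and
# the sample values below are assumptions for illustration.
HINT_CATEGORIES = {                      # stand-in for the contextual LM database
    "your name": "SUBJECT",
    "your email address": "EMAIL",
    "your phone number": "CONTACTS",
}

CATEGORY_VALUES = {                      # candidate predictions per category
    "SUBJECT": ["steph", "curry"],
    "EMAIL": ["steph@example.com"],      # hypothetical sample value
    "CONTACTS": ["+1 555 0100"],         # hypothetical sample value
}

def parse_screen(rendered_views):        # assumed operations 1702-1704: parse and extract text
    return [view["hint"].strip().lower() for view in rendered_views]

def map_to_category(hint):               # assumed operation 1706: map text to input views
    return HINT_CATEGORIES.get(hint, "GENERIC")

def predictions_for(hint):               # assumed operation 1708 and operation 1710: model and prioritize
    return CATEGORY_VALUES.get(map_to_category(hint), [])

views = [{"hint": "Your name"}, {"hint": "Your email address"}]
for hint in parse_screen(views):
    print(hint, "->", predictions_for(hint))
```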

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An electronic device and a method for predicting a response are provided. The electronic device includes a display and a processor configured to receive at least one message, identify at least one contextual category of the at least one message, predict at least one response for the at least one message from a language model based on the at least one contextual category, and control the display to display the at least one predicted response.

Description

METHOD AND ELECTRONIC DEVICE FOR PREDICTING RESPONSE
The present disclosure relates to electronic devices. More particularly, the present disclosure relates to a method and an electronic device for predicting response.
In general, a word prediction technique involves an n-gram language model. The goal of the n-gram language model is to compute the probability of a sentence or sequence of words and use it to compute the probability of a suggested word (i.e., upcoming word). Typically, bigram or trigram models are used considering the speed and size of language models. The trigram model includes unigram, bigram and trigram features. The unigram feature is based on the current word being typed, the bigram feature is based on the current word being typed and the previous word that is typed, and the trigram feature is based on the current word being typed and the previous two words that are typed. However, these models cannot capture long-distance dependencies. Hence, it is difficult to track longer sentences and predictions may not be accurate.
If the n-gram language model is extended to 7-grams (for example), it may be able to track the previous 7 words and the longer sentence. This can lead to exponential growth of the number of parameters with the length of the n-gram language model and hence an increase in the complexity of training the language model, an increase in the training time of the language model, and performance-expensive storage and retrieval operations in existing systems. Hence, it is not recommended to increase the size of the n-gram language model in the existing systems. Further, these models are based on the current sentence being typed. The intention behind typing the sentence is ignored.
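As a concrete illustration of the trigram behaviour described above, the following is a minimal sketch (not part of the present disclosure) of a count-based trigram predictor: it ranks the next word using only the previous two typed words, so any intention carried by an earlier message falls outside its window.

```python
# Minimal sketch (not part of the present disclosure) of the count-based
# trigram predictor described above: the next word is ranked using only the
# previous two typed words, so context outside that window is ignored.
from collections import Counter, defaultdict

class TrigramModel:
    def __init__(self):
        self.counts = defaultdict(Counter)   # (w-2, w-1) -> Counter of next words

    def train(self, sentences):
        for sentence in sentences:
            words = ["<s>", "<s>"] + sentence.lower().split()
            for a, b, c in zip(words, words[1:], words[2:]):
                self.counts[(a, b)][c] += 1

    def predict(self, typed, top_k=3):
        words = ["<s>", "<s>"] + typed.lower().split()
        history = (words[-2], words[-1])     # only the last two words matter
        return [w for w, _ in self.counts[history].most_common(top_k)]

model = TrigramModel()
model.train(["no need to apologize", "no need to appear", "need to apologize again"])
print(model.predict("No need to"))           # ['apologize', 'appear']
```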
The above information is presented as background information only to assist with an understanding of the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present disclosure.
Aspects of the present disclosure are to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the present disclosure is to provide a method and electronic device for predicting a response.
Another object of the embodiments herein is to provide a method for receiving at least one message, and identifying at least one contextual category of the at least one message.
Another object of the embodiments herein is to provide a method for predicting at least one response for the at least one message from a language model (e.g., local language model) based on the at least one contextual category.
Another object of the embodiments herein is to provide a method for causing to display the at least one predicted response on a screen of the electronic device.
Yet another object of the embodiments herein is to provide a method for receiving an input topic from a first application and identifying at least one contextual event associated with a second application
Yet another object of the embodiments herein is to provide a method for predicting at least one response for the at least one input topic from the first application based on at least one contextual event.
Yet another object of the embodiments herein is to provide a method for causing to display the at least one predicted response on a screen of the electronic device.
Yet another object of the embodiments herein is to provide a method for receiving an input topic and identifying at least one contextual category of the input topic.
Yet another object of the embodiments herein is to provide a method for predicting at least one response for the input topic from a language model based on the at least one contextual category, and causing to display the at least one predicted response on a screen of the electronic device.
In accordance with an aspect of the present disclosure, a method for predicting response at an electronic device is provided. In accordance with another aspect of the present disclosure, a controlling method of an electronic apparatus is provided. The method includes receiving at least one message at the electronic device. Further, the method includes identifying at least one contextual category of the at least one message. Further, the method includes predicting at least one response for the at least one message from a language model based on the at least one contextual category. Furthermore, the method includes causing to display the at least one predicted response on a screen of the electronic device.
In an embodiment, the contextual category of the at least one message is automatically identified based on at least one context indicative.
In an embodiment, the at least one context indicative is dynamically determined based on at least one of content available in the at least one message, user activities, events defined in the electronic device, a user associated with the at least one message, a user context, and a context of the electronic device.
In an embodiment, the at least one message is displayed within a notification area of the electronic device.
In an embodiment, the at least one predicted response is displayed within the notification area of the electronic device.
In an embodiment, the at least one response for the at least one message is predicted in response to an input on the at least one received message, wherein the at least one predicted response corresponds to the at least one received message on which the input is received.
Accordingly, embodiments herein provide a method for predicting a response at an electronic device. The method includes receiving an input topic from a first application. Further, the method includes identifying at least one contextual event associated with a second application. Further, the method includes predicting at least one response for the at least one input topic from the first application based on at least one contextual event. Furthermore, the method includes causing to display the at least one predicted response on a screen of the electronic device.
In an embodiment, the at least one contextual event is a time bound event.
In an embodiment, the at least one contextual event associated with the second application is dynamically determined based on at least one context indicative associated with the input topic of the first application.
In an embodiment, the at least one context indicative is determined based on at least one of content available in the input topic, context of the first application, user activities, and events defined in the electronic device.
Accordingly embodiments herein provide a method for predicting a response at an electronic device. The method includes receiving an input topic. Further, the method includes identifying at least one contextual category of the input topic. Further, the method includes predicting at least one response for the input topic from a language model based on the at least one contextual category. Furthermore, the method includes causing to display the at least one predicted response on a screen of the electronic device.
In an embodiment, the input topic is one of a topic selected from a written communication, and a topic formed based on at least one input field available in an application.
In an embodiment, the at least one contextual category of the input topic is automatically identified based on at least one context indicative.
In an embodiment, the at least one context indicative is dynamically determined based on at least one of content associated with the input topic, user activities, events defined in the electronic device, a user context, and a context of the electronic device.
Accordingly embodiments herein provide an electronic device for predicting a response. The electronic device includes a context identifier configured to receive at least one message. Further, the electronic device includes a contextual category detector configured to identify at least one contextual category of the at least one message. Furthermore, the electronic device includes a response predictor configured to: predict at least one response for the at least one message from a language model based on the at least one contextual category, and cause to display the at least one predicted response on a screen.
Accordingly embodiments herein provide an electronic device for predicting a response. The electronic device includes a context identifier configured to receive an input topic from a first application. Further, the electronic device includes a contextual category detector configured to identify at least one contextual event associated with a second application. Furthermore, the electronic device includes a response predictor configured to: predict at least one response for the at least one input topic from the first application based on at least one contextual event, and cause to display the at least one predicted response on a screen of the electronic device.
Accordingly embodiments herein provide an electronic device for predicting a response. The electronic device includes a context identifier configured to receive an input topic. Further, the electronic device includes a contextual category detector configured to identify at least one contextual category of the input topic. Furthermore, the electronic device includes a response predictor configured to: predict at least one response for the input topic from a language model based on the at least one contextual category, and cause to display the at least one predicted response on a screen.
Other aspects, advantages, and salient features of the disclosure will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses various embodiments of the present disclosure.
The above and other aspects, features, and advantages of certain embodiments of the present disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
FIGS. 1A to 1D illustrate various types of N-Gram language models according to the related art;
FIG. 2 illustrates a User Interface (UI) for responding to the message using at least one predicted response, according to the related art;
FIG. 3A illustrates a schematic view of (N+X) gram language model/ (NN+X) language model, according to an embodiment of the present disclosure;
FIG. 3B illustrates the UI for responding to the message using at least one predicted response, according to an embodiment of the present disclosure;
FIG. 4 is a block diagram illustrating various hardware elements of an electronic device, according to an embodiment of the present disclosure;
FIG. 5 is an overview illustrating communication among various hardware elements of the electronic device for automatically predicting the response, according to an embodiment of the present disclosure;
FIGS. 6A and 6B illustrate a UI for predicting subsequent meaningful prediction during composing of a response to the received message, according to embodiments disclosed herein;
FIG. 7 is a flow diagram illustrating a method for predicting the response, according to an embodiment of the present disclosure;
FIG. 8 illustrates a UI for responding to the message using the at least one predicted response, according to an embodiment of the present disclosure;
FIG. 9A is a step by step illustration for predicting response for a selected message from a plurality of messages, according to an embodiment of the present disclosure;
FIG. 9B is a step by step illustration for predicting a response for a selected input topic, according to an embodiment of the present disclosure;
FIG. 10 is a flow diagram illustrating a method for predicting the response based on the statistical modelling manager, according to embodiments as disclosed herein;
FIG. 11 is a graph illustrating computation of dynamic interpolation weights with time bound, according to an embodiment of the present disclosure;
FIGS. 12A and 12B illustrate a UI in which the contextual event from the received message is identified and extended from first application to second application, according to an embodiment of the present disclosure;
FIGS. 13A and 13B illustrate another UI in which the contextual event from the received message is identified and extended from first application to second application, according to an embodiment of the present disclosure;
FIG. 14A illustrates a UI in which a contextual related application based on the received message is predicted and displayed on the screen of the electronic device, according to an embodiment of the present disclosure;
FIG. 14B illustrates a UI in which the predicted response for the message is displayed with in the notification area of the electronic device, according to an embodiment of the present disclosure;
FIG. 15 illustrates a UI in which multiple response messages are predicted based on contextual grouping of the related messages, according to an embodiment of the present disclosure;
FIGS. 16A and 16B illustrate longer pattern scenarios in which the meaningful response (i.e., next suggestion word) is predicted in a longer pattern sentence, according to an embodiment of the present disclosure;
FIG. 17 is a flow diagram illustrating a method for predicting the response by understanding input views rendered on the screen of the electronic device, according to an embodiment of the present disclosure;
FIGS. 18A to 18C illustrate the UI displaying at least one predicted response by understanding input views rendered on the screen of the electronic device, according to an embodiment of the present disclosure;
FIGS. 19A to 19C illustrate the UI displaying multiple predicted response based on at least one event associated with at least one participant, according to an embodiment of the present disclosure;
FIGS. 20A to 20C illustrate the UI displaying predicted response based on the context associated with the user and the electronic device, according to an embodiment of the present disclosure; and
FIGS. 21A to 21D illustrate various tables tabulating the response predictions and next suggestive words for different samples of inputs, according to an embodiment of the present disclosure.
Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the present disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the bibliographical meanings, but, are merely used by the inventor to enable a clear and consistent understanding of the present disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the present disclosure is provided for illustration purpose only and not for the purpose of limiting the present disclosure as defined by the appended claims and their equivalents.
It is to be understood that the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "a component surface" includes reference to one or more of such surfaces.
By the term "substantially" it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
Various embodiments of the present disclosure described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments.
The term "or" as used herein refers to a non-exclusive or, unless otherwise indicated. The examples used herein are intended merely to facilitate an understanding of ways in which the various embodiments herein can be practiced and to further enable those skilled in the art to practice the various embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the various embodiments herein.
As is traditional in the field, various embodiments may be described and illustrated in terms of blocks which carry out a described function or functions. These blocks, which may be referred to herein as units or modules or the like, are physically implemented by analog and/or digital circuits, such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits and the like, and may optionally be driven by firmware and/or software. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports, such as printed circuit boards and the like. The circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block. Each block of the various embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the disclosure. Likewise, the blocks of the various embodiments may be physically combined into more complex blocks without departing from the scope of the disclosure.
Accordingly embodiments herein provide a method for predicting response at an electronic device. The method includes receiving at least one message at the electronic device. Further, the method includes identifying at least one contextual category of the at least one message. Further, the method includes predicting at least one response for the at least one message from a language model based on the at least one contextual category. Furthermore, the method includes causing to display the at least one predicted response on a screen of the electronic device.
In the related art, the predictions of word(s), graphical element, or the like are determined and updated only during typing. For example, consider a scenario in which a user of the electronic device receives a message from an "X" source. Once the electronic device detects an input in the event of responding to the message, a keyboard may automatically launch, aiding the user in composing the response to the message. Further, as the user starts composing the response for the message, the existing systems may determine the text/word during typing and predict the related text/words, graphical element, or the like. Thus, the predicted words are determined and updated only during typing. Unlike the conventional methods and systems, the proposed method can be used to provide the prediction of words even before the user starts composing the response to the message.
Generally, only default word(s)/text/graphical element, or the like are provided to the user of the electronic device prior to composing the response to the received message. Yet again, the default words are not meaningful (or are unrelated) to the context of the conversation/the received message. According to various embodiments of the present disclosure, the proposed method can be used to predict the words which are relevant to the context of the conversation/the received message, thus improving the user experience by providing the relevant and related graphical elements while composing.
Accordingly, the proposed method of the present disclosure can be used to provide predictions for longer sentences or providing predictions for the sentences being typed. The proposed method can be used to perform the word prediction based on the contextual input class.
In an embodiment of the present disclosure, when the user has to respond to multiple messages, upon selecting a thread, relevant predictions shall be provided. In another aspect, message or thread selection can be done automatically.
FIGS. 1A to 1D illustrate various types of N-Gram language models, according to the related art.
Referring to the FIG. 1A, the N-Gram language model is driven from equation-1 (shown below). The N-Gram language model includes a general n-gram language model that predicts/suggests the current word based on the previous set of words. Thus, the N-Gram language model can determine the set of words associated with input-1 and input-2, and provide the predictions/suggestions based on the determined set of words associated with the input-1 and the input-2.
MathFigure 1 (standard n-gram notation)
P(w_i \mid w_1, \dots, w_{i-1}) \approx P(w_i \mid w_{i-n+1}, \dots, w_{i-1})
For example, if the electronic device 100 detects the input-1 as "Friday" and further detects the input-2 as "night", then the predicted/suggested words can include, for example, "is", "are", "so", or the like, which are not meaningful.
Referring to FIG. 1B, the N-Gram language model for a class is driven from equation-2 (shown below). The N-Gram Model includes predicting/suggesting the current word based on previous set of words and their respective classes.
MathFigure 2 (standard class-based n-gram notation)
P(w_i \mid w_{i-n+1}, \dots, w_{i-1}) \approx P(w_i \mid c_i)\, P(c_i \mid c_{i-n+1}, \dots, c_{i-1})
Referring to FIG. 1C, the N-Gram language model for a phrase is driven from equation-3 (shown below). The N-Gram language model includes predicting one or more words (phrase) based on the previous set of words.
MathFigure 3 (standard phrase n-gram notation)
P(w_i, \dots, w_{i+k} \mid w_{i-n+1}, \dots, w_{i-1})
Referring to FIG.1D, the N-Gram language model for a phrase class is driven from equation-4 (shown below). The N-Gram language model includes predicting one or more words (phrase) based on the previous set of words and their respective classes.
MathFigure 4 (standard phrase-class n-gram notation)
P(w_i, \dots, w_{i+k} \mid w_{i-n+1}, \dots, w_{i-1}) \approx P(w_i, \dots, w_{i+k} \mid c_i, \dots, c_{i+k})\, P(c_i, \dots, c_{i+k} \mid c_{i-n+1}, \dots, c_{i-1})
FIG. 2 illustrates a User Interface (UI) for responding to the message using at least one predicted response, according to the related art.
Referring to FIG. 2, the electronic device 100 may have a message transcript 200 showing a conversation between the user of the electronic device 100 and one or more participants, such as participant 204. The message transcript 200 may include a message 202 received from (an electronic device used by) the participant 204.
The content of the message 202 "Hey I got my results. Looks like I am the topper!:D" conveys the happiness of the participant 204 to the user of the electronic device 100. If the user of the electronic device 100 intends to respond to the message 202, according to the existing mechanisms, only a default graphical element 206, i.e., "Ok", "I", or the like, is predicted and displayed on the screen of the electronic device 100. Alternately, the default graphical element 206 may be prone to change (i.e., update) as the user starts typing (i.e., responding) to the message 202.
Further, in regard to the user typing, i.e., if the n-gram (or, neural net (NN)) is extended to 7-grams (for example), it may be able to track the previous 7 words and the longer sentence. This can lead to exponential growth of the number of parameters with the length of the n-gram language model (or, NN language model) and hence an increase in the complexity of training the language model (N/n gram, NN gram), an increase in the training time of the language model, and performance-expensive storage and retrieval operations in the existing systems.
FIG. 3A illustrates a schematic view of a (N+X) gram language model/ (NN+X) language model, according to an embodiment of the present disclosure.
In an embodiment of the present disclosure, the electronic device 100 can utilize the contextual category of the input(s), i.e., input-1 and input-2, along with the bigram or trigram features of the input(s); this can be derived using equations (5) and (6).
MathFigure 5 (bigram feature with contextual category X)
P(w_i \mid w_{i-1}, X)
MathFigure 6 (trigram feature with contextual category X)
P(w_i \mid w_{i-2}, w_{i-1}, X)
For example, the contextual category of the input(s) can be identified by parsing the screen, i.e., using parts of speech associated with the contents available on the screen, sentence classification, a dependency parser, or the like.
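A hedged sketch of how equations (5) and (6) could be realized is given below: the next-word counts are keyed on the n-gram history together with a contextual category "X". The category names and training data are illustrative assumptions, not the trained model of the disclosure.

```python
# Hedged sketch of the (N+X) idea in equations (5) and (6): the next-word
# counts are keyed on the n-gram history AND a contextual category "X"
# (e.g., "APPRECIATION", "APOLOGY"). Category names and data are illustrative.
from collections import Counter, defaultdict

class ContextualNGram:
    def __init__(self):
        self.counts = defaultdict(Counter)   # (w-2, w-1, X) -> Counter of next words

    def observe(self, sentence, category):
        words = ["<s>", "<s>"] + sentence.lower().split()
        for a, b, c in zip(words, words[1:], words[2:]):
            self.counts[(a, b, category)][c] += 1

    def predict(self, typed, category, top_k=3):
        words = ["<s>", "<s>"] + typed.lower().split()
        return [w for w, _ in
                self.counts[(words[-2], words[-1], category)].most_common(top_k)]

lm = ContextualNGram()
lm.observe("wow congrats awesome", "APPRECIATION")
lm.observe("no need to apologize", "APOLOGY")
print(lm.predict("", "APPRECIATION"))        # ['wow'] - same empty history, category-specific
print(lm.predict("no need to", "APOLOGY"))   # ['apologize']
```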
FIG. 3B illustrates the UI for responding to the message using at least one predicted response, according to an embodiment of the present disclosure.
In an embodiment of the present disclosure, the electronic device 100 can provide the meaningful predictions before the user of the electronic device 100 starts responding (i.e., typing) to the message. The meaningful predictions are based on the context indicative of the received message. Further, the proposed method can be used to provide at least one predicted response from a language model based on the at least one contextual category of the message. Thus, the language model (e.g., "N" gram language model/NN language model) utilizes the contextual category ("X") of the message (e.g., (N+X)/(NN+X)) as illustrated in FIG. 3A, thereby reducing the complexity in training the language model, reducing the training time of the language model, and reducing the performance-expensive storage and retrieval operations.
Referring to FIG. 3B, the electronic device 100 may have a message transcript 300 showing a conversation between the user of the electronic device 100 and one or more participants, such as participant 304. The message transcript 300 may include a message 302, received from (an electronic device used by) the participant 304.
The content of the message 302 "Hey I got my results. Looks like I am the topper!:D" conveys the happiness of the participant 304 to the user of the electronic device 100. Unlike conventional methods and systems, the proposed method can be used to determine at least one contextual category of the message 302, i.e., the contextual category of the received message 302 can be, for example, "Appreciation". Further, the proposed method can be used to predict at least one response 306, i.e., "Wow", "congrats", "Awesome", or the like, from the language model based on the at least one contextual category.
In an embodiment of the present disclosure, the proposed method can be used to predict and display the at least one response even before the user starts composing the response message, thus improving the user experience by displaying the meaningful predictions for the message 302.
FIG. 4 is a block diagram illustrating various hardware elements of the electronic device, according to an embodiment of the present disclosure.
Referring to FIG. 4, the electronic device 100 can include, for example, a mobile phone, a smart phone, Personal Digital Assistants (PDAs), a tablet, a wearable device, a computer, a laptop, etc. In an embodiment of the present disclosure, the electronic device 100 can include a display and a touch-sensitive surface.
The electronic device 100 may support a variety of applications, such as a messaging application, a calendar application, a browser application, a word processing application, a telephone application, an e-mail application, an instant messaging application, a Short Message Service (SMS) message, a Multimedia Message Service (MMS) message, or the like. Further, the variety of applications may optionally require at least one of a keypad, a keyboard, a touch sensitive surface, or the like, for interacting with at least one feature of the at least one application. For example, adding a reminder is a feature of the calendar application, and message composing is a feature of the messaging application.
The electronic device 100 may include a communicator 110, an information manager 120, a contextual category detector 130, and a response predictor 140. Further, the electronic device 100 may include a processor 160, (for example; a hardware unit, an apparatus, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), etc.,) communicatively coupled to a storage (memory) 150 (e.g., a volatile memory and/or a non-volatile memory). The storage 150 may include storage locations configured to be addressable through the processor 160. The information manager 120, the contextual category detector 130, and the response predictor 140 may be coupled with the processor 160. The information manager 120, the contextual category detector 130, and the response predictor 140 may be implemented by the processor 160.
The storage 150 can be coupled (or, communicatively coupled) with the processor 160, the communicator 110, the information manager 120, the contextual category detector 130, and the response predictor 140. In another embodiment, the storage 150 can be remotely located to that of the processor 160, the communicator 110, the information manager 120, the contextual category detector 130, and the response predictor 140.
Furthermore, the electronic device 100 includes a display 170 for displaying content on the screen of the electronic device 100. In an embodiment of the present disclosure, the display 170 can be, for example, a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, a Light-Emitting Diode (LED) display, an Electroluminescent Display (ELD), a Field Emission Display (FED), a Light-Emitting Polymer Display (LPD), etc. The display 170 can be configured to display the one or more UI of the variety of applications. The display 170 can be coupled (or, communicatively coupled) with the processor 160 and the storage 150. Further, the display 170 can be coupled (or, communicatively coupled) with the information manager 120, the contextual category detector 130, and the response predictor 140.
The communicator 110 facilitates communication with other devices over one or more external ports (e.g., Universal Serial Bus (USB), FIREWIRE, etc.). The external port is adapted for coupling directly to other electronic devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). Further, the communicator 110 facilitates communication with the internal hardware elements of the electronic device 100.
The information manager 120, coupled with the communicator 110, can be configured to receive at least one message. The at least one message can be at least one SMS message, SNS message, and the like that are associated with at least one application from the aforementioned variety of applications. The at least one message can include at least one content.
Further, the electronic device 100 can include a text input module (not shown) which may be a GUI component displayed on the screen of the display 170. The GUI component can be, for example, virtual keypad, virtual keyboard, soft keyboards, and the like for entering the text in the variety of applications.
In an embodiment of the present disclosure, the electronic device 100 can include a Global positioning system (GPS) module (not shown) for determining the location of the electronic device 100 and provide this information to the variety of applications running in the electronic device 100. Further, the electronic device 100 can include one or more sensors e.g., accelerometer sensor, proximity sensor, temperature sensor, or the like. The electronic device 100 can be configured to determine the context (e.g. weather related information, traffic related information, and the like) of the electronic device 100 using the one or more sensors in combination with the GPS module.
Further, the contextual category detector 130 can be configured to identify the at least one contextual category of the at least one message. In an embodiment of the present disclosure, the at least one contextual category of the at least one message is automatically identified based on at least one context indicative.
In an embodiment of the present disclosure, the at least one context indicative is dynamically determined based on the content available in the at least one message, user activities (e.g., user tracker information), events (e.g., time event, location event, relation event, and the like) defined in the electronic device 100, sensing data (e.g., temperature, humidity, location, and the like) sensed by sensors of the electronic device 100, received data from a server (e.g., weather, news, advertisement, and the like), a user (e.g., the participant) associated with the at least one message, a user context (e.g., appointment, health, user tone or the like), the context of the electronic device 100, or the like.
The response predictor 140 can be configured to predict the at least one response for the at least one message from the language model based on the at least one contextual category. In an embodiment of the present disclosure, the language model can be, for example, an N-gram language model. Based on the predicted response, the response predictor 140 can be configured to display the at least one predicted response on the screen of the display 170.
In another embodiment, the information manager 120 can be configured to receive the input topic from a first application. The input topic can be, for example, written text, at least one received message, or the like. The first application can be any one of the applications from the aforementioned variety of the applications.
Further, the contextual category detector 130 can be configured to identify at least one contextual event associated with a second application. In an embodiment of the present disclosure, the contextual event can include, for example, the time event, the location event, the relation event, or the like. The second application can be any one of the applications from the aforementioned variety of the applications. In an embodiment of the present disclosure, the contextual event associated with the second application is dynamically determined based on at least one context indicative associated with the input topic of the first application.
In an embodiment of the present disclosure, the at least one context indicative is determined based on, for example, content available in the input topic, a context (weather application, calendar application, shopping application, etc.,) of the first application, the user activities, and the events defined in the electronic device 100.
In yet another embodiment, the information manager 120 can be configured to receive the input topic. The input topic can include, for example, a topic selected from a written communication, a topic formed based on at least one input field available in the application, the current editor, user selected content, text on the screen of the electronic device 100, and the like.
Further, the contextual category detector 130 can be configured to identify at least one contextual category of the input topic. In an embodiment of the present disclosure, the at least one contextual category of the input topic is automatically identified based on at least one context indicative. In an embodiment of the present disclosure, the at least one context indicative is dynamically determined based on at least one of content available in the input topic, the user activities, the events defined in the electronic device 100, the user context, and a context of the electronic device 100.
In another aspect, the conversation can be sent to a server when the electronic device 100 is idle. The operations of the contextual category detector 130 and the response predictor 140 can be performed in the server (remotely located). Further, the LM training is performed when the predicted responses are shared with the electronic device 100.
In an embodiment of the present disclosure, the electronic device 100 may be in communication with a remote computing device (not shown) via one or more communication networks. A communication network may be a local area network (LAN), a wide area network (WAN), a mobile or cellular communication network, an extranet, an intranet, the Internet and/or the like. In an embodiment of the present disclosure, the communication network may provide communication capability between the remote computing device and the electronic device 100.
In an embodiment of the present disclosure, the remote computing device may be a cloud computing device or a networked server located remotely from the electronic device 100. The remote computing device may include similar or substantially similar hardware elements to that of the electronic device 100.
FIG. 4 shows exemplary hardware elements of the electronic device, but it is to be understood that other embodiments are not limited thereto. In other embodiments, the electronic device 100 may include a smaller or larger number of hardware elements. Further, the labels or names of the hardware elements are used only for illustrative purposes and do not limit the scope of the invention. One or more hardware elements can be combined together to perform the same or substantially similar function in the electronic device 100.
FIG. 5 is an overview illustrating communication among various hardware elements of the electronic device for automatically predicting the response, according to an embodiment of the present disclosure.
Referring to FIG. 5, the information manager 120 can be configured to receive the at least one input for example, the at least one message, the written communication, the topic selected from the written communication, the topic formed based at least one input field available in the application, the topic formed based at least one input field available in the applications, received mail, complete conversation/chat, or the like. The information manager 120 can be configured to communicate the received input with the contextual category detector 130.
In an embodiment of the present disclosure, the contextual category detector 130 can include a statistical modelling manager 132, a semantic modelling manager 134, and a contextual modelling manager 136.
The statistical modelling manager 132 can be configured to identify one or more statistical features associated with the received input. For example, the one or more statistical features can include time bound, location, etc.
The semantic modeling manager 134 can be configured to identify one or more words associated with the received input. Further, the semantic modelling manager 134 can be configured to identify one or more categories of the received input. The one or more categories can be identified by selecting one or more features associated with the received input. For example, the one or more features may include the context of the electronic device 100 and user of the electronic device 100, a domain identification, a Dialog Act (DA) identification, a subject identification, a topic identification, a sentiment analysis, a point of view (PoV), a user tracker information, and the like.
For example, the domain identification can include a time, a distance, a time-duration, a time-tense, a quantity, a relation, a location, and an object/person, and the like. For example, the DA identification can include statement non-opinion, an acknowledgement, apology, agree/accept, appreciation, Yes-No-question, Yes-No-answers, conventional closing, WH-questions, No answers, reject, OR clause, down player, thanking, or the like.
For example, the sentiment analysis can include positive, neutral, and negative. For example, the user tracker information can include user context, user tone (formal/informal). For example, the subject identification can include subject of the at least one message.
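As one possible (and deliberately simplified) illustration of the semantic modelling manager 134, the following sketch maps a message to contextual categories with keyword rules; the actual detector may use any classifier, and the rule table below is an assumption that only mirrors the DA and sentiment examples listed above.

```python
# Deliberately simplified sketch of contextual category detection: a keyword/
# rule pass standing in for the semantic modelling manager 134. The real
# detector may use any classifier; the rule table is an assumption.
import re

RULES = [
    (r"\bsorry\b|\bapolog",                 "APOLOGY"),
    (r"\bcongrat|\btopper\b|\bwell done\b", "APPRECIATION"),
    (r"\bthank",                            "THANKING"),
    (r"\?\s*$|\bhow\b|\bwhat\b|\bwhen\b",   "WH_QUESTION"),
]

def detect_contextual_categories(message):
    text = message.lower().strip()
    hits = [label for pattern, label in RULES if re.search(pattern, text)]
    return hits or ["STATEMENT_NON_OPINION"]   # fallback dialog act

print(detect_contextual_categories("I am really sorry"))                       # ['APOLOGY']
print(detect_contextual_categories("Hey I got my results. I am the topper!"))  # ['APPRECIATION']
```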
The contextual modelling manager 136 can be configured to identify the user personalization information with respect to any of the applications of the electronic device 100, and the application context, in order to extend the context of the first application in the second application. For example, the contextual modelling manager 136 can be configured to create a time bound event from the at least one received message "meet suzzane" from the messaging application (i.e., the first application). Thus, whenever the user of the electronic device 100 launches the "calendar application" (i.e., the second application) for setting a reminder, the time bound event is extended to the calendar application. If the electronic device 100 detects the input "meet" in the UI of the calendar application, then the text "Suzzane" is automatically predicted and displayed to the user of the electronic device 100.
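A minimal sketch of this cross-application extension, assuming a naive "meet <Name>" trigger and a fixed 24-hour expiry, is shown below; the field names and the expiry policy are illustrative only.

```python
# Hedged sketch of extending a time bound contextual event across applications,
# following the "meet Suzzane" example: an entry learned from the messaging
# application (first application) is consulted when the user types in the
# calendar application (second application). Fields and expiry are assumptions.
import time

class ContextualEventStore:
    def __init__(self):
        self.events = []   # each: {"trigger", "completion", "source_app", "expires_at"}

    def add_from_message(self, message, source_app, ttl_seconds=24 * 3600):
        words = message.split()
        for i, word in enumerate(words[:-1]):
            if word.lower() == "meet":               # naive time bound event trigger
                self.events.append({
                    "trigger": "meet",
                    "completion": words[i + 1],
                    "source_app": source_app,
                    "expires_at": time.time() + ttl_seconds,
                })

    def predict(self, typed_word, target_app):
        # an entry learned in the source application is also offered in the
        # target application, until it expires
        now = time.time()
        return [e["completion"] for e in self.events
                if e["trigger"] == typed_word.lower() and e["expires_at"] > now]

store = ContextualEventStore()
store.add_from_message("Will meet Suzzane at 5 pm", "messaging")
print(store.predict("meet", "calendar"))   # ['Suzzane']
```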
Further, the contextual category detector 130 can be configured to communicate with the response predictor 140. The response predictor 140 includes a language model 142 (hereinafter used as LM 142). The LM 142 can be configured to include Language Model (LM) entries defined based on the contextual category of the received input. For example, if the contextual category of the received input is of type "PoV", i.e., "How do I look in a blue color shirt", then the predicted response can include, for example, text/words related to the PoV such as "this color suits you", "blue color is too dark", "looks good in blue color", and the like.
The response predictor 140 can communicate with one or more language model (LM) databases i.e., a preload LM 502, a user LM 504, and a time bound LM 506. The one or more LM databases can be configured to store the one or more LM entries. The response predictor 140 can be configured to retrieve the stored LM entries from the one or more LM databases. The one or more LM databases can be communicatively coupled to the storage 150 illustrated in FIG. 4.
In another embodiment, the one or more LM databases can be remotely located to the electronic device 100 and can be accessed through the one or more communication networks.
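One way the three databases could be combined, sketched here under assumed interpolation weights, is shown below; the linear decay applied to the time bound LM 506 only illustrates the idea of dynamic interpolation weights with time bound (FIG. 11) and is not the disclosed weighting.

```python
# Sketch (under assumed weights) of mixing the preload LM 502, user LM 504 and
# time bound LM 506: each database scores a candidate and the scores are
# combined with interpolation weights. The linear decay of the time bound
# weight only illustrates dynamic interpolation weights with time bound.
import time

def interpolate(candidate, history, preload_lm, user_lm, time_bound_lm,
                weights=(0.5, 0.3, 0.2), created_at=None, ttl=24 * 3600):
    w_pre, w_user, w_time = weights
    if created_at is not None:
        age = max(0.0, time.time() - created_at)
        w_time *= max(0.0, 1.0 - age / ttl)   # reduce time-bound influence over time
    key = (history, candidate)
    return (w_pre * preload_lm.get(key, 0.0)
            + w_user * user_lm.get(key, 0.0)
            + w_time * time_bound_lm.get(key, 0.0))

# toy databases mapping (history, candidate) -> probability
preload = {(("need", "to"), "apologize"): 0.10}
user    = {(("need", "to"), "apologize"): 0.40}
timed   = {(("need", "to"), "apologize"): 0.90}
score = interpolate("apologize", ("need", "to"), preload, user, timed,
                    created_at=time.time() - 3600)
print(f"combined score: {score:.3f}")
```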
The preload LM 502 can include the statistical LM entries trained with a plethora of corpora (user inputs) along with the semantic understanding. The user LM 504 can be dynamically created on the electronic device 100 and is trained based on the user activities (i.e., text and graphical element(s) frequently used/accessed by the user in the event of responding/composing the message). Thus, the LM 142 can include a separate contextual category, i.e., an "X" component (shown below in Table 1), along with each unigram, bigram and trigram entry. For example, the LM 142 can be (N+X)/(NN+X), where N = N gram and NN = Neural net.
Table 1 Language Model (LM) Category
LM entry | Feature | "X" component | Frequency
For example, the "X" component can include "X1-domain identification component", "X2- DA identification component", "X3- sentimental analysis component", "X4-PoV component", "X5-user tracker component"....Xn. The "X". In an embodiment of the present disclosure, the "X" component can be updated by training the corpus for preload LM 502. Further, the "X" component can be updated by learning user's activities (Sentence(s) being typed, chat, conversation, emails and the like).
For example, in the conventional methods and systems, the user-1 of the electronic device 100 receives the message "I am really sorry" from the user-2 of another electronic device (not shown). In the course of responding to the received message, the user-1 starts typing "No need to a..."; according to the conventional methods and systems, all the features based on the user-typed text are extracted, such as a unigram feature (e.g., "a"), a bigram feature ("to a"), and a trigram feature ("need to a"). Further, Table 2 below includes additional entries extracted based on the text typed by the user.
Table 2 Text extraction
Token             | Feature | Probability
Alright           | UNI     |
Apocalypse        | UNI     |
apologize         | UNI     |
to_apologize      | BI      |
to_appear         | BI      |
need_to_apologize | TRI     |
need_to_appear    | TRI     |
Unlike conventional methods and systems, the proposed method can be used to provide the predictions based on the contextual category of the received message from the user-2. The proposed contextual category detector 130 can be configured to identify the contextual category of the received message, i.e., the message "I am really sorry" is of type "APOLOGY". Hence, the LM entries corresponding to the contextual category "APOLOGY", along with the unigram, the bigram, and the trigram features, are retrieved and displayed to the user of the electronic device 100 (as shown in Table 3).
Table 3 LM entries
Token             | Feature | DA      | Domain | Probability
Alright           | UNI     | APOLOGY |        |
apologize         | UNI     | APOLOGY |        |
to_apologize      | BI      | APOLOGY |        |
need_to_apologize | TRI     | APOLOGY |        |
Further, the contextual category detector 130 can be configured to track the user-1 activities (response to the message sent by the user of the electronic device 100, predictions selected by the user of the electronic device 100, or the like), and alter (e.g., train, update, modify, etc.) the LM 142 based on the user activities.
FIGS. 6A and 6B illustrate the UI for predicting a subsequent meaningful prediction during composing of the response to the received message, according to embodiments disclosed herein.
For example, consider a scenario in which the user of the electronic device 100 receives a message 600, i.e., "I am really sorry", from one or more participants 602. The user of the electronic device 100 may intend to respond to the received message and starts typing/composing the text, i.e., "No need to a?" Accordingly, the proposed method can be used to automatically predict the subsequent text/word in the sentence being composed (i.e., the next meaningful word).
In order to predict the subsequent text/word in the sentence being composed, the information manager 120 can be configured to communicate the received message ("I am really sorry") to the contextual category detector 130 illustrated in FIG. 5. The contextual category detector 130 can be configured to identify the at least one contextual category of the received message, i.e., the contextual category is of type "Apology". Further, the contextual category detector 130 can communicate with the response predictor 140 to retrieve the LM entries based on the contextual category "Apology".
The LM 142 can be configured to identify the at least one feature (i.e., unigram, bigram, and trigram) from the text input provided by the user of the electronic device 100 during the response. Thus, the LM 142 can be configured to retrieve the LM entries based on the at least one feature along with the contextual category "Apology" of the received message 600. Hence, the response predictor 140 can be configured to retrieve and display at least one subsequent response 604, i.e., "apologize", "apology", "be apologetic", or the like, from the LM 142 based on the contextual category of the received message 600. Hence, for example, the sentence 606 being composed can be "No need to apologize" (as illustrated in FIG. 6B).
FIG. 7 is a flow diagram illustrating a method for predicting the response, according to an embodiment of the present disclosure.
Referring to FIG.7, at operation 702, the electronic device 100 may receive the at least one message. For example, in the electronic device 100 as illustrated in FIG.4, the information manager 120 can be configured to receive the at least one message.
At operation 704, the electronic device 100 identifies the at least one contextual category of the at least one message. For example, in the electronic device 100 as illustrated in FIG.4, the contextual category detector 130 can be configured to identify the at least one contextual category of the at least one message.
At operation 706, the electronic device 100 predicts the at least one response for the at least one message from the LM 142 based on the at least one contextual category. For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to predict the at least one response for the at least one message from the LM 142 based on the at least one contextual category.
At operation 708, the electronic device 100 prioritizes the at least one predicted response. For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to prioritize the at least one predicted response.
At operation 710, the electronic device 100 causes to display the at least one predicted response on the screen. For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to cause to display the at least one predicted response on the screen.
At operation 712, the electronic device 100 tracks the user activities. For example, in the electronic device 100 as illustrated in FIG.4, the contextual category detector 130 can be configured to track the user activities.
At operation 714, the electronic device 100 trains the LM 142. For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to train the LM 142.
The various actions, acts, blocks, steps, etc., as illustrated in FIG. 7 may be performed in the order presented, in a different order, or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, etc., may be omitted, added, modified, skipped, etc., without departing from the scope of the disclosure.
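As a summary of operations 702 to 714, the flow can be sketched in a few lines of Python; each callable here is a stub standing in for the information manager 120, the contextual category detector 130, the response predictor 140, and the LM 142, and the stub outputs are illustrative:

    def handle_incoming(message, lm, detect_category, predict, prioritize, display, track, train):
        category = detect_category(message)          # operation 704: identify contextual category
        responses = predict(message, lm, category)   # operation 706: predict from the LM
        ranked = prioritize(responses)               # operation 708: prioritize predictions
        display(ranked)                              # operation 710: display on the screen
        activity = track()                           # operation 712: track user activities
        train(lm, activity)                          # operation 714: train the LM
        return ranked

    ranked = handle_incoming(
        "I am really sorry", lm={},
        detect_category=lambda m: "APOLOGY",
        predict=lambda m, lm, c: ["No need to apologize", "It's okay"],
        prioritize=lambda r: sorted(r),
        display=print,
        track=lambda: {"selected": "It's okay"},
        train=lambda lm, a: None)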
FIG. 8 illustrates a UI for responding to the message using the at least one predicted response, according to an embodiment of the present disclosure.
Referring to FIG. 8, the electronic device 100 may have a message transcript 800 showing the conversation between the user of the electronic device 100 and one or more participants, such as participant 804. The message transcript 800 may include a message 802, received from (an electronic device used by) the participant 804.
The content of the message 802 includes "Hey I got my results. I am the topper!" In an embodiment of the present disclosure, the proposed method can be used to determine at least one contextual category of the message 802, i.e., the contextual category of the received message 802 can be, for example, "Appreciation". Further, the proposed method can be used to predict at least one response 806, i.e., "guessed it", "am happy for you", "congrats", or the like, from the LM 142 and the contextual category LM 156.
FIG. 9A is a step-by-step illustration of predicting a response for a message selected from the plurality of messages, according to an embodiment of the present disclosure.
At operation 910a, the electronic device 100 may receive the at least one message. For example, in the electronic device 100 as illustrated in the FIG.4, the information manager 120 can be configured to receive the at least one message.
For example, referring to the UI, the display 170 can be configured to detect an input 902a (i.e., tap, gesture, or the like) on at least one message 904a ("You had an exam yesterday") from the plurality of messages.
At operation 912a, the electronic device 100 may recapture one or more words. For example, in the electronic device 100 as illustrated in FIG.4, the contextual category detector 130 can be configured to recapture one or more words.
At operation 914a, the electronic device 100 may identify at least one contextual category of the selected words. For example, in the electronic device 100 as illustrated in FIG.4, the contextual category detector 130 can be configured to identify the at least one contextual category of the selected words.
At operation 916a, the electronic device 100 may use the contextual category along with the LM 142. For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to use the contextual category along with the LM 142.
At operation 918a, the electronic device 100 may compute values (i.e., LM entries) from the LM 142. For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to compute values from the LM 142.
At operation 920a, the electronic device 100 may retrieve the response predictions and next word predictions. For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to retrieve the response predictions and next word predictions.
Thus, based on the user input 902a on the at least one message 904a, the response predictor 140 can be configured to dynamically update and display the response predictions and next word predictions 906a i.e., "It was", "Exam was", or the like.
FIG. 9B is a step-by-step illustration of predicting a response for a selected input topic, according to an embodiment of the present disclosure.
At operation 910b, the electronic device 100 may receive the at least one input topic. For example, in the electronic device 100 as illustrated in FIG.4, the information manager 120 can be configured to receive the at least one input topic.
For example, referring to the UI, the display 170 can be configured to detect the input 902b (i.e., tap, gesture, or the like) on the input topic 904b (i.e., at least one word/text selected from the composing text).
At operation 912b, the electronic device 100 may recapture the one or more words. For example, in the electronic device 100 as illustrated in FIG.4, the contextual category detector 130 can be configured to recapture the one or more words.
At operation 914b, the electronic device 100 may identify the at least one contextual category of the selected words. For example, in the electronic device 100 as illustrated in FIG.4, the contextual category detector 130 can be configured to identify the at least one contextual category of the selected words.
At operation 916b, the electronic device 100 may use the contextual category along with the LM 142. For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to use the contextual category along with the LM 142.
At operation 918b, the electronic device 100 may compute values from the LM 142. For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to compute the values from the LM 142.
At operation 920b, the electronic device 100 may retrieve the meaningful predictions based on the composing text selection. For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to retrieve (or, predict) the meaningful predictions based on the composing text selection.
Thus, based on the user input 902b on the at least one composing text, the response predictor 140 can be configured to dynamically update and display the meaningful predictions 906b (i.e., "It papers", "Questions", "answers", or the like) based on the composing text selection.
FIG. 10 is a flow diagram illustrating a method for predicting the response based on the statistical modelling manager, according to embodiments disclosed herein.
Referring to FIG. 10, at operation 1002, the electronic device 100 may receive the input topic from the first application. For example, in the electronic device 100 as illustrated in FIG.4, the information manager 120 can be configured to receive the input topic from the first application.
At operation 1004, the electronic device 100 may identify the at least one contextual event associated with the second application. For example, in the electronic device 100 as illustrated in FIG.4, the contextual category detector 130 can be configured to identify the at least one contextual event associated with the second application.
At operation 1006, the electronic device 100 may predict the at least one response for the at least one input topic from the first application based on the at least one contextual event. For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to predict the at least one response for the at least one input topic from the first application based on the at least one contextual event.
At operation 1008, the electronic device 100 may compute dynamic-interpolation-weights (λ1, λ2, λ3) of each LM database (i.e., the preload LM 502, the user LM 504, and the time bound LM 506). The dynamic-interpolation-weights can be used to prioritize words among the LM databases.
At operation 1010, the electronic device 100 may find the probabilities (P_PLM, P_ULM, P_TLM) of the "WORD" from each of the LM databases (i.e., the preload LM 502, the user LM 504, and the time bound LM 506, respectively).
At operation 1012, the electronic device 100 may calculate Pc (the combined probability) for each of the word(s) retrieved from each of the LM databases (i.e., LM models) and prioritize the predictions based on Pc (or based on parameters such as relevancy, sorting by recency, and so on). For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to calculate the Pc (combined probability) for each of the word(s) retrieved from each of the LM databases (i.e., LM models).
At operation 1014, the electronic device 100 may cause to display the at least one predicted response on the screen. For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to cause to display the at least one predicted response on the screen.
At operation 1016, the electronic device 100 may track the user activities. For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to track the user activities.
At operation 1018, the electronic device 100 may train the LM 142. For example, in the electronic device 100 as illustrated in FIG.4, the response predictor 140 can be configured to train the LM 142 based on the user activities and the LM entries retrieved from each of the LM databases.
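Operations 1008 to 1012 amount to weighting the per-database probabilities and combining them. Assuming the standard linear interpolation implied by the interpolation weights (the probability values below are illustrative), a minimal Python sketch is:

    def combined_probability(p_plm, p_ulm, p_tlm, lambdas):
        # Pc = lambda1 * P_PLM + lambda2 * P_ULM + lambda3 * P_TLM
        l1, l2, l3 = lambdas
        return l1 * p_plm + l2 * p_ulm + l3 * p_tlm

    # Without a time bound LM (Table 4): (lambda1, lambda2, lambda3) = (0.7, 0.3, 0.0)
    pc = combined_probability(p_plm=0.02, p_ulm=0.10, p_tlm=0.0, lambdas=(0.7, 0.3, 0.0))
    print(round(pc, 3))                                            # 0.044

    # Candidates are then prioritized by Pc (or by relevancy / recency).
    candidates = {"apologize": 0.044, "appear": 0.012}
    print(sorted(candidates, key=candidates.get, reverse=True))    # ['apologize', 'appear']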
FIG. 11 is a waveform for computing dynamic interpolation weights with time bound, according to an embodiment of the present disclosure.
Table 4 (shown below) tabulates the dynamic interpolation weights for each of the LM databases, with and without the time bound LM.
Table 4 Dynamic interpolation weights per LM database

LM database           | λ1            | λ2            | λ3 | Sum
Without Time Bound LM | 0.7           | 0.3           | 0  | 1
With Time Bound LM    | (1 - y) * 0.7 | (1 - y) * 0.3 | y  | 1
The electronic device 100 can be configured to estimate the interpolation weight (λ3) for the time bound LM using equation (7).
Equation (7) is reproduced in the source only as image PCTKR2017005812-appb-M000007; it defines the interpolation weight for the Time Bound LM in terms of:

m1 and m2 = rates of change of the interpolation weight with respect to time;
yTBmax = maximum interpolation weight for the Time Bound LM;
TTB = time limit for the Time Bound LM;
TO = a value that lies between 0 and TTB, i.e., TO ∈ [0, TTB].

Equation (8) is likewise reproduced only as image PCTKR2017005812-appb-M000008.
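Because equations (7) and (8) are available only as images, the following Python sketch is an assumption: a piecewise-linear ramp built from the quantities defined above (rates m1 and m2, cap yTBmax, time limit TTB, change point TO), combined with the "With Time Bound LM" weights of Table 4:

    def lambda3(t, m1, m2, y_tb_max, t_tb, t_o):
        if t >= t_tb:
            return 0.0                        # the time-bound LM has expired
        if t <= t_o:
            y = m1 * t                        # rising phase before TO
        else:
            y = y_tb_max - m2 * (t - t_o)     # falling phase after TO
        return max(0.0, min(y, y_tb_max))     # never exceeds yTBmax

    y = lambda3(t=7.0, m1=0.3, m2=0.1, y_tb_max=0.4, t_tb=10.0, t_o=5.0)
    lam1, lam2 = (1 - y) * 0.7, (1 - y) * 0.3     # Table 4, "With Time Bound LM"
    print(round(y, 2), round(lam1, 2), round(lam2, 2), round(y + lam1 + lam2, 2))
    # 0.2 0.56 0.24 1.0  (the three weights still sum to 1)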
FIGS. 12A and 12B illustrate a UI in which the contextual event from the received message is identified and extended from a first application to a second application, according to an embodiment of the present disclosure.
Referring to FIG. 12A, the electronic device 100 may have a message transcript 1200 showing the conversation between the user of the electronic device 100 and one or more participants, such as participant 1206. The message transcript 1200 may include a message 1202 received from the participant 1206 and message 1204 sent by the user of the electronic device 100.
The contextual category detector 130 illustrated in FIG. 4 can be configured to identify the contextual event (i.e., fixed time bound, semantic time bound, and contextual time bound) associated with the message 1202 and the message 1204. The message 1204 includes "Great!! Try to Meet Suzanne!" The LM entries under the fixed time bound are managed via a parabolic/linear function of time, e.g., their priority/frequency is reduced over time. The LM entries under the semantic time bound may not be useful after the trip, and thereby the LM 142 may delete the entry by understanding the message. The contextual time bound is more useful in communication-related applications and prioritizes entries based on the application context.
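For illustration, the three time-bound behaviours can be sketched as a single adjustment routine; the specific decay rates and policies below are assumptions, since the source describes them only qualitatively:

    import time

    def adjust_entry(entry, now, trip_over=False, app_context=None):
        age = now - entry["created"]
        if entry["bound"] == "fixed":
            # fixed time bound: reduce priority/frequency over time (linear decay assumed here)
            entry["priority"] = max(0.0, entry["priority"] - 0.01 * age)
        elif entry["bound"] == "semantic" and trip_over:
            return None                           # e.g. no longer useful once the trip is over
        elif entry["bound"] == "contextual":
            # contextual time bound: boost when the current application context matches
            entry["priority"] *= 2.0 if app_context == entry.get("context") else 0.5
        return entry

    entry = {"token": "Suzanne", "bound": "contextual", "context": "calendar",
             "priority": 1.0, "created": time.time()}
    print(adjust_entry(entry, time.time(), app_context="calendar")["priority"])   # 2.0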
Referring to FIG. 12B, the user of the electronic device 100 may launch the calendar application for setting a reminder. When the user of the electronic device 100 composes, using the keypad, a text 1208 "meet" in an input tab of the calendar application 1210, then the next response 1212 "Suzanne" can be automatically predicted and displayed on the screen (e.g., in the text prediction tab of the keypad, a default area defined by the OEM, a default area defined by the user, etc.) of the electronic device 100.
Accordingly, the proposed method of the present disclosure can be used to provide the meaningful predictions. The proposed method can be used to extend the contextual event of the messaging application to another application and provide predictions in that application.
FIGS. 13A and 13B illustrate another UI in which the contextual event from the received message is identified and extended from a first application to a second application, according to an embodiment of the present disclosure.
Referring to FIG. 13A, the electronic device 100 may have a message transcript 1300 showing the message received from one or more participants. The message transcript 1300 may include the message 1302 received from the participant.
The contextual category detector 130 can be configured to identify the contextual event (i.e., fixed time bound, semantic time bound, and contextual time bound) associated with the message 1302. The message 1302 includes "Buy Tropicana orange, cut mango and milk when you come home".
Referring to FIG. 13B, the user of the electronic device 100 may launch (access/open) a shopping application (i.e., an application related to the contextual event). When the user of the electronic device 100 composes, using the keypad, at least one text 1304 ("Tropicana") from the message 1302 in the input tab of the shopping application, then the next word(s) 1306 "Orange", "cut mango", "Milk", or the like can be automatically predicted and displayed on the screen (e.g., in the text prediction tab of the keypad, a default area defined by the OEM, a default area defined by the user, etc.) of the electronic device 100.
FIG. 14A illustrates an exemplary UI in which a contextually related application based on the received message is predicted and displayed on the screen of the electronic device, according to an embodiment of the present disclosure.
Referring to FIG. 14A, the user of the electronic device 100 may receive a message 1402a from one or more participants. The contextual category detector 130 can be configured to detect the at least one contextual event (i.e., contextual time bound event) associated with the message 1402a. Thus, based on the contextual time bound event, the response predictor 140 can be configured to predict and display the at least one contextually related application.
As illustrated in FIG. 14A, based on the contextual time bound event, a related application, i.e., a graphical icon 1404a of the calendar application, can be predicted and displayed on the screen of the electronic device 100.
FIG. 14B illustrates a UI in which the predicted response for the message is displayed within the notification area of the electronic device, according to an embodiment of the present disclosure.
Referring to FIG. 14B, the electronic device 100 may receive a message 1402b from one or more participants. The at least one predicted response 1404b for the message 1402b is automatically predicted and displayed within the notification area of the electronic device 100.
Unlike conventional methods and systems, the proposed method can be used to provide response predictions for the received message(s) without launching the messaging application.
FIG.15 illustrates a UI in which multiple response messages are predicted based on contextual grouping of the related messages, according to an embodiment of the present disclosure.
Referring to FIG. 15, the user of the electronic device 100 may receive messages 1502 and 1504 from a participant 1506, and a message 1508 from a participant 1510. The contextual category detector 130 can be configured to identify one or more contextual categories of the messages 1502 (i.e., "You had an exam yesterday") and 1504 ("how was it"). Further, based on the one or more contextual categories (i.e., both the messages 1502 and 1504 are received from the same participant 1506, the content available in both the messages 1502 and 1504 is contextually related, and the like), the response predictor 140 can be configured to predict one or more response messages and group 1512 the one or more predicted responses. Similarly, based on the one or more contextual categories (i.e., of the message 1508, the content available in the message 1508, or the like), the response predictor 140 can be configured to predict one or more response messages and group 1514 the one or more predicted responses.
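For illustration, the contextual grouping can be sketched in Python as follows; messages are bundled here only by sender, whereas the embodiment also considers whether the message contents are contextually related, and the third message text is illustrative:

    from collections import defaultdict

    def group_and_predict(messages, predict):
        groups = defaultdict(list)
        for sender, text in messages:
            groups[sender].append(text)             # bundle contextually related messages
        # One prediction group per bundle of related messages.
        return {sender: predict(" ".join(texts)) for sender, texts in groups.items()}

    replies = group_and_predict(
        [("participant_1506", "You had an exam yesterday"),
         ("participant_1506", "how was it"),
         ("participant_1510", "Are we meeting today?")],
        predict=lambda ctx: ["It was good", "Exam was tough"] if "exam" in ctx else ["Yes", "Not today"])
    print(replies)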
Unlike conventional methods and systems, the proposed method can be used to provide the response predictions by considering individual or group conversations, such that one or more queries from one or more participants, one or more queries from the user of the electronic device 100, and the like, are addressed.
FIGS. 16A and 16B illustrate a longer-pattern scenario in which the meaningful response (next suggested word) is predicted for a longer-pattern sentence, according to an embodiment of the present disclosure.
Referring to FIG. 16A, the electronic device 100 detects the input topic, i.e., a composed text of a longer sentence pattern, i.e., "the sky above our head is...". Unlike conventional methods and systems, the proposed contextual category detector 130 can be configured to analyze the received input topic and identify the at least one contextual category of the received input topic. Thus, based on the at least one content "The Sky" available in the input topic, the response predictor 140 can be configured to predict and display the response (next word) "Blue".
Thus, the LM 142 utilizes the contextual input class (to cover longer patterns) along with the N-gram (trigram) language model. Unlike conventional methods and systems, the proposed method can provide the response predictions by considering only selective inputs ("the sky") and not the whole longer pattern.
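For illustration, prediction from selective inputs can be sketched in Python as follows; the stop-word list and the association table are assumptions standing in for the learned contextual input class:

    ASSOCIATIONS = {("sky",): "blue", ("party",): "Friday night"}
    STOPWORDS = {"the", "above", "our", "head", "is", "a", "an", "of"}

    def predict_from_selective_input(sentence):
        # Keep only the salient tokens of the long sentence.
        salient = tuple(w for w in sentence.lower().rstrip(".!? ").split()
                        if w not in STOPWORDS)
        # Fall back through suffixes of the salient tokens until a match is found.
        for i in range(len(salient)):
            key = salient[i:]
            if key in ASSOCIATIONS:
                return ASSOCIATIONS[key]
        return None

    print(predict_from_selective_input("the sky above our head is"))   # blue
    print(predict_from_selective_input("so about that party"))         # Friday night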
Similarly, referring to FIG. 16B, the selective input ("party") is considered, and accordingly the response "Friday night" is predicted and displayed on the screen of the electronic device 100.
FIG. 17 is a flow diagram illustrating a method for predicting the response by understanding input views rendered on the screen of the electronic device, according to an embodiment of the present disclosure.
At operation 1702, the electronic device 100 may parse the information rendered on the screen (screen reading). For example, in the electronic device 100 as illustrated in FIG.4, the contextual category detector 130 can be configured to parse the information rendered on the screen (screen reading).
At operation 1704, the electronic device 100 may extract the text (i.e., hint, label, or the like) in response to parsing the screen. For example, in the electronic device 100 as illustrated in FIG.4, the contextual category detector 130 can be configured to extract the text in response to parsing the screen.
At operation 1706, the electronic device 100 may map the extracted text with the input views. For example, in the electronic device 100 as illustrated in FIG.4, the contextual category detector 130 can be configured to map the extracted text with the input views.
At operation 1708, the electronic device 100 may perform semantic-based modelling. Further, at operation 1710, the electronic device 100 prioritizes the predictions.
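For illustration, operations 1702 to 1710 can be sketched in Python as follows; the hint-to-category map and the per-category predictions are assumptions standing in for the contextual LM database (the "steph"/"curry" predictions follow the example discussed with reference to FIGS. 18A to 18C below):

    # Parsed screen hints/labels mapped to contextual categories (illustrative map).
    HINT_CATEGORIES = {"your name": "subject", "your phone number": "contacts",
                       "your email address": "email", "password": "password"}
    # Per-category response predictions (illustrative values).
    CATEGORY_PREDICTIONS = {"subject": ["steph", "curry"],
                            "contacts": ["+1 555 0100"],
                            "email": ["steph@example.com"]}

    def predictions_for_view(hint_text):
        category = HINT_CATEGORIES.get(hint_text.strip().lower())   # map hint to category
        return CATEGORY_PREDICTIONS.get(category, [])               # prioritized suggestions

    print(predictions_for_view("Your name"))   # ['steph', 'curry']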
FIGS. 18A to 18C illustrate a UI displaying at least one predicted response obtained by understanding input views rendered on the screen of the electronic device, according to an embodiment of the present disclosure.
The electronic device 100 can be configured to parse the information rendered on the screen, i.e., identifying the text rendered on the screen. The texts (i.e., input views, hints, labels, or the like) on the screen, such as "your name", "Your email address", "password", "enter password", "enter email", or the like, are parsed and provided to the contextual category detector 130 as illustrated in FIG. 4. The contextual category detector 130 can be configured to identify the contextual category of the parsed text, i.e., "Your name" is of category "subject", "your phone number" is of category "contacts", etc., identified from the contextual LM database. Further, the response predictor 140 illustrated in FIG. 4 can be configured to display the response predictions based on the input views/input text fields in accordance with the at least one category determined. The response predictions for the input text field "Your name" can be "steph", "curry", or the like.
FIGS. 19A to 19C illustrate a UI displaying multiple predicted responses based on at least one event associated with at least one participant, according to an embodiment of the present disclosure.
For example, consider a scenario in which the user of the electronic device 100 may receive at least one message 1902 from the at least one participant 1904. Unlike conventional methods and systems, the proposed contextual category detector 130 can be used to identify at least one event (e.g., a birthday event, an anniversary event, etc.) associated with the participant 1904. The at least one event can be automatically retrieved from the at least one application (e.g., a calendar application, an SNS application, etc.) associated with the electronic device 100.
Hence, based on the at least one event, the response predictor 140 can be configured to predict, prioritize, and display multiple responses. For example, if the content of the message 1902 includes "shall we go for movie?", then the response predictor 140 can be configured to provide the response predictions 1906, i.e., "Sure, we should definitely go", "movie will be good", and "Sure". Further, the response predictions 1906 can include the response predicted based on the event detection, i.e., "Happy birthday buddy".
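For illustration, the event-based augmentation can be sketched in Python as follows; the calendar lookup is a stubbed dictionary, and the participant key and date are illustrative, whereas the embodiment retrieves the event from the calendar or SNS application:

    # Stubbed event store: participant -> date -> event type (illustrative values).
    EVENTS = {"participant_1904": {"2017-06-02": "birthday"}}

    def augment_with_events(predictions, participant, today):
        event = EVENTS.get(participant, {}).get(today)
        if event == "birthday":
            # The event-driven prediction is prepended so it is prioritized.
            return ["Happy birthday buddy"] + predictions
        return predictions

    print(augment_with_events(
        ["Sure, we should definitely go", "movie will be good", "Sure"],
        "participant_1904", "2017-06-02"))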
FIGS. 20A to 20C illustrate a UI displaying a predicted response based on the context associated with the user and the electronic device, according to an embodiment of the present disclosure.
For example, consider a scenario in which the user of the electronic device 100 may receive at least one message 2002 from the at least one participant 2004. Unlike conventional methods and systems, the proposed contextual category detector 130 can be used to identify the context (i.e., location, weather condition, etc.) of the electronic device 100 and a user context (i.e., appointment, user tone, reminder, etc.).
Hence, based on the context of the electronic device 100, the response predictor 140 can be configured to predict, prioritize, and display multiple responses. For example, if the content of the message 2002 includes "How about trip to Goa this December?", then the response predictor 140 can be configured to provide the response predictions 2006, i.e., "Wow! Let's do it". Further, the response predictions 2006 can include the response predicted based on the context (a weather forecast provided by a weather application, or a weather forecast provided by any other means) of the electronic device 100, i.e., "it will be completely raining."
Further, if the content of the message 2008 includes "party is at 3. When will you reach here?", then based on the context (e.g., location, time, or the like) of the electronic device 100, the response predictor 140 can be configured to predict, prioritize, and display multiple responses 2010, i.e., "will reach in one hour", "In another", "2:45 PM", or the like.
Furthermore, if the content of the message 2012 includes "I get frequent headache now a days" and the content of the message 2014 includes "Will visit doctor tomo!", then based on the context (e.g., time) of the electronic device 100 and the context of the at least one participant/user (i.e., appointment, health, user tone, or the like), the response predictor 140 can be configured to predict, prioritize, and display multiple responses 2016, i.e., "How are you feeling now?", "Did you visit the doctor", or the like.
FIGS. 21A to 21D illustrate various tables tabulating the response predictions and next suggested words for different samples of inputs, according to an embodiment of the present disclosure.
The method (for example, the operations) of the electronic device 100 according to various embodiments may be performed by at least one computer (for example, the processor 160) which executes instructions included in at least one program from among programs which are maintained in a computer-readable storage medium.
When the instructions are executed by a computer (for example, the processor 160), the at least one computer may perform a function corresponding to the instructions. In this case, the computer-readable storage medium may be the memory, for example.
Certain aspects of the present disclosure can also be embodied as computer readable code on a non-transitory computer readable recording medium. A non-transitory computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of the non-transitory computer readable recording medium include a Read-Only Memory (ROM), a Random-Access Memory (RAM), Compact Disc-ROMs (CD-ROMs), magnetic tapes, floppy disks, and optical data storage devices. The non-transitory computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. In addition, functional programs, code, and code segments for accomplishing the present disclosure can be easily construed by programmers skilled in the art to which the present disclosure pertains.
At this point it should be noted that the various embodiments of the present disclosure as described above typically involve the processing of input data and the generation of output data to some extent. This input data processing and output data generation may be implemented in hardware or software in combination with hardware. For example, specific electronic components may be employed in a mobile device or similar or related circuitry for implementing the functions associated with the various embodiments of the present disclosure as described above. Alternatively, one or more processors operating in accordance with stored instructions may implement the functions associated with the various embodiments of the present disclosure as described above. If such is the case, it is within the scope of the present disclosure that such instructions may be stored on one or more non-transitory processor readable mediums. Examples of the processor readable mediums include a ROM, a RAM, CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The processor readable mediums can also be distributed over network coupled computer systems so that the instructions are stored and executed in a distributed fashion. In addition, functional computer programs, instructions, and instruction segments for accomplishing the present disclosure can be easily construed by programmers skilled in the art to which the present disclosure pertains.
The instructions may include machine language codes created by a compiler, and high-level language codes that can be executed by a computer by using an interpreter. The above-described hardware device may be configured to operate as one or more software modules to perform the operations according to various embodiments of the present disclosure, and vice versa.
While the present disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents.

Claims (15)

  1. An electronic device for automatically predicting a response, the electronic device comprising:
    a display; and
    a processor configured to:
    receive at least one message;
    identify at least one contextual category of the at least one message; and
    predict at least one response for the at least one message from a language model based on the at least one contextual category, and
    control the display to display the at least one predicted response.
  2. The electronic device of claim 1, wherein the contextual category of the at least one message is automatically identified based on at least one context indicative.
  3. The electronic device of claim 2, wherein the at least one context indicative is determined based on at least one of content available in the at least one message, user activities, events defined in the electronic device, a user associated with the at least one message, a user context of the electronic device, and a context of the electronic device.
  4. The electronic device of claim 3, wherein the context of the electronic device is determined based on sensing data sensed by at least one sensor of the electronic device.
  5. The electronic device of claim 1, wherein the at least one message comprises one of a topic selected from a written communication and a topic formed based on at least one input field available in an application.
  6. The electronic device of claim 5, wherein, when the received at least one message is an input topic from a first application, the processor is further configured to:
    identify at least one contextual event associated with a second application, and
    predict at least one response for the at least one input topic from the first application based on at least one contextual event.
  7. The electronic device of claim 6, wherein the at least one contextual event is a fixed time bound, a semantic time bound, and a contextual time bound.
  8. The electronic device of claim 6, wherein the at least one contextual event associated with the second application is determined based on at least one context indicative associated with the input topic of the first application, and
    wherein the at least one context indicative is determined based on at least one of content available in the input topic, context of the first application, user activities, and events defined.
  9. The electronic device of claim 1, wherein the at least one message and the at least one predicted response are displayed within the notification area.
  10. The electronic device of claim 1, wherein the at least one response for the at least one message is predicted in response to an input on the at least one received message.
  11. A method for automatically predicting a response, the method comprising:
    receiving, by an information manager, at least one message at an electronic device;
    identifying, by a contextual category detector, at least one contextual category of the at least one message;
    predicting, by a response predictor, at least one response for the at least one message from a language model based on the at least one contextual category; and
    causing, by the response predictor, to display the at least one predicted response on a screen of an electronic device.
  12. The method of claim 11, wherein the contextual category of the at least one message is automatically identified based on at least one context indicative.
  13. The method of claim 12, wherein the at least one context indicative is determined based on at least one of content available in the at least one message, user activities, events defined in the electronic device, a user associated with the at least one message, a user context of the electronic device, and a context of the electronic device.
  14. The method of claim 11, wherein the context of the electronic device is determined based on sensing data sensed by at least one sensor of the electronic device.
  15. The method of claim 11, wherein the at least one message comprises one of a topic selected from a written communication and a topic formed based on at least one input field available in an application.
PCT/KR2017/005812 2016-06-02 2017-06-02 Method and electronic device for predicting response WO2017209571A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP17807064.5A EP3403201A4 (en) 2016-06-02 2017-06-02 Method and electronic device for predicting response

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201641019244 2016-06-02
IN201641019244 2016-06-02

Publications (1)

Publication Number Publication Date
WO2017209571A1 true WO2017209571A1 (en) 2017-12-07

Family

ID=60481567

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/005812 WO2017209571A1 (en) 2016-06-02 2017-06-02 Method and electronic device for predicting response

Country Status (3)

Country Link
US (1) US10831283B2 (en)
EP (1) EP3403201A4 (en)
WO (1) WO2017209571A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106415527B * 2016-08-31 2019-07-30 Beijing Xiaomi Mobile Software Co., Ltd. Information communication method and device
EP3482363A1 (en) * 2016-09-20 2019-05-15 Google LLC System and method for transmitting a response in a messaging application
CN111819530B * 2018-03-09 2024-08-06 Samsung Electronics Co., Ltd. Electronic device and on-device method for enhancing user experience in electronic device
US11226832B2 (en) * 2018-11-09 2022-01-18 International Business Machines Corporation Dynamic generation of user interfaces based on dialogue
US11238226B2 (en) * 2018-11-15 2022-02-01 Nuance Communications, Inc. System and method for accelerating user agent chats
KR102527892B1 (en) * 2018-11-26 2023-05-02 삼성전자주식회사 Electronic device for providing predictive word and operating method thereof
US20220291789A1 (en) * 2019-07-11 2022-09-15 Google Llc System and Method for Providing an Artificial Intelligence Control Surface for a User of a Computing Device
US10635754B1 (en) * 2019-08-02 2020-04-28 Capital One Services, Llc Systems and methods for improved conversation translation

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0543329B1 (en) * 1991-11-18 2002-02-06 Kabushiki Kaisha Toshiba Speech dialogue system for facilitating human-computer interaction
US5828999A (en) 1996-05-06 1998-10-27 Apple Computer, Inc. Method and system for deriving a large-span semantic language model for large-vocabulary recognition systems
US7590603B2 (en) * 2004-10-01 2009-09-15 Microsoft Corporation Method and system for classifying and identifying messages as question or not a question within a discussion thread
EP1615124A1 (en) * 2004-07-07 2006-01-11 Alcatel Alsthom Compagnie Generale D'electricite A method for handling a multi-modal dialog
US8990126B1 (en) * 2006-08-03 2015-03-24 At&T Intellectual Property Ii, L.P. Copying human interactions through learning and discovery
US8078978B2 (en) 2007-10-19 2011-12-13 Google Inc. Method and system for predicting text
US8701046B2 (en) * 2008-06-27 2014-04-15 Microsoft Corporation Aggregate and hierarchical display of grouped items spanning multiple storage locations
US8374881B2 (en) * 2008-11-26 2013-02-12 At&T Intellectual Property I, L.P. System and method for enriching spoken language translation with dialog acts
US9129601B2 (en) * 2008-11-26 2015-09-08 At&T Intellectual Property I, L.P. System and method for dialog modeling
WO2012024585A1 (en) * 2010-08-19 2012-02-23 Othar Hansson Predictive query completion and predictive search results
KR20130057146A * 2011-11-23 2013-05-31 Electronics and Telecommunications Research Institute Smart contents creating method and system based on user's contents
US9306878B2 (en) * 2012-02-14 2016-04-05 Salesforce.Com, Inc. Intelligent automated messaging for computer-implemented devices
KR20140004515A (en) * 2012-07-03 2014-01-13 삼성전자주식회사 Display apparatus, interactive server and method for providing response information
US9330422B2 (en) * 2013-03-15 2016-05-03 Xerox Corporation Conversation analysis of asynchronous decentralized media
RU2637874C2 * 2013-06-27 2017-12-07 Google Inc. Generation of interactive recommendations for chat information systems
US20150100537A1 (en) * 2013-10-03 2015-04-09 Microsoft Corporation Emoji for Text Predictions
US9232063B2 (en) * 2013-10-31 2016-01-05 Verint Systems Inc. Call flow and discourse analysis
US10038786B2 (en) * 2014-03-05 2018-07-31 [24]7.ai, Inc. Method and apparatus for improving goal-directed textual conversations between agents and customers
US20150271128A1 (en) * 2014-03-21 2015-09-24 Keith M. Mantey Novel email message system and method
US9213941B2 (en) * 2014-04-22 2015-12-15 Google Inc. Automatic actions based on contextual replies
RU2608880C2 * 2014-05-22 2017-01-25 Yandex LLC Electronic device and method of electronic message processing
US20150350118A1 (en) * 2014-05-30 2015-12-03 Apple Inc. Canned answers in messages
US9785891B2 (en) * 2014-12-09 2017-10-10 Conduent Business Services, Llc Multi-task conditional random field models for sequence labeling
US10529030B2 (en) * 2015-01-09 2020-01-07 Conduent Business Services, Llc System and method for labeling messages from customer-agent interactions on social media to identify an issue and a response
KR101583181B1 * 2015-01-19 2016-01-06 NCSOFT Corporation Method and computer program of recommending responsive sticker
US20160224524A1 (en) * 2015-02-03 2016-08-04 Nuance Communications, Inc. User generated short phrases for auto-filling, automatically collected during normal text use
US9883358B2 (en) * 2015-05-08 2018-01-30 Blackberry Limited Electronic device and method of determining suggested responses to text-based communications
US10091140B2 (en) * 2015-05-31 2018-10-02 Microsoft Technology Licensing, Llc Context-sensitive generation of conversational responses
US9886958B2 (en) * 2015-12-11 2018-02-06 Microsoft Technology Licensing, Llc Language and domain independent model based approach for on-screen item selection
US10120864B2 (en) * 2016-03-29 2018-11-06 Conduent Business Services Llc Method and system for identifying user issues in forum posts based on discourse analysis
US20170308290A1 (en) * 2016-04-20 2017-10-26 Google Inc. Iconographic suggestions within a keyboard
US10305828B2 (en) * 2016-04-20 2019-05-28 Google Llc Search query predictions by a keyboard

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020023144A1 (en) * 2000-06-06 2002-02-21 Linyard Ronald A. Method and system for providing electronic user assistance
US20050228790A1 (en) * 2004-04-12 2005-10-13 Christopher Ronnewinkel Coherent categorization scheme
US20130035932A1 (en) * 2007-09-18 2013-02-07 At&T Intellectual Property L, L.P. System and method of generating responses to text-based messages
US20150317069A1 (en) 2009-03-30 2015-11-05 Touchtype Limited System and method for inputting text into electronic devices
US20150293602A1 (en) 2010-03-12 2015-10-15 Nuance Communications, Inc. Multimodal text input system, such as for use with touch screens on mobile phones
KR20120005638A * 2010-07-09 2012-01-17 Seoju Mobile Co., Ltd. Mobile device and method of providing messenger application service by the mobile device
WO2015165003A1 (en) 2014-04-28 2015-11-05 Google Inc. Context specific language model for input method editor
US20150370780A1 (en) 2014-05-30 2015-12-24 Apple Inc. Predictive conversion of language input
US20150347919A1 (en) * 2014-06-03 2015-12-03 International Business Machines Corporation Conversation branching for more efficient resolution

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ANIND K. DEY ET AL.: "CyberDesk", INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES, 6 January 1998 (1998-01-06)
ANNUAL INTERNATIONAL CONFERENCE ON USER INTERFACES, 1 January 1998 (1998-01-01), pages 47 - 54
GREG CORRADO: "Research Blog: Computer, respond to this email", GOOGLE RESEARCH BLOG, 3 November 2015 (2015-11-03), XP055439462, Retrieved from the Internet <URL:https://research.googleblog.com/2015/11/computer-respond-to-this-email.html>
See also references of EP3403201A4

Also Published As

Publication number Publication date
US20170351342A1 (en) 2017-12-07
EP3403201A1 (en) 2018-11-21
US10831283B2 (en) 2020-11-10
EP3403201A4 (en) 2019-01-09

Legal Events

Date  Code  Title  Description
WWE   Wipo information: entry into national phase (Ref document number: 2017807064; Country of ref document: EP)
ENP   Entry into the national phase (Ref document number: 2017807064; Country of ref document: EP; Effective date: 20180815)
NENP  Non-entry into the national phase (Ref country code: DE)