CN110996271A - Message service providing device and server - Google Patents


Info

Publication number
CN110996271A
CN110996271A (application CN201911163405.6A)
Authority
CN
China
Prior art keywords
electronic device
user
message
server
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911163405.6A
Other languages
Chinese (zh)
Inventor
赵相旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of CN110996271A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/12 Messaging; Mailboxes; Announcements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/023 Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W88/00 Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W88/02 Terminal devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W88/00 Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W88/18 Service support devices; Network management devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Economics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Information Transfer Between Computers (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

An electronic device for providing a message service includes: a communication unit configured to exchange messages with another device; a display configured to display, together on a message service screen, a message received from the other device, a message input to the electronic device, and at least one response message generated by the electronic device; and a processor configured to determine whether the received message includes a query and, in response to determining that the received message includes the query, provide at least one piece of recommended content based on the query and transmit at least one response message including the at least one piece of recommended content to the other device.

Description

Message service providing device and server
This application is a divisional application of the invention patent application with international filing date July 30, 2015, Chinese application number 201580000958.3, entitled "Message service providing apparatus and method for providing content via the same".
Technical Field
Apparatuses and methods consistent with exemplary embodiments relate to providing content to a user during a conversation by using an electronic device.
Background
As the penetration rate of portable terminals has rapidly increased in recent years, portable terminals have become daily necessities for many people. A portable terminal provides various functions, including not only a voice call service as its main function but also various kinds of additional services such as data transfer.
In particular, users now actively communicate with one another by using smart terminals. Following this trend, various technologies configured to provide a more convenient environment for such communication have been developed.
Disclosure of Invention
Technical problem
One or more exemplary embodiments provide a method of providing a convenient communication environment by using various electronic devices.
Drawings
The above and/or other aspects will become more apparent by describing certain exemplary embodiments with reference to the attached drawings, wherein:
fig. 1 is a diagram for describing a content providing system according to an exemplary embodiment;
FIG. 2 is a diagram of a User Interface (UI) of an electronic device according to an exemplary embodiment;
FIG. 3 is a flowchart for describing a method of providing content via an electronic device according to an exemplary embodiment;
fig. 4 to 7 are diagrams for describing a method of providing contents according to an exemplary embodiment;
fig. 8 is a diagram for describing various electronic devices and various chatting methods to which a method of providing contents according to an exemplary embodiment is applied;
FIG. 9a is a diagram for describing a knowledge framework included in an electronic device that is configured to provide content related to a response;
fig. 9b is a diagram for describing an operation of each component of the electronic device according to the exemplary embodiment;
fig. 10a to 10d are diagrams of environment setting user interfaces provided by an electronic device according to exemplary embodiments;
fig. 11a to 11d are diagrams for describing a method of providing contents related to a response via an electronic device;
FIG. 12a is a flow chart for describing a method of providing content related to a response via an electronic device;
fig. 12b is a diagram of modules included in each of the electronic device and the server;
FIG. 13a is a flow chart describing a method of providing content related to a response via an electronic device;
fig. 13b is a diagram of modules included in each of the electronic device and the server;
FIG. 14a is a flow chart for describing a method of providing content related to a response via an electronic device;
fig. 14b is a diagram of modules included in each of the electronic device and the server;
fig. 15 is a diagram for describing a method of utilizing contents stored in a database of a server for understanding a relationship between users;
fig. 16 is a diagram for describing a method of utilizing contents stored in a database of an electronic device for understanding a relationship between users;
fig. 17 is a diagram for describing a method of utilizing contents stored in a database of an electronic device and contents stored in a database of a server in order to understand a relationship between a user of the electronic device and a user of a device of another user;
fig. 18, 19a to 19d, 20a to 20d, 21, 22a, 22b, 23a to 23c, and 24a to 24e are diagrams for describing a method for providing contents according to an exemplary embodiment;
fig. 25a to 25e are diagrams of a content providing screen when providing content that can be used for a response according to an exemplary embodiment;
FIG. 26 is a diagram of an electronic device providing an interface via which to change a content provision layout;
fig. 27 to 37 are diagrams for describing a method of providing contents according to an exemplary embodiment;
fig. 38 to 42, 43a, 43b, 44, 45, 46a, 46b, 47 and 48 are diagrams for describing a method of providing contents according to an exemplary embodiment;
FIG. 49 is a block diagram of components of a user's terminal, which may be the electronic device of FIG. 1, according to an exemplary embodiment;
fig. 50, 51, 52, and 53 are diagrams for describing an overall method of providing contents according to an exemplary embodiment;
fig. 54a to 54d are diagrams for describing a method of providing contents according to an exemplary embodiment;
fig. 55a to 55d are diagrams for describing a method of providing contents according to an exemplary embodiment;
FIG. 56 is a block diagram of software components of a user's terminal according to an exemplary embodiment;
fig. 57 is a diagram of a User Interface (UI) of an electronic device according to an exemplary embodiment; and
fig. 58 is a diagram of a User Interface (UI) of an electronic device according to an exemplary embodiment.
Detailed Description
Best mode for carrying out the invention
According to an aspect of exemplary embodiments, there is provided an electronic device for providing a message service, the electronic device including: a communication unit configured to exchange messages with another device; a display configured to display, together on a message service screen, a message received from the other device, a message input to the electronic device, and at least one response message generated by the electronic device; and a processor configured to determine whether the received message includes a query and, in response to determining that the received message includes the query, provide at least one piece of recommended content based on the query and transmit the at least one response message including the at least one piece of recommended content to the other device.
The processor may be further configured to determine whether the electronic apparatus stores data to be used for generating the at least one piece of recommended content, and obtain the data based on a result of the determination.
The processor may be further configured to determine a keyword associated with the at least one piece of recommended content, and obtain content corresponding to the keyword.
The processor may be further configured to obtain the at least one piece of recommended content based on relationship data between the user of the other apparatus and the user of the electronic apparatus.
The relationship data may include at least one of data stored in the electronic device, data stored in a server in communication with the electronic device, and data stored in the other device.
The display may be further configured to: display the at least one piece of recommended content based on a user input requesting the at least one piece of recommended content, in response to a setting menu being set to manually recommend content; display the at least one piece of recommended content based on a user input indicated in the received message, in response to the setting menu being set to semi-automatically recommend content; and display the at least one piece of recommended content without user input when the electronic device obtains the at least one piece of content by recognizing the received message, in response to the setting menu being set to automatically recommend content.
The processor may be further configured to obtain the at least one piece of recommended content based on the types of words included in the message, the relationships between the words, and the meanings of the words.
The processor may be further configured to obtain the at least one piece of recommended content based on a relationship between a user of the electronic device and a user of the other device, the relationship being set based on user input.
The at least one piece of recommended content may be obtained according to an application installed in the electronic device.
The processor may be further configured to extract at least one keyword from the received message to determine whether the received message includes a query.
According to an aspect of another exemplary embodiment, there is provided a method of providing content to another device via an electronic device providing a message service, the method including: receiving a message from the other apparatus; displaying the received message on a message service screen; determining whether the received message includes a query; providing at least one piece of recommended content based on the received message; and transmitting a response message including the at least one piece of recommended content to the other apparatus.
The step of providing at least one piece of recommended content may include: determining whether the electronic device stores data to be used for generating the at least one piece of recommended content, and obtaining the data based on a result of the determination.
The step of providing at least one piece of recommended content may include: determining a keyword associated with the at least one piece of recommended content, and obtaining content corresponding to the keyword.
The step of providing at least one piece of recommended content may include: obtaining the at least one piece of recommended content based on relationship data between the user of the electronic device and the user of the other device.
The relationship data may include at least one of data stored in the electronic device, data stored in a server in communication with the electronic device, and data stored in the other device.
The step of providing at least one piece of recommended content may include: displaying the at least one piece of recommended content based on a user input requesting the at least one piece of recommended content in response to a setting menu set to manually recommend the at least one piece of recommended content; displaying the at least one piece of recommended content based on a user input indicated in the received message in response to a setting menu set to semi-automatically recommend the at least one piece of recommended content; and displaying the at least one piece of recommended content without user input when the electronic device obtains the at least one piece of content by recognizing the received message, in response to a setting menu set to automatically recommend the at least one piece of recommended content.
The step of providing at least one piece of recommended content may include: obtaining the at least one piece of recommended content based on the types of words included in the message, the relationships between the words, and the meanings of the words.
The step of providing at least one piece of recommended content may include: the at least one piece of recommended content is obtained based on a relationship between the user of the electronic device and the user of the other device, the relationship being set based on user input.
The at least one piece of recommended content may be obtained according to an application installed in the electronic device.
The step of providing at least one piece of recommended content includes: at least one keyword is extracted from the received message to determine whether the received message includes a query.
According to an aspect of another exemplary embodiment, there is provided a method of providing a social networking service by a server, the method including: displaying a message posted by a first device through an application or a website; identifying a user input from a second device that activates an input area displayed on the application or website; determining whether the posted message includes a query; in response to the posted message including the query, generating a recommendation response based on information about a relationship between a user of the first device and a user of the second device; and providing the recommendation response to the second device.
The step of determining whether the posted message includes a query may include: determining whether a sentence of the message starts with an interrogative adverb, determining whether the subject and verb of the sentence are inverted, and determining whether the sentence includes a question mark.
The recommendation response may include a plurality of pieces of content belonging to at least two different categories, and the at least two different categories may include applications and photos.
The method may further include determining user preferences for at least two different categories based on a number of times each category is selected by the second device, and displaying the plurality of content in an order of the user preferences.
Modes for carrying out the invention
This application claims priority from Korean Patent Application No. 10-2015-0026750, filed on February 25, 2015, and Korean Patent Application No. 10-2014-0098634, filed on July 31, 2014, with the Korean Intellectual Property Office, the disclosures of which are incorporated herein by reference in their entireties.
Exemplary embodiments are described in more detail below with reference to the accompanying drawings.
In the following description, the same reference numerals are used for the same elements even in different drawings. The matters defined in the description such as a detailed construction and elements are provided to assist in a comprehensive understanding of the exemplary embodiments. It will be apparent, however, that the exemplary embodiments can be practiced without those specifically defined matters. Also, well-known functions or constructions are not described in detail since they would obscure the description with unnecessary detail.
When a part "comprises" or "includes" an element, the part may further include other elements rather than excluding them, unless specifically stated otherwise. Throughout this specification, it will be understood that when an element is referred to as being "connected" to another element, it can be "directly connected" to the other element or "electrically connected" to the other element with intervening elements therebetween. In addition, terms such as "...unit" and "...module" denote a unit that performs at least one function or operation, and such a unit may be implemented as hardware, software, or a combination of hardware and software.
Throughout this specification, terms such as "user" may refer to a user of an electronic device. Throughout this specification, terms such as "message service" may refer to a one-to-one, one-to-many, or many-to-many service via which users may easily exchange messages, such as in a conversation between users.
Throughout this specification, the term "application" means a collection of computer programs designed to run a specific task. The applications in this specification may vary. The applications may include game applications, video playback applications, mapping applications, presentation applications, calendar applications, phone book applications, broadcast applications, sports support applications, payment applications, photo folder applications, and the like. However, the application is not limited thereto. The application may also be referred to as App.
Expressions such as "at least one of" when preceding a list of elements modify the entire list of elements without modifying individual elements of the list.
In this specification, the term "message" may represent a set of text units or a set of sound units, including at least one selected from one or more words, one or more phrases, and one or more clauses, as part of a conversation exchanged between users.
In this specification, the term "keyword" may mean a word, phrase, or clause related to the meaning of a message, obtained by performing natural language analysis on the message. A keyword may be a word, phrase, or clause that is included in the message, or one that is not included in the message.
In this specification, the term "content" may refer to data, files, software, and information, including video, sound, and text, that convey information via an electronic device. For example, image content may represent image data communicated via the electronic device. The content may include, for example, two-dimensional images, three-dimensional images, two-dimensional videos, three-dimensional videos, text responses in various languages, and content related to various application services.
Fig. 1 is a diagram for describing a content providing system 10 according to an exemplary embodiment.
As illustrated in fig. 1, the content providing system 10 may include an electronic device 100, a device 200 of another user, and a server 300.
The electronic device 100 according to an exemplary embodiment may exchange a text message or a voice message with the device 200 of another user. Also, the electronic device 100 may exchange a text message or a voice message with the device 200 of another user via the server 300.
In addition, the electronic device 100 may request and obtain various types of data from the server 300 and transmit the various types of data to the server 300. For example, the data obtained by the electronic device 100 from the server 300 may be data exchanged between the device 200 of another user and the server 300.
Content providing system 10 may be implemented with more or fewer components than those illustrated. For example, according to another exemplary embodiment, the server 300 may not be included in the content providing system 10.
Hereinafter, the electronic device 100, the device 200 of another user, and the server 300 will be described in more detail.
The electronic device 100 according to an exemplary embodiment may exchange messages with the device 200 of another user.
In this specification, the electronic device 100 may be implemented as a smart phone, a tablet, a Personal Computer (PC), a wearable device, a Personal Digital Assistant (PDA), a laptop computer, a cellular phone, a mobile phone, an Enterprise Digital Assistant (EDA), a Portable Multimedia Player (PMP), a personal navigation device or Portable Navigation Device (PND), a handheld game console, a Mobile Internet Device (MID), or an electronic book (e-book).
The wearable device may include a head-mounted display (HMD) that may be worn on the head. For example, the HMD may include glasses, a helmet, a hat, and the like, but is not limited thereto. The wearable device may also be implemented as a ring, a necklace, a bracelet, a shoe, an earring, a headband, a garment, a glove, a thimble, or the like.
The device 200 of another user may be a device that receives a first message generated in the electronic device 100 and displays the first message on an output interface. Also, the device 200 of another user may generate a second message according to the user input and transmit the second message to the electronic device 100.
According to an exemplary embodiment, the device 200 of another user may receive a first message from the electronic device 100 via the server 300 and may transmit a second message to the electronic device 100 via the server 300. According to another exemplary embodiment, the device 200 of another user may directly receive the first message from the electronic device 100 without the server 300 and may directly transmit the second message to the electronic device 100 without the server 300. According to another exemplary embodiment, the device 200 of another user may be a single device or a plurality of devices.
The device 200 of another user according to an exemplary embodiment may be implemented in various types. For example, the device 200 of another user may be implemented as a smart phone, a tablet PC, a wearable device, a Personal Digital Assistant (PDA), a laptop computer, a cellular phone, a mobile phone, an Enterprise Digital Assistant (EDA), a Portable Multimedia Player (PMP), a personal navigation device or Portable Navigation Device (PND), a handheld game console, a Mobile Internet Device (MID), or an electronic book (e-book).
The server 300 may perform communication with the electronic device 100 or the device 200 of another user. For example, the server 300 may receive a first message generated in the electronic device 100 from the electronic device 100, and may receive a second message generated in the device 200 of another user from the device 200 of another user. Also, the server 300 may transmit a first message to the device 200 of another user and transmit a second message to the electronic device 100.
The server 300 may manage messages received from the electronic device 100 or the device 200 of another user. For example, the server 300 may store the exchanged messages in a message database (DB) for each device. Also, the server 300 may update the message DB, either periodically or whenever a new message is received from the electronic device 100 or the device 200 of another user.
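The per-device message store described above could be sketched roughly as follows. This is an illustration under stated assumptions, not the patented implementation: the names MessageDB, store, and history are hypothetical, and only the per-device storage and on-receipt update behavior is modeled.

```python
from collections import defaultdict
from datetime import datetime, timezone

class MessageDB:
    """Illustrative per-device message database. The server updates it
    whenever a new message arrives (a periodic refresh, also described
    in the text, is omitted for brevity)."""

    def __init__(self):
        # Maps a device identifier to the list of messages stored for it.
        self._messages = defaultdict(list)

    def store(self, device_id, text):
        """Record a newly received message for the given device."""
        self._messages[device_id].append(
            {"text": text, "received_at": datetime.now(timezone.utc)}
        )

    def history(self, device_id):
        """Return a copy of the stored messages for the given device."""
        return list(self._messages[device_id])

db = MessageDB()
db.store("device-200", "Where are you?")
print(len(db.history("device-200")))  # 1
```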
Fig. 2 is a diagram illustrating a User Interface (UI) of the electronic device 100 according to an exemplary embodiment.
Referring to fig. 2, the electronic device 100 according to an exemplary embodiment may extract a keyword from a message received from a device 200 of another user. Also, based on the keyword, the electronic device 100 may obtain at least one piece of content that may be used when responding to the message. Also, the electronic device 100 may display the obtained at least one piece of content.
The electronic device 100 may provide an interface such as a dialog window including the back button 20, the call button 30, the user name box 40, and the messages 50, 51, 52, and 53. The back button 20 and the call button 30 may be displayed on the user name box 40.
The user may touch or double-click the back button 20 to return to the previous menu. The user may touch or double-click the call button 30 to start a voice chat or a voice call. The user name box 40 may display the names or nicknames of conversation partners. Also, the user name box 40 may display the name or nickname of the user of the electronic device 100. The conversation window may display the messages 50, 51, 52, and 53 exchanged between the user of the electronic device 100 and the conversation partners.
The electronic device 100 may determine whether each message 50, 51, 52, or 53 is a question. When any of the messages 50, 51, 52, and 53 is a question, the electronic device 100 may extract keywords from that message. Based on the extracted keywords, the electronic device 100 may obtain content that may be used when responding to the message.
Through the pop-up window 60, the electronic device 100 may receive a user input regarding whether to display the content obtained for each keyword. When the user touches or clicks the first answer button 61, the electronic device 100 may display the obtained content. When the user touches or clicks the second answer button 62, the electronic device 100 may not display the obtained content.
When any of the messages 50, 51, 52, and 53 is a question, the electronic device 100 may obtain content usable in answering the question before activating the pop-up window 60. Alternatively, the electronic device 100 may obtain such content after activating the pop-up window 60, or only after the user touches the first answer button 61 in the activated pop-up window 60.
Fig. 3 is a flowchart for describing a method of providing content via the electronic device 100 according to an exemplary embodiment.
[1.1 message reception ]
Referring to fig. 3, the electronic device 100 may receive a message from a device 200 of another user in operation S110.
The electronic device 100 may receive messages 50, 51, 52, and 53 from another user's device 200. Alternatively, the electronic device 100 may receive the messages 50, 51, 52, and 53 from the device 200 of another user via the server 300.
The electronic device 100 may determine whether each of the messages 50, 51, 52, and 53 includes a question. The electronic device 100 may determine whether each of the messages 50, 51, 52, and 53 includes a question by using a semantic analysis method and a statistical analysis method, which will be described later.
[1.2 keyword extraction ]
The electronic device 100 may extract a keyword from a message received from the device 200 of another user in operation S130.
The electronic device 100 may extract keywords from the message by using a semantic analysis method and a statistical analysis method.
The electronic device 100 may extract the keyword by using semantic analysis. The electronic device 100 may determine whether a given sentence is asking for an answer or is providing information. The electronic device 100 may analyze a sentence to determine whether it is a question by determining whether the sentence starts with an interrogative adverb (e.g., who, what, when, where, why, which, how) or an auxiliary verb (e.g., is, are, can, could, did, does, do, have, has, may, might, shall, should, will, would), whether the subject and verb of the sentence are inverted, and whether the sentence includes a question mark.
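As a rough illustration, the surface cues listed above can be combined into a simple heuristic. The following sketch is an assumption for illustration only; the patent discloses no source code, and the function name looks_like_question and the word lists are hypothetical.

```python
import re

# Word lists taken from the cues described above (illustrative only).
INTERROGATIVE_ADVERBS = {"who", "what", "when", "where", "why", "which", "how"}
AUXILIARY_VERBS = {"is", "are", "can", "could", "did", "does", "do",
                   "have", "has", "may", "might", "shall", "should",
                   "will", "would"}

def looks_like_question(sentence: str) -> bool:
    """Return True if the sentence matches any of the question cues:
    a trailing question mark, or an interrogative adverb or auxiliary
    verb at the start (the latter also covers subject-verb inversion,
    as in "Are you home")."""
    text = sentence.strip().lower()
    if text.endswith("?"):
        return True
    words = re.findall(r"[a-z']+", text)
    if not words:
        return False
    return words[0] in INTERROGATIVE_ADVERBS or words[0] in AUXILIARY_VERBS

print(looks_like_question("Where are you?"))  # True
print(looks_like_question("I am at home"))    # False
```

A real system would supplement such surface cues with the semantic and statistical analysis the description mentions, since declarative sentences can also function as questions.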
For example, the electronic device 100 may receive a message 53 "Where are you?". The electronic device 100 may determine the meaning of the message 53 and extract "where" as a keyword. Also, the electronic device 100 may extract "where" as a keyword by further considering the content of the conversation between the users and data related to the users. The data related to the user may include at least one of content input by the user and recorded data about the user. The user-related data may represent content related to only one user or content related to two or more users.
In other words, the electronic device 100 may extract "where" as a keyword based on at least one selected from the meaning of the message, content input by the user, and recorded data about the user.
The electronic device 100 may perform natural language analysis on the meaning of the message to extract keywords. For example, the electronic device 100 may extract "where" as the keyword. The electronic device 100 may analyze and determine the meaning of the message included in the conversation between the users, and may predict the content of the response message based on the meaning of the message. For example, the electronic device 100 may analyze the meaning of the message as a question posed by the user of the device 200 of another user asking the location of the user of the electronic device 100. Based on this meaning, the electronic device 100 may predict that the user of the electronic device 100 may need to provide a response to the user of the device 200 of another user about his/her own location, and may extract "where" as a keyword.
Also, when the electronic device 100 extracts a keyword based on the meaning of the message, the electronic device 100 may consider the content input by the user. For example, the content input by the user may include a home address of the user, a company address of the user, a record of the user's movement path, and a place according to a schedule of the user. For example, when a user records a business trip plan via a scheduler application, and the content of a message exchanged with another user's device 200 relates to the business trip location, the electronic device 100 may extract "where" as a keyword.
Also, when the electronic device 100 extracts a keyword based on the meaning of the message, the electronic device 100 may consider the recorded contents about the user. For example, the recorded content about the user may represent a record about the user written in the electronic device 100 and the server 300. Also, the record about the user may include a record written by the user while the user uses the electronic device 100, and a record written in the server while the user uses an App service. Also, the record about the user may include a record written indirectly, rather than directly, by the user while the user uses the electronic device 100 or the server 300.
For example, the record about the user may include content of a call of the user, content of a payment of the user via a credit card, and content written by the user via a Social Network Service (SNS).
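The keyword refinement described above, combining the message meaning with user-related data, can be sketched as follows. The rules and field names (such as `trip_location`) are hypothetical:

```python
# Illustrative sketch: pick the interrogative word as the base keyword,
# then refine it with user-related data when available (e.g. a scheduled
# business-trip location for a "where" question). Field names are
# assumptions, not the patented implementation.
INTERROGATIVES = ("where", "when", "who", "what", "why", "which", "how")

def extract_keyword(message, user_data=None):
    words = [w.strip("?,.!").lower() for w in message.split()]
    keyword = next((w for w in words if w in INTERROGATIVES), None)
    if keyword == "where" and user_data and "trip_location" in user_data:
        return user_data["trip_location"]   # refine with schedule data
    return keyword

print(extract_keyword("Where are you?"))                              # where
print(extract_keyword("Where are you?", {"trip_location": "Seoul"}))  # Seoul
```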
The electronic device 100 may extract the keyword by calculating the meaning of the message through statistical analysis.
For example, the electronic device 100 may determine a priority order among predicted conditions of the user. For example, the electronic device 100 may determine that one of the predicted conditions is more likely to occur than the others. For example, the electronic device 100 may determine that one of one or more keywords is more likely to occur than the other keywords.
The electronic device 100 may extract "where" as a keyword through statistical analysis and semantic analysis, according to the above-described methods.
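The statistical ranking of candidate keywords can be sketched as follows; ranking by historical frequency is an illustrative assumption about how "more likely to occur" might be estimated:

```python
from collections import Counter

# Hypothetical statistical sketch: rank candidate keywords by how often
# they occurred in past conversations (the history here is made up).
history = ["where", "where", "when", "where", "who", "when"]

def rank_keywords(candidates, history):
    freq = Counter(history)
    # more frequent keywords are judged more likely to occur
    return sorted(candidates, key=lambda k: freq[k], reverse=True)

print(rank_keywords(["who", "when", "where"], history))
# ['where', 'when', 'who']
```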
[1.3 Contents acquisition ]
The electronic device 100 may obtain content usable when responding to the message based on the extracted keyword in operation S170.
The electronic device 100 may obtain content by performing various searches based on the extracted keywords. The content obtained by the electronic device 100 may include two-dimensional images, three-dimensional images, two-dimensional videos, three-dimensional videos, text replies in various languages, contents of various fields, and contents related to applications providing various services.
The electronic device 100 according to an exemplary embodiment may obtain content related to a keyword from an external search server.
For example, the electronic device 100 may obtain content related to an application service related to a keyword.
For example, when the received message is a question type message, the electronic device 100 may analyze the message and obtain content related to a weather application service.
For example, when the received message is not a question type but is recognized, as a result of the semantic analysis, as requiring a response, the electronic device 100 may obtain the content by using a matching table or a predetermined rule.
For example, the electronic device 100 may determine an application service that can be used when the user responds to the message according to the keyword by considering the conversation content and data related to the user.
For example, when the user is located in Korea, the electronic apparatus 100 may select a map-related application service provided by a Korean service provider. When the user is located in Japan, the electronic apparatus 100 may select a map-related application service provided by a Japanese service provider.
Also, for example, the electronic device 100 may select a map application service that is frequently used by the user of the electronic device 100 by referring to the application use frequency of the user.
Also, for example, the electronic apparatus 100 may select an application service suitable for the current situation from among application services previously selected by the user.
For example, the electronic device 100 may select a first application service suitable for the current situation of the user from among first to fifth application services previously selected by the user. For example, when there are first to fifth restaurant application services that are pre-selected by the user, the electronic apparatus 100 may select at least one restaurant application service suitable for the current situation of the user from among the first to fifth restaurant application services.
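The service selection described above, based on the user's country and per-application usage frequency, can be sketched as follows. The service names and the candidate table are hypothetical:

```python
# Illustrative sketch: candidate services per country, with the most
# frequently used candidate preferred. All names are assumptions.
SERVICES = {
    "KR": ["KoreaMap", "GlobalMap"],
    "JP": ["JapanMap", "GlobalMap"],
}

def select_service(country, usage_count):
    candidates = SERVICES.get(country, ["GlobalMap"])
    # prefer the candidate the user opens most often
    return max(candidates, key=lambda s: usage_count.get(s, 0))

print(select_service("KR", {"KoreaMap": 12, "GlobalMap": 3}))  # KoreaMap
```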
Also, the electronic device 100 may obtain content by using an internet search service. For example, the electronic device 100 may obtain content by performing various searches based on keywords after determining a search service of interest. For example, an Internet search service may be a search service that is accessible only to authorized personnel, such as college libraries, paper search sites, and research institute databases.
For example, the electronic device 100 may obtain a two-dimensional image corresponding to a keyword through a search service. For example, the electronic device 100 may obtain the contents of the respective areas by inputting a keyword as an input value in a search service.
According to another exemplary embodiment, the electronic device 100 may obtain related content stored in the electronic device 100.
For example, the electronic device 100 may obtain a two-dimensional image, a three-dimensional image, a two-dimensional video, a three-dimensional video, text replies in various languages, or data on the contents of respective fields, stored in the electronic device 100.
[1.4 Contents provision ]
The electronic device 100 may provide at least one piece of the obtained content in operation S190.
The electronic device 100 may provide the user with the content obtained based on the keyword through various methods.
For example, the electronic device 100 may provide the obtained content to the user by using at least one of sound, image, and text. For example, a method by which the electronic device 100 provides the obtained content may vary according to the type of the electronic device 100. For example, the electronic device 100 may display the obtained content through a screen division method or a screen transformation method, or may present the content in sound and images via an avatar. When the electronic device 100 is a wearable device such as a smart watch, the electronic device 100 may display a summary of the content.
[2. scene 1]
Fig. 4 to 7 are diagrams for describing a method of providing contents according to an exemplary embodiment. Fig. 4 to 7 are diagrams of user interfaces according to steps of a scenario in which the electronic device 100 provides a text chat service and provides an image stored in the electronic device 100 to a user as content related to a response.
Referring to fig. 4, Chris, a user of the electronic device 100 is performing a text chat with Hyunjin, a user of the device 200 of another user.
The electronic device 100 displays the name of the user of the electronic device 100 in the user name box 40. According to another exemplary embodiment, the electronic device 100 may display the name of the user of the device 200 of another user in the user name box 40.
The electronic device 100 receives the message 50 "hi" from the device 200 of the other user at 8:26 on August 8, 2013. The electronic device 100 sends a message 51 to the device 200 of the other user and receives a message 52 from the device 200 of the other user. The user of the electronic device 100 then receives a message 53 "Where are you?".
The electronic device 100 may determine whether each of the messages 50, 51, 52, and 53 includes a question. For example, the electronic device 100 may determine that each of the messages 50, 51, and 52 is not a question and that the message 53 is a question.
For example, the electronic device 100 may flag messages determined to be questions to distinguish question messages from other messages. For example, the electronic device 100 may mark messages that the electronic device 100 determines to be questions with a different color (e.g., fluorescent yellow) to distinguish them from other messages.
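The flagging step can be sketched as follows; attaching a color attribute to each question message is an illustrative rendering, not the patented UI:

```python
# Illustrative sketch: attach a highlight color only to messages judged
# to be questions, so the UI can render them distinctly.
HIGHLIGHT = "fluorescent yellow"   # example color from the passage

def mark_questions(messages, is_question):
    return [
        {"text": m, "color": HIGHLIGHT if is_question(m) else None}
        for m in messages
    ]

marked = mark_questions(["hi", "Where are you?"], lambda m: m.endswith("?"))
print([m["color"] for m in marked])  # [None, 'fluorescent yellow']
```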
The electronic device 100 may extract keywords for the message 53. For example, the electronic device 100 may extract "current location" as a keyword for the message 53. Alternatively, for example, the electronic device 100 may extract "where are you?" and transform it into "where am I?" as a key for the message 53. Alternatively, for example, the electronic device 100 may extract "England" or "Seoul" as a keyword for the message 53 by further considering data related to the user.
Referring to fig. 5, the electronic device 100 may display images 71, 72, and 73 associated with the message 53. The electronic device 100 may display an image stored in the electronic device 100 or an image stored in a server connected to the electronic device 100. When the electronic device 100 determines that the message 53 is a question, the electronic device 100 may display the related images 71, 72, and 73 without additional input from the user. Alternatively, when the electronic device 100 determines that the message 53 is a question, the electronic device 100 may display the related images 71, 72, and 73 as contents that can be used at the time of response, by considering the input value of the user in the setting.
Referring to fig. 6, the electronic device 100 may select one of the displayed images by a touch input or a click input of the user.
Referring to fig. 7, the electronic device 100 may send the image 72 selected by the touch or click input of the user as a response to another user's device 200 via a message 54.
[2.1. apparatus and applications enabling the application of the method ]
Fig. 8 is a diagram for describing various electronic devices and various chatting methods to which the method of providing contents according to the exemplary embodiment can be applied.
Referring to fig. 8, a method of providing content according to an exemplary embodiment may be performed via a voice chat or a text chat between the electronic device 100 and the device 200 of another user during a communication process. Also, the method of providing content according to an exemplary embodiment may be performed via a voice chat accompanied by a speech-to-text (STT) function between the electronic device 100 and the device 200 of another user during a communication process. Also, the electronic device 100 may be implemented as a smartphone, a tablet PC, a wearable device, or the like. Also, the method of providing content according to an exemplary embodiment may be applied to any electronic device capable of implementing a message service, even if it is not one of the above types.
[2.2. specific Components of the apparatus ]
Fig. 9a is a diagram for describing a knowledge framework 120 included in the electronic device 100 that is configured to provide content related to a response.
The electronic device 100 may implement the knowledge framework 120, the first application 141, and the second application 143, and may store a network content list 145 and a device list 147.
Knowledge framework 120 may include a natural language processing unit (NLU) 121, a session manager 122, a parser 123, a context analyzer 124, a response generator 125, a content metadata store 126, and an App registrar 127.
NLU 121 may analyze the meaning of each message received by electronic device 100. The NLU 121 can analyze the meaning of each message through a statistical method. The NLU 121 may analyze each message by a statistical method and a semantic method, and may transmit the analyzed message to the context analyzer 124.
The session manager 122 can check conversation partners, conversation dates and times, conversation contents, and conversation environments for messages exchanged by the electronic device 100, and can analyze the exchanged messages in units of groups. The session manager 122 can analyze one or more messages exchanged by the electronic device 100 to define the messages as a conversation.
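The grouping of exchanged messages into conversation units can be sketched as follows; splitting on a time gap between messages is an illustrative assumption (the 10-minute threshold is made up):

```python
# Illustrative sketch of the session manager: group (timestamp, text)
# messages into conversation units, starting a new unit whenever the
# gap since the previous message exceeds a threshold. The threshold
# is an assumption, not the patented rule.
def group_conversations(messages, gap_s=600):
    """messages: list of (timestamp_s, text) sorted by time."""
    groups, last_t = [], None
    for t, text in messages:
        if last_t is None or t - last_t > gap_s:
            groups.append([])          # start a new conversation unit
        groups[-1].append(text)
        last_t = t
    return groups

msgs = [(0, "hi"), (30, "Where are you?"), (5000, "good night")]
print(group_conversations(msgs))
# [['hi', 'Where are you?'], ['good night']]
```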
The parser 123 may collect and store content about the user of the electronic device or users exchanging messages with the user of the electronic device. The parser may maintain, process, and store content written by a user of the electronic device or users exchanging messages with a user of the electronic device.
The context analyzer 124 may analyze and determine the meaning of each message exchanged by the electronic device based on the context of the messages exchanged by the electronic device. The context analyzer 124 may analyze and determine the meaning of each message exchanged by the electronic device in units of one or more messages, the units being defined by the session manager 122.
The response generator 125 may generate a response to the message. The response generator 125 may generate content that may be used in responding to messages. The response generator 125 may generate various possible responses and provide the generated responses to the user via the interface screen.
The content metadata store 126 may include metadata about the content. For example, the content metadata store 126 may include content about applications. For example, the content metadata store 126 may include metadata regarding whether an application is a sports-related application or a movie-related application. Also, for example, the content metadata store 126 may include metadata about images. The content metadata store 126 may include content such as names of people appearing in the images, and relationships between the people and users of the electronic devices.
For example, the content metadata store 126 may continuously collect content on a network content list 145 on the internet. For example, the content metadata store 126 may continuously collect data about the device content list 147 of the electronic device.
The App registrar 127 may include content about various applications. For example, App registrar 127 may include data about applications included in electronic device 100. For example, App registrar 127 may include data regarding applications not included in electronic device 100. App registrar 127 may frequently update data about applications. The response generator 125 can generate a response related to the application by referring to the data about the application of the App registrar 127.
For example, App registrar 127 may store data about at least one of the first application 141 and the second application 143. For example, the first application 141 may be an application installed in the electronic device 100. For example, the second application 143 may be an application that is not installed in the electronic device 100.
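The App registrar's data about applications can be sketched as follows; the entry fields (`installed`, `category`) and application names are hypothetical:

```python
# Illustrative sketch of App registrar entries: whether an application
# is installed and which category it belongs to. Fields are assumptions.
app_registrar = {
    "first_application":  {"installed": True,  "category": "map"},
    "second_application": {"installed": False, "category": "weather"},
}

def apps_for(category):
    """Return registered application names matching a category."""
    return [name for name, meta in app_registrar.items()
            if meta["category"] == category]

print(apps_for("map"))  # ['first_application']
```

The response generator could query such a registry to build application-related responses, whether or not the application is installed on the device.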
Fig. 9b is a diagram for describing an operation of each component of the electronic device 100 according to an exemplary embodiment.
Referring to fig. 9b, the electronic device 100 may include a session manager 122, a parser 123, a context analyzer 124, a content metadata store 126, an App registrar 127, a response recommender 128, and a web search engine 129.
The response recommender 128 may include a natural language processing unit NLU 121 and a response generator 125.
The session manager 122 may transmit the message um[n] to the response recommender 128. The response recommender 128 may receive the message um[n] from the session manager 122, and the natural language processing unit NLU 121 may generate a keyword kw[m] from the message um[n]. For example, the session manager 122 may send a message um[n] "Where are you?" to the response recommender 128, and the natural language processing unit NLU 121 may generate "place" or "information" as the keyword kw[m].
The response recommender 128 may receive the user information usr.info. For example, for a message um[n] "Where are you?", the response recommender 128 may receive user information in an external service account, information about the user's home or company, or information about places frequently visited by the user, as the user information usr.info. When the response recommender 128 generates the keyword kw[m], the response recommender 128 may refer to the user information usr.info.
The response recommender 128 may receive the context information cnt.info. For example, with respect to the message um[n] "Where are you?", the context information cnt.info may include time information, Global Positioning System (GPS) information, weather information, analyzed user activity information, and recent logs.
For example, the user activity information may include information such as the fact that the user has stayed at a restaurant for an hour, or the fact that the user has been running for a minute. For example, the recent log may include networking information such as information on base stations accessible to the electronic device 100. The response recommender 128 may consider the context information cnt.info when generating the recommended response rcm.ans by using the keyword kw[m].
The response recommender 128 may receive the application-related information ap.info by transmitting the keyword kw[m] to the App registrar 127. For example, the application-related information ap.info may include metadata of an application or information about a map-related application.
The response recommender 128 may receive content information cm.info corresponding to the keyword kw[m] from the content metadata store 126. For example, the content information cm.info may include specific information of an image. For example, the specific image information may include the location (latitude and longitude) at which the image was generated, or logo information.
The response recommender 128 may receive the search result value src.rst. For example, the search result value src.rst may include images of Boston, the address of a company, and a map application.
The response recommender 128 may receive at least one of the application-related information ap.info, the content information cm.info, and the search result value src.rst, and may generate a recommended response rcm.ans corresponding to the message um[n] by referring to the user information usr.info and the context information cnt.info.
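The recommender data flow above can be sketched end to end as follows: a keyword kw[m] plus user information (usr.info), context (cnt.info), application information (ap.info), content metadata (cm.info), and search results (src.rst) yield a recommended response rcm.ans. All field names and combination rules here are illustrative assumptions:

```python
# Illustrative sketch of the response recommender combining its inputs
# into candidate responses. Field names and rules are assumptions.
def recommend(kw, usr_info, cnt_info, ap_info, cm_info, src_rst):
    candidates = []
    if kw == "place":
        candidates.append(usr_info.get("company_address"))  # user info
        candidates.extend(src_rst.get("images", []))        # search results
        candidates.extend(ap_info.get("map_apps", []))      # map applications
    return [c for c in candidates if c]

rcm_ans = recommend(
    "place",
    {"company_address": "123 Example St."},
    {"gps": (37.57, 126.98)},          # context: GPS position
    {"map_apps": ["MapApp"]},
    {},                                # content metadata unused here
    {"images": ["boston.jpg"]},
)
print(rcm_ans)  # ['123 Example St.', 'boston.jpg', 'MapApp']
```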
[2.3. environmental settings ]
Fig. 10a to 10d are diagrams of environment setting user interfaces provided by the electronic device 100 according to an exemplary embodiment.
The term "setting" may mean that an operation method of the electronic device 100 is predetermined by a configuration of the user, that is, an input of the user that sets the usage environment of the electronic device 100.
Referring to fig. 10a, the electronic device 100 may not provide the content related to the response, or may provide the content related to the response to the user visually or aurally, according to the setting. When the electronic device 100 provides the content related to the response according to the setting, the electronic device 100 may provide the content automatically, semi-automatically, or manually.
The electronic device 100 automatically providing the content related to the response according to the setting may indicate that, when the electronic device 100 determines that the exchanged message includes a question, the electronic device 100 provides the content related to the response for the message without receiving additional user input.
The electronic device 100 semi-automatically providing the content related to the response according to the setting may indicate that, when the electronic device 100 determines that the exchanged message includes a question, if the electronic device 100 receives a simple input (touch or click input) of the user with respect to the message, the electronic device 100 provides the content related to the response with respect to the message.
For example, when the electronic device 100 semi-automatically provides the content related to the response, the electronic device 100 may display a message corresponding to the question so as to be distinguished from other messages. For example, the electronic device 100 may display a background color of a message corresponding to a question differently from other messages. For example, the electronic device may display a message corresponding to the question by using underlining and/or conspicuous colors so as to be distinguished from other messages.
The user of the electronic device 100 may receive the content by touching or clicking on the message marked using one of the above methods. The electronic device 100 may obtain the content related to the response by receiving a touch or click input of the user. The electronic device 100 may obtain and store content related to the predicted response before the electronic device 100 receives the touch or click input of the user, and may then provide the content related to the response when the touch or click input of the user is received.
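The semi-automatic mode described above, where content for a predicted response is fetched ahead of time and only shown once the user taps the highlighted message, can be sketched as follows (a hypothetical structure, not the patented implementation):

```python
# Illustrative sketch of the semi-automatic mode: pre-fetch content for
# a question message, then serve it only on the user's touch/click.
class SemiAutoProvider:
    def __init__(self):
        self._cache = {}

    def on_question(self, msg_id, fetch):
        # pre-fetch and store before any user input arrives
        self._cache[msg_id] = fetch()

    def on_tap(self, msg_id):
        # serve the cached content on touch or click input
        return self._cache.get(msg_id)

p = SemiAutoProvider()
p.on_question(53, lambda: ["photo1.jpg", "photo2.jpg"])
print(p.on_tap(53))  # ['photo1.jpg', 'photo2.jpg']
```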
When the electronic device 100 is set to manually provide the content related to the response, if the electronic device 100 determines that an exchanged message is a question and receives a user input indicating that the user wants to receive the content related to the response, the electronic device 100 provides the content related to the response for the message.
Referring to fig. 10b, the electronic device 100 may provide the content related to the response by considering the condition related to the user set in the setting.
For example, the electronic device 100 may provide the content related to the response by considering the content related to the first User1 set in the setting. Also, for example, the electronic device 100 may provide content related to the response for messages exchanged by the second User 2. Also, for example, the electronic device 100 may provide contents related to the response only for a message input by the third User 3.
Referring to fig. 10c, the electronic device 100 may provide the content related to the response by considering the time-related condition set in the setting.
For example, when the setting is configured to semi-automatically provide contents related to a response as illustrated in fig. 10a, the electronic device 100 may limit the time period during which a message corresponding to a question is displayed distinctly from other messages. For example, the electronic device 100 may limit this time period to 10 seconds, 20 seconds, 30 seconds, or 1 minute. When the time period expires, the electronic device 100 may no longer display the message corresponding to the question distinctly from other messages.
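The time-limited highlight can be sketched as a simple expiry check; the parameter names are hypothetical:

```python
# Illustrative sketch of the time-limited highlight: a question message
# is displayed distinctly only until the configured period expires.
def is_highlighted(marked_at_s, period_s, now_s):
    """True while the highlight period has not yet expired."""
    return (now_s - marked_at_s) < period_s

print(is_highlighted(0, 10, 5))    # True  (5 s elapsed)
print(is_highlighted(0, 10, 15))   # False (period expired)
```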
Referring to fig. 10d, the electronic device 100 may provide contents related to the response by considering a data condition in the setting.
For example, the electronic device 100 may determine the content of the question by referring only to data stored in the electronic device 100, or by referring only to data stored in the electronic device 100 and the server 300.
For example, the electronic device 100 may provide the content related to the response by referring only to data stored in the electronic device 100, or by referring only to data stored in the electronic device 100 and the server 300.
In addition to the manners illustrated in fig. 10a to 10d, the electronic device 100 may operate in various manners according to the settings. For example, the user of the electronic device 100 may configure the settings so that the content related to the response is provided as text, sound, images, or the like. For example, a user of the electronic device 100 may set a field of interest in which content needs to be provided. For example, the user of the electronic device 100 may set the field of interest to items such as sports, weather, politics, movies, economy, and life.
[2.4. terminal-based service provision ]
Fig. 11a illustrates a method of providing content related to a response via the electronic device 100.
Referring to fig. 11a, the electronic device 100 may receive a message from the device 200 of another user in operation S210. The electronic device 100 may continuously receive messages from the device 200 of another user. The electronic device 100 may receive a message from another user's device 200 via the server 300.
The electronic device 100 may determine whether a message received from the device 200 of another user includes a question in operation S220. The electronic device 100 may divide a sentence used in a message into grammar units and extract relationships between the grammar units. The electronic device 100 may determine whether the received message is a question based on the result of the operation.
The electronic device 100 may extract a keyword from a message received from the device 200 of another user in operation S230. The electronic device 100 may extract keywords from the received message by performing natural language analysis. When the electronic device 100 extracts the keyword, the electronic device 100 may extract the keyword by referring to data input by the user or record data about the user.
The electronic device 100 may obtain content usable in responding to the message based on the keyword in operation S250. The electronic device 100 may obtain content by performing various searches based on the extracted keywords. The content obtained by the electronic device 100 may include two-dimensional images, three-dimensional images, two-dimensional videos, three-dimensional videos, text replies in various languages, contents of various fields, and contents regarding application services providing various services.
The electronic device 100 may provide at least one piece of the obtained content in operation S270. For example, the electronic device 100 may provide the user with at least one piece of the obtained content via at least one of sound, image, and text.
Fig. 11b illustrates a method of providing content related to a response via the electronic device 100.
Referring to fig. 11b, the electronic device 100 may receive a message from the device 200 of another user in operation S310.
The electronic device 100 may determine whether to receive content that can be used when replying to a message received from the device 200 of another user in operation S320. For example, the electronic apparatus 100 may determine whether to receive content usable in replying to a received message based on a user configuration set via a setting.
The electronic device 100 may extract a keyword from a message received from the device 200 of another user in operation S330. The electronic device 100 may extract keywords by performing natural language analysis to determine the meaning of the message.
The electronic device 100 may obtain content usable in responding to the message based on the keyword in operation S350. The electronic device 100 may obtain content related to the keyword from an external search server. The electronic device 100 may obtain the content via an internet search service. The electronic device 100 may obtain the related content stored in the electronic device 100.
The electronic device 100 may provide the list of the obtained contents in operation S370. The electronic device 100 may provide the user with the content obtained based on the keyword via various methods.
Fig. 11c illustrates a method of providing content related to a response via the electronic device 100.
Referring to fig. 11c, the electronic device 100 may receive a message from the device 200 of another user in operation S410. The electronic device 100 may extract one or more keywords from a message received from the device 200 of another user in operation S430.
The electronic device 100 may set a priority order among the one or more keywords based on internal data of the electronic device 100 or input of a user in operation S440. The electronic device 100 may set a priority order among the keywords, and may refer to the priority order for obtaining the content. For example, when the electronic device 100 may obtain content based on a plurality of keywords, the electronic device 100 may obtain content based on a keyword having a high priority order first and may obtain content based on a keyword having a low priority order later. For example, when the electronic device 100 may obtain content based on a plurality of keywords, the electronic device 100 may display the content based on the keywords having a high priority order first and may display the content based on the keywords having a low priority order later.
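The priority-ordered retrieval of operations S440 and S450 can be sketched as follows; assigning each keyword a numeric priority is an illustrative assumption:

```python
# Illustrative sketch of operations S440/S450: keywords are given a
# priority order, and content is obtained (and displayed) in that
# order, highest priority first. Priority values are assumptions.
def obtain_by_priority(keywords, priority, fetch):
    ordered = sorted(keywords, key=lambda k: priority.get(k, 99))
    return [fetch(k) for k in ordered]

results = obtain_by_priority(
    ["weather", "place"],
    {"place": 1, "weather": 2},        # lower number = higher priority
    lambda k: f"content for {k}",
)
print(results)  # ['content for place', 'content for weather']
```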
The electronic device 100 may obtain contents usable in responding to the message based on the keyword in operation S450 and may provide at least one piece of content via one or more methods in operation S470.
Fig. 11d illustrates a method of providing content related to a response via the electronic device 100.
Referring to fig. 11d, the electronic device 100 may receive a message from the device 200 of another user in operation S510.
The electronic device 100 may extract a keyword from a message received from the device 200 of another user in operation S530.
The electronic device 100 may analyze relationship data between the user of the device 200 of another user and the user of the electronic device 100 in operation S540. The relationship data between the user of the device 200 of another user and the user of the electronic device 100 may represent data stored in the electronic device 100 or the server 300.
The electronic device 100 may obtain contents usable in responding to the message based on the analyzed relationship data between the users or the keywords in operation S550. Also, the electronic device 100 may provide at least one piece of content via one or more methods in operation S570.
Fig. 12a is a flowchart for describing a method of providing content related to a response via the electronic device 100.
The electronic device 100 may receive a first message from the device 200 of another user in operation S1005. The electronic device 100 may receive a first message from another user's device 200 via the server 300.
The electronic device 100 may determine whether the first message received from the device 200 of another user includes a question in operation S1010. The electronic device 100 may determine whether the received first message includes a question by performing natural language analysis.
When the received first message includes a question, the electronic device 100 may extract a keyword from the message in operation S1020.
The electronic device 100 may then request the server 300 to obtain content based on the keyword. The server 300 may obtain content usable in responding to the message based on the keyword in operation S1030.
The server 300 may transfer the content obtained based on the keyword to the electronic device 100 in operation S1035. The electronic device 100 may provide the obtained content in operation S1040.
The electronic device 100 may transmit a second message to the device 200 of another user in operation S1045. For example, the electronic device 100 may transmit the second message to the device 200 of the other user by including at least one piece of the obtained content in the second message.
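The device-side flow of fig. 12a can be sketched as follows. The question check, keyword extraction, and server lookup below are simplified stand-ins invented for illustration, not the patent's actual methods:

```python
# Sketch of the Fig. 12a flow on the device side (operations S1010-S1040).
# All three helpers are hypothetical simplifications.

def includes_question(message):
    # Stand-in for the natural language analysis of operation S1010.
    return message.strip().endswith("?")

def extract_keywords(message):
    # Stand-in for keyword extraction (operation S1020).
    return [w.strip("?.,").lower() for w in message.split() if len(w) > 3]

def server_obtain_content(keywords):
    # Stand-in for the server 300 obtaining content (operations S1030/S1035).
    return ["content-for-" + kw for kw in keywords]

def handle_first_message(message):
    """Return content usable in responding, or [] if no question is found."""
    if not includes_question(message):
        return []
    return server_obtain_content(extract_keywords(message))
```

A message without a question short-circuits to an empty result, mirroring the branch at operation S1010.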
Fig. 12b is a diagram of modules included in each of the electronic device 100 and the server 300.
Fig. 12b is an exemplary diagram for describing a method of obtaining content related to a response via the electronic device 100. The particular components included in the electronic device 100 and the particular components included in the server 300 may be adjusted in one or more ways.
Referring to fig. 12b, the electronic device 100 may include a session manager 122, a profiler 123, a context analyzer 124, and a response recommender 128. The response recommender 128 may include a natural language processing unit (NLU) 121 and a response generator 125.
The server 300 may include a content metadata storage 326 and an App registry 327.
The NLU 121 may analyze the meaning of messages received by the electronic device 100. The NLU 121 may analyze a message via a statistical analysis method and a semantic analysis method, and may transmit the analyzed message to the context analyzer 124.
The session manager 122 may analyze messages exchanged by the electronic device 100 by checking conversation partners, the dates and times when conversations occur, conversation contents, and conversation environments. The session manager 122 may group one or more messages exchanged by the electronic device 100 and define the group as a session. For example, the session manager 122 may transmit to the response recommender 128 a message um[n] "Where are you going in the afternoon today?".
The response recommender 128 may receive the message um[n] from the session manager 122, and the NLU 121 may generate keywords kw[m] from the message um[n]. For example, when the session manager 122 transmits to the response recommender 128 the message um[n] "Where are you going in the afternoon today?", the NLU 121 may generate schedule, event, place, destination, or time as the keywords kw[m].
The profiler 123 may continuously collect, process, and store content about the user of the electronic device 100 as well as people related to the user of the electronic device 100. When the keywords kw[m] are generated, the response recommender 128 may receive user information usr.info related to the keywords kw[m] from the profiler 123 and refer to it. For example, for the message um[n] "Where are you going in the afternoon today?", the user information usr.info may include user information from an external service account, and information about the user's home or company or places frequently visited by the user.
The context analyzer 124 may determine the meaning of each message exchanged by the electronic device 100 based on the meaning of the other messages exchanged by the electronic device 100. The context analyzer 124 may determine the meaning of each message in units of one or more messages, the units being defined by the session manager 122. The response recommender 128 may receive the context information cnt.info from the context analyzer 124 and refer to it.
For example, for the message um[n] "Where are you going in the afternoon today?", the context information cnt.info may include time information, GPS information, analyzed user scheduling information, and recent logs. The response recommender 128 may consider the context information cnt.info when it generates a recommended response rcm.ans by using the keywords kw[m].
The electronic device 100 may receive information ap.info related to applications by transmitting the keywords kw[m] to the App registry 327. For example, for the message um[n] "Where are you going in the afternoon today?", the application-related information ap.info may include information related to a scheduler application, a social networking service application, or a chat application. For example, a first application may be a weather-related application installed in the electronic device 100, and a second application may be a news-related application that is not installed in the electronic device 100.
The electronic device 100 may receive the content information cm.info from the content metadata storage 326. For example, the response recommender 128 may receive data regarding weather-related applications and news-related applications from the content metadata storage 326. For example, for the message um[n] "Where are you going in the afternoon today?", the content information cm.info may include image-specific information or scheduling-related information. For example, the image-specific information may include the location (latitude, longitude) where an image was generated.
The response recommender 128 of the electronic device 100 may receive at least one of the application-related information ap.info and the content information cm.info from the server 300, and may refer to the user information usr.info and the context information cnt.info to generate a recommended response rcm.ans corresponding to the message um[n].
The response generator 125 may generate content related to a response to the message, i.e., content that may be used in responding to the message. The response generator 125 may provide the generated content to the user via an interface screen.
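The merging of the fig. 12b signals into a recommendation can be sketched as follows. The scoring rule and all field names are invented for illustration; the patent does not specify how the response recommender 128 weighs keywords kw[m], user info usr.info, context info cnt.info, application info ap.info, and content metadata cm.info:

```python
# Hypothetical merge of the Fig. 12b signals into a recommended response.
# The scoring rule and the dict field names are illustrative assumptions.

def recommend_response(keywords, usr_info, cnt_info, ap_info, cm_info):
    candidates = []
    for item in cm_info:  # entries from the content metadata storage 326
        score = sum(kw in item.get("tags", ()) for kw in keywords)
        if item.get("place") in usr_info.get("frequent_places", []):
            score += 1  # boost content tied to places the user visits often
        if item.get("time") and item["time"] == cnt_info.get("time"):
            score += 1  # boost content matching the current context time
        candidates.append((score, item["name"]))
    candidates.sort(reverse=True)
    apps = [app["name"] for app in ap_info if app.get("installed")]
    return {"content": [name for _, name in candidates], "apps": apps}
```

Under this sketch, a photo tagged with a matching keyword and taken at a frequently visited place outranks untagged content, and installed applications (the "first application" above) are surfaced for direct recommendation.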
[2.5. Server-based service provision ]
Fig. 13a is a flowchart for describing a method of providing content related to a response via the electronic device 100.
The server 300 may receive a first message from the device 200 of another user in operation S2010. The server 300 may transmit a first message received from the device 200 of another user to the electronic device 100 in operation S2020.
The server 300 may determine whether the first message received from the device 200 of another user includes a question in operation S2030.
When the received first message includes a question, the server 300 may extract a keyword from the message in operation S2040.
The server 300 may obtain content usable in responding to the message based on the keyword in operation S2050.
The server 300 may provide the electronic device 100 with content obtained based on the keyword. The electronic device 100 may provide the obtained content in operation S2060.
The electronic device 100 may transmit a second message to the server 300 in operation S2070. For example, the electronic device 100 may transmit the second message to the server 300 by including at least one piece of the obtained content in the second message. The server 300 may transmit a second message to the device 200 of another user in operation S2080.
In fig. 13a, operations S2030, S2040, and S2050 are illustrated as being performed after the first message is transmitted to the electronic device 100 in operation S2020. However, the current embodiment is not limited thereto, and operations S2030, S2040, and S2050 may be performed before the server 300 transmits the first message to the electronic device 100. For example, the first message may be transmitted along with the content obtained in operation S2050.
Alternatively, the device 200 of the other user may directly transmit the first message to the electronic device 100 before, during, or after the device 200 of the other user transmits the first message to the server 300. In this case, the server 300 may omit operation S2020.
According to another exemplary embodiment, the server 300 may provide an online social networking service through a website or mobile application. In this case, the server 300 may display the first message received from the device 200 of another user on the website or the mobile application, instead of directly forwarding the first message to the electronic device 100 (operation S2020). The first message may be one of the comments or comment replies posted by the device 200 of another user and/or other users. When the user of the electronic device 100 activates the comment input area by placing a cursor in the input area or touching the input area with his or her finger, the server 300 may perform operations S2030, S2040, and S2050 using one of the comments posted on the website or the mobile application. For example, the server 300 may perform operations S2030, S2040, and S2050 on the comment that was most recently posted or that is selected by a user input.
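The comment-selection rule just described can be sketched as follows. The comment structure (dicts with "text" and "posted_at") is an invented stand-in:

```python
# Sketch of the comment-selection rule above: when the comment input
# area is activated, the server 300 runs operations S2030-S2050 on
# either a user-selected comment or the most recently posted one.

def select_comment(comments, selected_index=None):
    """Pick the comment on which operations S2030-S2050 are performed."""
    if not comments:
        return None
    if selected_index is not None:
        return comments[selected_index]
    return max(comments, key=lambda c: c["posted_at"])
```

With no explicit selection, the most recent comment wins; a user tap on a specific comment overrides that default.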
Fig. 13b is a diagram of modules included in each of the electronic device 100 and the server 300.
Fig. 13b is an exemplary diagram for describing a method of obtaining content related to a response based on the server 300. The particular components included in the electronic device 100 and the particular components included in the server 300 may be adjusted via one or more methods.
Referring to fig. 13b, the electronic device 100 may include a profiler 123 and a context analyzer 124.
The profiler 123 may continuously collect, process, and store content written by the user of the electronic device 100 or other users. The profiler 123 may transmit the user information usr.info to the server 300 in response to a request of the server 300.
The context analyzer 124 may determine the meaning of each message exchanged by the electronic device 100 based on the meaning of the other messages exchanged by the electronic device 100. The context analyzer 124 may transmit the context information cnt.info to the server 300 in response to a request of the server 300.
The server 300 may include a session manager 322, a content metadata storage 326, an App registry 327, and a response recommender 328. The response recommender 328 may include a natural language processing unit (NLU) 321 and a response generator 325.
The session manager 322 may determine conversation partners, the dates and times when conversations occur, conversation contents, and conversation environments for messages exchanged by the electronic device 100, and may handle the messages exchanged via the server 300 in units of groups. The session manager 322 may collect one or more messages exchanged by the electronic device 100 and define the collected messages as a session. For example, the session manager 322 may transmit a received message um[n] to the response recommender 328.
The response recommender 328 may receive the message um[n] from the session manager 322, and the NLU 321 may generate keywords kw[m] from the message um[n]. For example, when the session manager 322 transmits to the response recommender 328 a message "Is there a hot issue in Boston these days?", the NLU 321 may generate news, issue, topic, and place as the keywords kw[m].
The response recommender 328 may receive the user information usr.info from the profiler 123 of the electronic device 100 and refer to it. For example, the user information usr.info may include the age and preferences of the user, or user information from an external service account.
The response recommender 328 may receive the context information cnt.info from the context analyzer 124 of the electronic device 100. The response recommender 328 may consider the context information cnt.info when it generates a recommended response rcm.ans by using the keywords kw[m]. For example, the context information cnt.info may include time information, analyzed user scheduling information, and recent logs.
The response recommender 328 may receive at least one of the application-related information ap.info and the content information cm.info, and may refer to the user information usr.info and the context information cnt.info received from the electronic device 100 to generate a recommended response rcm.ans corresponding to the message um[n]. For example, the application-related information ap.info may include information about applications that provide news, magazines, or articles. For example, the content information cm.info may include articles and images related to Boston.
The response generator 325 may generate content related to a response to the message, i.e., content that may be used in responding to the message. The response generator 325 may transmit the generated content to the electronic device 100.
[2.6. service provision based on terminal and server ]
Fig. 14a is a flowchart for describing a method of providing content related to a response via the electronic device 100.
The server 300 may receive a first message from the device 200 of another user in operation S3010. The server 300 may transmit a first message received from the device 200 of another user to the electronic device 100 in operation S3020.
The server 300 may determine whether the first message received from the device 200 of another user includes a question in operation S3030. The server 300 may determine whether the first message includes a question either after or before transmitting the first message to the electronic device 100.
When the received first message includes a question, the server 300 may extract a keyword from the message in operation S3040. The server 300 may transmit the keyword to the electronic device 100 in operation S3045.
The electronic device 100 may obtain contents usable in responding to the message based on the keyword in operation S3050. The electronic device 100 may display content obtained based on the keyword.
The electronic device 100 may transmit a second message to the server 300 in operation S3060. For example, the electronic device 100 may transmit the second message to the server 300 by including at least one piece of the obtained content in the second message. The server 300 may transmit a second message to the device 200 of another user in operation S3070.
The server 300 may perform operations S3030 and S3040 before transmitting the first message, and may transmit the first message and the keyword to the electronic device 100 substantially simultaneously. Alternatively, the device 200 of the other user may directly transmit the first message to the electronic device 100 before, during, or after the device 200 of the other user transmits the first message to the server 300. In this case, the server 300 may omit operation S3020.
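The division of labor in fig. 14a can be sketched as follows: the server checks for a question and extracts keywords, while the device obtains content from its own storage. Both sides are simplified stand-ins invented for illustration:

```python
# Sketch of the Fig. 14a split: the server 300 handles operations
# S3030/S3040, and the electronic device 100 handles operation S3050.
# The extraction heuristic and local index are hypothetical.

def server_side(message):
    """Return (message, keywords) so both can be transmitted together."""
    if not message.endswith("?"):
        return message, []
    keywords = [w.strip("?").lower() for w in message.split() if len(w) > 3]
    return message, keywords

def device_side(keywords, local_index):
    """Look received keywords up in on-device content (operation S3050)."""
    return [local_index[kw] for kw in keywords if kw in local_index]
```

Transmitting the message and keywords together matches the "substantially simultaneously" variant described above.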
Fig. 14b is a diagram of modules included in each of the electronic device 100 and the server 300.
Fig. 14b is an exemplary diagram for describing a method of obtaining content related to a response based on the server 300. The particular components included in the electronic device 100 and the particular components included in the server 300 may be adjusted via one or more means.
Referring to fig. 14b, the electronic device 100 may include a first profiler 123, a content metadata storage 126, and an App registry 127.
The first profiler 123 may collect and store data about the user of the electronic device 100 or other users. Data about a user may include the user's name, occupation, phone number, areas of interest, friendships, and phone records. The first profiler 123 may also collect, process, and store content written by the user of the electronic device 100, such as messages, notes, replies, and comments written by the user.
The content metadata storage 126 may include various types of data about content. For example, the content metadata storage 126 may include data regarding the types of applications. Also, for example, the content metadata storage 126 may include metadata such as the date on which an image was generated and the location of the image generation.
Also, for example, the content metadata storage 126 may continuously collect and store data about various types of web content on the Internet, and data about various types of content in the electronic device 100.
The App registry 127 may include data about various applications. For example, the App registry 127 may store data regarding at least one of a first application installed in the electronic device 100 and a second application not installed in the electronic device 100.
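A registry that distinguishes installed (first) from not-installed (second) applications can be sketched as below. The class, field names, and the installed-first ordering are invented for illustration:

```python
# Hypothetical App registry per the description above: per-application
# records noting a category and whether the app is installed on the
# electronic device 100. Installed apps are listed first so they can be
# recommended directly, before suggesting an install.

class AppRegistry:
    def __init__(self):
        self._apps = {}

    def register(self, name, category, installed):
        self._apps[name] = {"category": category, "installed": installed}

    def by_category(self, category):
        """Return app names for a keyword-derived category, installed first."""
        hits = [(not info["installed"], name)
                for name, info in self._apps.items()
                if info["category"] == category]
        return [name for _, name in sorted(hits)]
```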
The server 300 may include an NLU 321, a session manager 322, a second profiler 323, a context analyzer 324, and a response generator 325.
The session manager 322 may analyze messages exchanged via the server 300 in units of groups by checking conversation partners, conversation dates and times, conversation contents, and conversation environments. The session manager 322 may collect one or more messages exchanged via the server 300 and define the collected messages as a session.
The NLU 321 may analyze the meaning of the first message received by the server 300. The NLU 321 may analyze the meaning of the first message via a statistical analysis method and a semantic analysis method, and may transfer the analyzed meaning to the context analyzer 324.
The context analyzer 324 may determine the meaning of each message exchanged by the electronic device 100 based on the meaning of the other messages exchanged by the electronic device 100. The context analyzer 324 may determine the meaning of each message in units of one or more messages, the units being defined by the session manager 322.
The response recommender 328 may receive at least one of the user information usr.info[p] and the context information cnt.info, and may refer to the user information usr.info[p], the application-related information ap.info, and the content information cm.info received from the electronic device 100 to generate a recommended response rcm.ans corresponding to the message um[n]. The server 300 may transmit the recommended response rcm.ans to the electronic device 100.
[2.7. analysis of the relationship of conversation participants ]
Fig. 15 is a diagram for describing a method of utilizing information stored in the database 370 of the server 300 in order to calculate the relationship between users.
Referring to fig. 15, the server 300 may collect, manage and analyze information of users via the database 370 and the database manager 360.
The database 370 may include data for individual users. For example, the database 370 may store information about the user of the electronic device 100. For example, the database 370 may include data regarding the privacy level between the user of the device 200 of another user and the user of the electronic device 100.
The database manager 360 may manage the database 370. The database manager 360 may manage various types of data recorded in the database 370, and may provide the data to the electronic device 100 in response to a request of the electronic device 100.
The database manager 360 may manage the records written in the database 370 for each user. Also, the database manager 360 may manage each set of records written in the database 370. The database manager 360 may transmit the requested data to the electronic device 100.
The electronic device 100 may edit and manage information stored in the server 300 via the relationship detector 140, the privacy analyzer 150, and the friend list database 190.
The relationship detector 140 may determine a relationship between the user of the device 200 of another user and the user of the electronic device 100. The relationship detector 140 may receive information about the user of the device 200 of another user and the user of the electronic device 100 by requesting the information from the database manager 360 included in the server 300, and may determine the relationship between the users. The relationship detector 140 may determine the relationship between the users by analyzing previous call records and conversation contents between the users.
The privacy analyzer 150 may analyze a privacy level between the user of the device 200 of another user and the user of the electronic device 100. For example, the privacy analyzer 150 may calculate the privacy level as quantized data (e.g., data such as 1, 2, 3, and 10, or data such as a first level, a second level, and a third level). For example, the privacy analyzer 150 may determine to which of one or more predetermined categories the privacy level belongs. For example, the privacy analyzer 150 may select the privacy level from among categories such as "family," "friend," "colleague," or "classmate."
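The two output forms of the privacy analyzer 150 described above, a quantized numeric level and a predetermined category, can be sketched as follows. The numeric values assigned to each category are invented for illustration:

```python
# Hypothetical mapping between relationship categories and quantized
# privacy levels for the privacy analyzer 150. The values are assumptions.

CATEGORY_LEVELS = {"family": 9, "friend": 7, "colleague": 4, "classmate": 5}

def privacy_level(category):
    """Quantize a relationship category to a numeric privacy level."""
    return CATEGORY_LEVELS.get(category, 1)

def categorize(level):
    """Map a numeric privacy level to the closest predetermined category."""
    return min(CATEGORY_LEVELS, key=lambda c: abs(CATEGORY_LEVELS[c] - level))
```

Either representation can drive later decisions, such as withholding private content below a threshold level.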
The friend list database 190 may store data that the relationship detector 140 and the privacy analyzer 150 analyze based on the relationship between the user of the device 200 of another user and the user of the electronic device 100. Here, the information included in the friend list database 190 may correspond not only to information about friends (friends in a social context) of the user of the electronic device 100 but also to information about all persons accessible to the electronic device 100, such as family, alumni, colleagues, and relatives.
For example, the friend list database 190 may store information indicating that the user of the device 200 of another user is a friend of the user of the electronic device 100 in a social network service (SNS). For example, the friend list database 190 may store information indicating that the user of the device 200 of another user is a family member or relative of the user of the electronic device 100. For example, the friend list database 190 may store information indicating that the user of the device 200 of another user and the user of the electronic device 100 are an employee and an employer of the same company. For example, the friend list database 190 may store information indicating that the user of the device 200 of another user is an alumnus of the same school as the user of the electronic device 100.
Fig. 16 is a diagram for describing a method of utilizing information stored in the database 170 of the electronic device 100 in order to determine the relationship between users.
Referring to fig. 16, the electronic device 100 may collect, manage, and analyze information about a user via the database 170, the database analyzer 160, the relationship detector 140, the privacy analyzer 150, and the friend list database 190.
The database 170 may include data corresponding to the users of the electronic device 100 and phone records between the respective users. For example, the database 170 may store information about phone records between the user of the electronic device 100 and other users in real time. For example, a phone record may include the time of the call, the place of the call, the start time of the call, the end time of the call, and the method of the call (voice or message).
Also, the database 170 may include data written in an address book or contact list of the user of the electronic device 100. For example, the database 170 may include information such as company names, jobs, and family relationships of friends.
The database analyzer 160 may analyze and manage data, such as phone records or address lists, included in the database 170. The database analyzer 160 may analyze various types of data written in the database 170 and may provide an analysis result of the data in response to a request of the relationship detector 140 or the privacy analyzer 150.
The database analyzer 160 may manage the records written in the database 170 for each individual. Also, the database analyzer 160 may manage the records written in the database 170 for each group. The database analyzer 160 may transmit requested data to the relationship detector 140 or the privacy analyzer 150.
The relationship detector 140 may determine a relationship between the user of the device 200 of another user and the user of the electronic device 100. The relationship detector 140 may determine the relationship by requesting the database analyzer 160. The relationship detector 140 may determine the relationship by analyzing previous call records between the user of the device 200 of the other user and the user of the electronic device 100.
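One invented heuristic for inferring a relationship from previous call records, in the spirit of the relationship detector 140, is sketched below. The thresholds and the business-hours rule are illustrative assumptions, not the patent's method:

```python
# Hypothetical call-record heuristic for the relationship detector 140.
# Record structure, thresholds, and labels are invented for this sketch.

def detect_relationship(call_records):
    """Guess a relationship from call frequency and typical call hours.

    Each record is a dict with an integer `hour` field (0-23).
    """
    if not call_records:
        return "unknown"
    business = sum(9 <= r["hour"] < 18 for r in call_records)
    if business / len(call_records) > 0.8:
        return "colleague"
    return "friend" if len(call_records) >= 5 else "acquaintance"
```

Partners called mostly during business hours lean "colleague"; frequent off-hours calls lean "friend."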
The privacy analyzer 150 may analyze a privacy level between the user of the device 200 of another user and the user of the electronic device 100. For example, the privacy analyzer 150 may calculate the privacy level as quantized data (e.g., data such as 1, 2, 3, and 10, or data such as a first level, a second level, and a third level). For example, the privacy analyzer 150 may select the privacy level from among categories such as "family," "friend," "colleague," or "classmate."
The friend list database 190 may store the data that the relationship detector 140 and the privacy analyzer 150 analyze based on the relationship between the user of the device 200 of another user and the user of the electronic device 100. The friend list database 190 of fig. 16 is similar to the friend list database 190 of fig. 15.
Fig. 17 is a diagram for describing a method of using contents stored in the database 170 of the electronic device 100 and information stored in the database 370 of the server 300 in order to determine a relationship between the user of the electronic device 100 and the user of the device 200 of another user.
Referring to fig. 17, the electronic device 100 may collect, manage, and analyze information related to a user via the database 170, the database analyzer 160, the relationship detector 140, the privacy analyzer 150, and the friend list database 190. The database 170 and database analyzer 160 of fig. 17 are similar to the database 170 and database analyzer 160 of fig. 16. Hereinafter, a repetitive description will be omitted.
Server 300 may include a database 370 and a database manager 360. Database 370 and database manager 360 of FIG. 17 are similar to database 370 and database manager 360 of FIG. 15. Hereinafter, a repetitive description will be omitted.
The relationship detector 140 may determine a relationship between the user of the device 200 of another user and the user of the electronic device 100. The relationship detector 140 may determine the relationship by requesting at least one of the database analyzer 160 and the database manager 360. The relationship detector 140 may determine the relationship by analyzing previous call records between the users, or by analyzing the contents of conversations between the user of the device 200 of the other user and the user of the electronic device 100.
The privacy analyzer 150 may analyze a privacy level between a user of the device 200 of another user and a user of the electronic device 100. The privacy analyzer 150 may analyze a privacy level between the user of the device 200 of the other user and the user of the electronic device 100 by requesting at least one of the database analyzer 160 and the database manager 360.
For example, the privacy analyzer 150 may calculate the privacy level between the user of the device 200 of the other user and the user of the electronic device 100 as quantized data (e.g., data such as 1, 2, 3, and 10, or data such as a first level, a second level, and a third level). For example, the privacy analyzer 150 may determine to which of one or more predetermined categories the privacy level belongs. For example, the privacy analyzer 150 may select the privacy level from among categories such as "family," "friend," "colleague," or "classmate."
The friend list database 190 may store the data analyzed by the relationship detector 140 and the privacy analyzer 150 based on the relationship between the user of the device 200 of another user and the user of the electronic device 100.
Also, according to another exemplary embodiment, at least one of the relationship detector 140, the privacy analyzer 150, and the friend list database 190 may be included in the server 300.
[3. scene 2]
[3.1. identification obtained for content ]
Fig. 18, 19a to 19d, 20a to 20d, 21, 22a, 22b, 23a to 23c, and 24a to 24e are diagrams for describing a method of providing contents according to an exemplary embodiment. Fig. 18, 19a to 19d, 20a to 20d, 21, 22a, 22b, 23a to 23c, and 24a to 24e are diagrams of a user interface according to steps of a scenario in which the electronic device 100 provides a text chat service and directly provides an image and a recommended application to a user as content related to a response.
Referring to fig. 18, Chris, the user of the electronic device 100, is having a text chat with Hyunjin, the user of the device 200 of another user.
The electronic device 100 displays the name of the user of the electronic device 100 via the user name box 40. According to another exemplary embodiment, the electronic device 100 may display the name of the user of the device 200 of another user in the user name box 40.
The electronic device 100 receives the message 50 "hi" from the device 200 of the other user at 8:26 on August 26, 2013. The electronic device 100 sends a message 51 to the device 200 of the other user and receives a message 52 from the device 200 of the other user. The user of the electronic device 100 then receives a message 53 "Where are you?".
The electronic device 100 may determine whether each of the messages 50, 51, 52, and 53 includes a question. For example, electronic device 100 may determine that messages 50, 51, and 52 do not include a question, and message 53 includes a question.
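A minimal, invented stand-in for this per-message question check is sketched below; a real implementation would use the natural language analysis described earlier:

```python
# Hypothetical heuristic marking which of messages 50-53 look like
# questions. The word list and rules are illustrative assumptions.

QUESTION_WORDS = ("where", "when", "what", "who", "why", "how")

def mark_questions(messages):
    """Return indices of messages that look like questions, so the UI
    can underline or shade them as in figs. 19a to 19c."""
    def looks_like_question(text):
        text = text.strip().lower()
        return bool(text) and (text.endswith("?") or text.split()[0] in QUESTION_WORDS)
    return [i for i, m in enumerate(messages) if looks_like_question(m)]
```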
For example, the electronic device 100 may mark a message determined to include a question such that it is distinguished from other messages, as illustrated in figs. 19a to 19c. For example, the electronic device 100 may mark the message corresponding to the question by using an underline, as illustrated in fig. 19a. For example, the electronic device 100 may mark the message corresponding to the question by shading the text, as illustrated in fig. 19b. For example, the electronic device 100 may mark the message corresponding to the question by shading the speech bubble, as illustrated in fig. 19c.
When the electronic device 100 receives a user input corresponding to a portion marked so as to be distinguished from other messages, the electronic device 100 may display content usable in responding to the message.
For example, when the message 53 is determined to include a question, the electronic device 100 may receive, via the pop-up window 60, a user input indicating whether a recommended response is wanted, as illustrated in fig. 19d.
[3.2. Suggestion of contents]
Referring to fig. 20a to 20d, the electronic device 100 may display contents usable in answering the message 53 in response to a user input.
Referring to fig. 20a, when the electronic device 100 receives a user input (e.g., a touch) corresponding to the mark illustrated in fig. 19c, the electronic device 100 may recommend an application related to the message corresponding to the question. For example, when the message corresponding to the question relates to a place, the electronic device 100 may recommend a map application or a navigation application.
Referring to fig. 20b, when the electronic device 100 receives a user input (e.g., a user input of touching a button) corresponding to the mark illustrated in fig. 19d, the electronic device 100 may recommend an application related to the message corresponding to the question. For example, when the message corresponding to the question relates to a place, the electronic device 100 may recommend a map application or a navigation application.
Referring to fig. 20c, when the electronic device 100 receives a user input (e.g., a user input of touching a button) corresponding to the mark illustrated in fig. 19d, the electronic device 100 may recommend at least one of an application and an image related to the message corresponding to the question.
The electronic device 100 may recommend at least one of an application and an image usable in responding to the message by considering the relationship between the user of the electronic device 100 and the user of the device 200 of another user, according to the exemplary method described with reference to fig. 15 to 17.
For example, when the electronic device 100 determines that the user of the electronic device 100 and the user of the device 200 of another user have an employer-employee relationship, the electronic device 100 may recommend at least one of an application and an image, excluding private photographs of the user of the electronic device 100. For example, if the user of the electronic device 100 is currently on vacation in London, the electronic device 100 may not recommend a photograph taken on the vacation as a response, considering that the relationship between the conversation participants is business-related. The electronic device 100 may classify the vacation photographs as private data and store category information associated with them. When the privacy level of the user of the other device 200 (e.g., level 2) is lower than a predetermined level (e.g., level 7), the electronic device 100 may not suggest any data classified as private data.
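The privacy-level check above can be sketched as a simple filter; the dictionary layout, names, and level values are assumptions for illustration only:

```python
# Hypothetical sketch of the privacy-level filter described above: content
# classified as private is not suggested when the conversation partner's
# privacy level is below the predetermined level.

PREDETERMINED_LEVEL = 7  # the example threshold mentioned above

def may_suggest(content: dict, partner_privacy_level: int) -> bool:
    """Return True if the content may be suggested for this partner."""
    if content["category"] == "private":
        return partner_privacy_level >= PREDETERMINED_LEVEL
    return True

vacation_photo = {"name": "london_vacation.jpg", "category": "private"}
map_screen = {"name": "current_location_map", "category": "public"}

print(may_suggest(vacation_photo, 2))  # False: an employer at level 2 is below 7
print(may_suggest(map_screen, 2))      # True: non-private data is unaffected
print(may_suggest(vacation_photo, 8))  # True: e.g., a family member at level 8
```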
Referring to fig. 20d, when the electronic device 100 receives a user input (e.g., a user input of touching a button) corresponding to the mark illustrated in fig. 19d, the electronic device 100 may recommend at least one of an application, a direct answer, and an image related to the message corresponding to the question.
The electronic device 100 may recommend at least one of an application, a direct answer, and an image usable in responding to the message by considering the relationship between the user of the electronic device 100 and the user of the device 200 of another user, according to the exemplary method described with reference to fig. 15 to 17. In this specification, a direct answer means a sentence, phrase, word, or the like that can serve as a response to the message without an additional search.
For example, when the electronic device 100 determines that the user of the electronic device 100 and the user of the device 200 of another user have a relationship with a high privacy level (such as family or friends), the electronic device 100 may provide contents usable in responding to the message, including private photographs of the user of the electronic device 100. For example, if the user of the electronic device 100 is currently on vacation in London, the electronic device 100 may recommend a photograph taken on the vacation as a response, considering that the conversation participants have a private relationship.
[3.3. Specific response conditions]
Fig. 21, 22a, 22b, 23a to 23c, and 24a to 24e are diagrams illustrating a process in which the user selects content related to a response suggested by the electronic device 100 and provides the content to the device 200 of another user.
Referring to fig. 21, the user of the electronic device 100 may select the map application from among the contents related to the recommended response illustrated in fig. 20d.
Referring to fig. 22a, the electronic device 100 may ask the user, via the pop-up window 70, to confirm whether to edit the content. For example, when the user wants to use the content provided by the application without additional editing, the user may select the response indicating no content editing.
Referring to fig. 22b, when the electronic device 100 confirms that the user does not want to edit the selected content, the electronic device 100 may provide a screen indicating the user's current location in the map application selected in fig. 21 to the device 200 of another user via a message, without additional editing of the screen.
Referring to fig. 23a, the electronic device 100 may ask the user, via the pop-up window 70, to confirm whether to edit the content. For example, when the user wants to respond by editing the content provided by the application, the user may select the response indicating content editing.
Referring to fig. 23b, when the electronic device 100 confirms that the user wants to edit the selected content, the electronic device 100 may search for the user's current location in the map application selected in fig. 21 so that the user can edit the screen indicating the current location, and may display a screen corresponding to the search result.
Referring to fig. 23c, the electronic device 100 may transmit the screen resulting from the user's editing of the search result provided by the map application to the device 200 of another user via a message.
[3.4. Feedback situation]
Fig. 24a to 24e are views of a user interface via which user feedback on the provision of contents usable in a response is received, according to an exemplary embodiment.
Referring to fig. 24a, the electronic device 100 may ask the user, via the pop-up window 80, whether the user is satisfied with the provided contents usable in the response.
Referring to fig. 24b to 24d, when the user has answered via the pop-up window 80 that the user is not satisfied with the provided contents, the electronic device 100 may provide the pop-up windows 81, 82, 83, 84, and 85, via which the user may select categories to add or exclude for later provision of contents usable in a response.
Referring to fig. 24e, when the user has answered via the pop-up window 80 that the user is not satisfied with the provided contents, the electronic device 100 may provide the pop-up windows 86 and 87, via which the user may select the environment settings to adjust for later provision of contents usable in a response.
According to another exemplary embodiment, the electronic device 100 may determine the category preferences of the user without using the survey pop-up windows 81, 82, 83, 84, and 85. To this end, the electronic device 100 may automatically store the category information of the content selected by the user each time a selection is made, and count the number of times each category is selected. The recommended responses may then be displayed in order of category preference.
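The implicit-feedback mechanism above amounts to a per-category selection counter; a minimal sketch, with class and category names assumed for illustration:

```python
from collections import Counter

# Hypothetical sketch of the mechanism described above: each time the user
# selects suggested content, its category is recorded, and recommended
# responses are later ordered by category preference.

class CategoryPreferences:
    def __init__(self) -> None:
        self.counts = Counter()

    def record_selection(self, category: str) -> None:
        """Count one selection of the given category."""
        self.counts[category] += 1

    def rank(self, categories: list) -> list:
        """Order categories by how often the user has selected them."""
        return sorted(categories, key=lambda c: -self.counts[c])

prefs = CategoryPreferences()
for chosen in ["image", "image", "application", "image", "direct_answer"]:
    prefs.record_selection(chosen)

print(prefs.rank(["application", "direct_answer", "image"]))
# ['image', 'application', 'direct_answer']
```

Because Python's sort is stable, categories the user has never selected keep their original relative order.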
[3.5. Layout of suggested responses]
Fig. 25a to 25e are diagrams of content providing screens for providing contents usable in a response, according to an exemplary embodiment.
Referring to fig. 25a, the electronic device 100 may provide contents through a screen division method. The electronic device 100 may display the recommended content lists 90 and 91 and the recommended application list 92. For example, when the user touches the recommended content list 90 on the first screen #1, the electronic device 100 transitions to the second screen #2 to transmit the recommended content list 90 to the device of another user. For example, the electronic device 100 may transmit the recommended content list 90 to the device of another user via a message.
Referring to fig. 25b, the electronic device 100 may provide contents through a screen transition method. The electronic device 100 may display the summarized answer-related content 92 on a third screen #3 and may display the specific answer-related contents 93, 94, and 95 on a fourth screen #4. When the user touches or clicks the third screen #3, the electronic device 100 may display the fourth screen #4.
Referring to fig. 25c, the electronic device 100 may directly display an image, an application, or a possible answer. The electronic device 100 may display contents usable in the response by one or more methods and arrangements.
Referring to fig. 25d, the electronic device 100 may provide contents usable in a response via the voice of the avatar.
Referring to fig. 25e, when the electronic device 100 is a smart watch, the electronic device 100 may provide recommended contents via the summary screen 97. When the user touches or clicks the move button 98, the electronic device 100 may display another summary screen.
Fig. 26 is a diagram of the electronic device 100 providing an interface via which the content providing layout is changed.
Referring to fig. 26, the electronic device 100 may receive user feedback on whether to change the content providing layout via the pop-up window 99, and may set each content providing layout according to the user's selection.
[4. Scenario 3]
[4.1. Identification for obtaining contents]
Fig. 27 to 37 are diagrams for describing a method of providing contents according to an exemplary embodiment. Fig. 27 to 37 are diagrams of user interfaces according to operations of a scenario in which the electronic device 100 provides a voice chat service and provides an image, a recommended application, and a direct answer to the user as contents related to a response.
Referring to fig. 27, Chris, the user of the electronic device 100, is performing a voice chat with Hyunjin, the user of the device 200 of another user. The electronic device 100 and the device 200 of another user may divide the conversation into units of messages.
Referring to fig. 28, the electronic device 100 may receive each voice message and perform speech-to-text (STT) conversion on the message in the illustrated order.
The electronic device 100 may identify the language of each voice message in operation S4010. For example, the electronic device 100 may identify whether each voice message is in English or Korean.
The electronic device 100 may convert each voice message into text of the identified language in operation S4030. For example, when the electronic device 100 determines that a first voice message is in English, the electronic device 100 may convert the first voice message into English text.
The electronic device 100 may display the converted text in operation S4050. For example, the electronic device 100 may convert the first voice message into English text and display the English text via the display unit (or the output interface).
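Operations S4010 to S4050 can be sketched as a three-step pipeline; the two recognizer functions below are stand-ins (a real device would call a spoken-language-identification model and an STT engine for the identified language):

```python
# Hypothetical sketch of operations S4010 to S4050 described above; the
# recognizer functions are placeholders, not real STT implementations.

def identify_language(voice_message: bytes) -> str:
    # S4010 stand-in: always reports English for this illustration.
    return "en"

def speech_to_text(voice_message: bytes, language: str) -> str:
    # S4030 stand-in: returns a fixed transcript for this illustration.
    return "Where are you?" if language == "en" else ""

def handle_voice_message(voice_message: bytes) -> str:
    language = identify_language(voice_message)     # S4010: identify language
    text = speech_to_text(voice_message, language)  # S4030: convert to text
    print(text)                                     # S4050: display the text
    return text

handle_voice_message(b"\x00\x01")  # prints "Where are you?"
```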
Referring to fig. 29, the electronic device of Chris receives a message 55 "Where is he?". The electronic device 100 may determine whether each message includes a question. For example, the electronic device 100 may determine whether the message 55 includes a question. The electronic device 100 may also determine to whom "he" in the message 55 "Where is he?" refers.
[4.2. Receipt of user input]
Referring to fig. 30, when the electronic device 100 determines that a message includes a question, the electronic device 100 may receive a user input indicating whether the user wants a recommended response.
For example, when the electronic device 100 determines that the message 55 "Where is he?" includes a question, the electronic device 100 may check whether the user wants a recommended response via the pop-up window 63, as illustrated in fig. 30. When the user touches the "Yes" button of the pop-up window 63, indicating that the user wants to receive contents usable in the response, the electronic device 100 may display various types of obtained contents. Alternatively, the electronic device 100 may omit displaying the pop-up window 63 according to user settings, and may provide a recommended response as soon as the message 55 is determined to be an inquiry message.
[4.3. Specific suggestion conditions]
Referring to fig. 31, the electronic device 100 may provide the images 74_a, 74_b, and 74_c, the applications 75_a, 75_b, and 75_c, and a possible direct answer 76.
Referring to fig. 32, the user may select the image 74_b from among the contents usable in the response. The electronic device 100 may receive the user input. For example, the user input may be a touch input selecting the image 74_b.
Referring to fig. 33, the electronic device 100 may confirm whether to edit the selected content via the pop-up window 64. The user may touch the "No" button of the pop-up window 64, indicating that the user wants to use the selected content as a response without editing it.
Referring to fig. 34, when the user confirms via the pop-up window 64 that the user wants to use the selected content without editing it, the electronic device 100 may transmit the image 74_b to the conversation partner via the message 56.
Referring to fig. 35, the electronic device 100 may confirm whether to edit the selected content via the pop-up window 64. The user may touch the "Yes" button of the pop-up window 64, indicating that the user wants to use the selected content as a response after editing it.
Referring to fig. 36, when the user touches the "Yes" button of the pop-up window 64, indicating that the user wants to use the selected content as a response after editing it, the electronic device 100 may provide the user with the image editing environment 65. The user may edit the image 74_b via the image editing environment 65.
Referring to fig. 37, when the user finishes editing the image 74_b, the electronic device 100 may provide the image 74_b to the user of the electronic device, the user of the device of another user, or a third user via a mail service or a social network service, according to a selection of the user of the electronic device 100. Alternatively, the electronic device 100 may provide the image 74_b to the user of the electronic device, the user of the device of another user, or a third user regardless of the user's selection.
[5. Scenario 4]
[5.1. Identification for obtaining contents]
Fig. 38, 42, 43a, 43b, 44, 45, 46a, 46b, 47, 48, 50, 51, 52, 53(a) to 53(c), 53(d1), 53(e), 54a to 54d, 55a to 55d, and 56 are diagrams for describing a method of providing contents according to an exemplary embodiment. Fig. 38, 42, 43a, 43b, 44, 45, 46a, 46b, 47, 48, 50, 51, 52, 53, 54a to 54d, 55a to 55d, and 56 are diagrams of user interfaces according to operations of a scenario in which the electronic device 100 provides a video call service and provides at least one of an image, a recommended application, and a possible direct answer to the user as contents related to a response.
Referring to fig. 38, Sunny, the user of the electronic device 100, is performing a video call with John, the user of the device of another user. The electronic device 100 may display the appearances of the user of the electronic device and the user of the device of another user in real time. The electronic device may display the content of the conversation between the two users in real time. For example, the electronic device 100 may display the content of the conversation between the users of the electronic device 100 and the device 200 of another user in speech-bubble form via the STT function.
The electronic device receives a message 57 "Where are you?". The electronic device 100 and the device 200 of another user may display the exchanged messages via the STT function. The electronic device 100 continuously transmits messages to, and receives messages from, the device 200 of another user. The electronic device 100 may determine whether the continuously received messages include questions.
Referring to fig. 39, when the electronic device 100 determines that the message 57 includes a question, the electronic device 100 may check, via the pop-up window 65, whether it is necessary to provide the user with contents usable in answering the message 57.
[5.2. Suggestion of contents]
Referring to fig. 40 to 42, the electronic device 100 may display contents usable in responding to the message 57, in response to a user input.
Referring to fig. 40, when the electronic device 100 receives the message 57 illustrated in fig. 38, the electronic device 100 may recommend an application related to the message 57. For example, when the content of the message 57 is related to a place, the electronic device 100 may recommend a map application or a navigation application.
Referring to fig. 41, when the electronic device 100 receives a user input (e.g., a user input of touching a button) corresponding to the mark illustrated in fig. 39, the electronic device 100 may recommend an application related to the message 57. For example, when the message corresponding to the question relates to a place, the electronic device 100 may recommend a map application or a navigation application.
Referring to fig. 42, when the electronic device 100 receives a user input (e.g., a user input of touching a button) corresponding to the mark illustrated in fig. 39, the electronic device 100 may recommend at least one of an image, an application, and a direct answer related to the message 57.
[5.3. Specific suggestion condition (1)]
Referring to fig. 43a, when the user requests contents usable in a response, the electronic device 100 may provide images to the user.
Referring to fig. 43b, when the user wants to receive contents usable in responding to the message, the electronic device 100 may provide images to the user without additionally confirming whether to provide contents usable in the response.
Referring to fig. 44, the user may select the image 77_a from among the contents usable in the response. The electronic device 100 may receive the user input selecting the content. For example, the user input may be a touch input selecting the image 77_a.
The electronic device 100 may provide the user with a response editing window. The user may edit the selected image 77_b via the response editing window.
Referring to fig. 45, when the user touches the edited image 77_b, the edited image 77_b may be provided to the conversation partner. For example, when the user touches the edited image 77_b, the electronic device 100 may transmit it to the conversation partner. The electronic device 100 may transmit the edited image 77_b via the portion of the screen displaying the user's appearance.
When the user finishes editing the selected image, the electronic device may provide the image 77_b to the user of the electronic device 100, the user of the device 200 of another user, or a third user via a mail service or an SNS, according to the user's selection. Alternatively, the electronic device 100 may provide the image 77_b to the user of the electronic device 100, the user of the device 200 of another user, or a third user via one or more services regardless of the user's selection.
[5.4. Specific suggestion condition (2)]
Referring to fig. 46a, when the user requests contents usable in a response, the electronic device 100 may provide images, applications, and possible direct answers.
Referring to fig. 46b, the electronic device may provide images, applications, and possible direct answers without additionally confirming whether to provide contents usable in responding to the message.
Referring to fig. 47, the user may select the map application 78_a from among the contents usable in the response. The electronic device 100 may receive the user input. For example, the user input may be a touch input selecting the application 78_a.
The electronic device 100 may provide the response editing window to the user in response to the user input. The electronic device 100 may display, via the response editing window, the result of inputting a keyword extracted from the message into the application 78_a as an input value. For example, as illustrated in fig. 47, the electronic device 100 may display, via the response editing window, the image 78_b, which is the result of inputting the keyword "current location" into the map application.
The electronic device 100 may provide an editing environment via the response editing window, through which the user may edit the result image 78_b. For example, the user may enlarge or reduce the result image 78_b via the response editing window, and may select a portion of the image 78_b.
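The flow above (extracted keyword passed to the selected application as an input value, result shown in the response editing window) can be sketched as a small dispatch; the registry, app identifier, and handler are assumptions for illustration only:

```python
# Hypothetical sketch of the keyword-to-application flow described above.

def map_application(keyword: str) -> str:
    # Stand-in for the map application: returns a label for the rendered
    # result screen (the image 78_b in the figure).
    return f"map_result({keyword})"

APP_REGISTRY = {"map_app": map_application}

def fill_response_editing_window(app_id: str, keyword: str) -> str:
    """Run the selected app with the extracted keyword and return the result."""
    result_image = APP_REGISTRY[app_id](keyword)
    return result_image  # shown in the response editing window for editing

print(fill_response_editing_window("map_app", "current location"))
# map_result(current location)
```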
Referring to fig. 48, when the user selects the edited result image 78_b, the result image 78_b may be provided to the conversation partner. For example, when the user touches the edited result image 78_b, the electronic device 100 may transmit it to the conversation partner. The electronic device 100 may transmit the edited result image 78_b via the portion of the screen displaying the user's appearance.
[6. Scenario 5]
Fig. 49, 50, 51, 52, and 53 are diagrams for generally describing a method of providing contents according to an exemplary embodiment. Fig. 49, 50, 51, 52, and 53 are diagrams of user interfaces according to operations of a scenario in which, while the electronic device 100 and the device 200 of another user provide a text message service, they recommend a possible direct answer, a restaurant application, an image related to the answer, a map application, and a news application to the user as contents related to a response.
[6.1. Suggestion of a direct response]
Referring to fig. 49(a) and 49(b), John, the user of the electronic device 100, is conversing with Mike, the user of the device 200 of another user, via a text message service. John receives a message 111 from Mike: "Where are you going this afternoon?". The electronic device 100 and the device 200 of another user may display the exchanged messages. The electronic device 100 may determine whether the continuously received messages include questions.
Referring to fig. 49(a), when the electronic device 100 determines that the message 111 includes a question, the electronic device 100 may provide the user with the direct answer candidates 111_a, 111_b, and 111_c. John, the user of the electronic device 100, may click one of the direct answer candidates, 111_b, to transmit it to the conversation partner Mike.
Referring to fig. 49(c) and 49(d), John and Mike may view the exchanged messages 111, 112, 211, and 212 via each device.
[6.2. Restaurant recommendation via a restaurant application]
Referring to fig. 50(a) and 50(b), Mike may transmit a message 213 "Really? I have been waiting for a month.". The electronic device 100 may display the message 113 received from the device 200 of another user. John may transmit to Mike a message 114 "Do you know a good restaurant near Fenway Park?". The device 200 of another user may display the message 214 received from the electronic device 100.
Referring to fig. 50(c) and 50(d), the device 200 of another user may recommend the restaurant application 214_a corresponding to the received message, and may recommend menu content. Mike may touch the menu content to transmit a message 115 including the menu content to John.
[6.3. Suggestion of a response via an image]
Referring to fig. 51(a) and 51(b), John, the user of the electronic device 100, may transmit a message 116 "Do you have a picture?" to Mike, the user of the device 200 of another user. The device 200 of another user may display the message 216 received from John.
Referring to fig. 51(c) and 51(d), the device 200 of another user may recommend the image 216_b corresponding to the received message. Mike may touch the recommended food image to transmit a message 118 including the image to John.
[6.4. Delivery of one's own location via a map application]
Referring to fig. 52(a) and 52(b), Mike may transmit a message 219 "Where did you go?". The electronic device 100 may display the message 119 received from Mike.
Referring to fig. 52(c) and 52(d), the electronic device 100 may recommend the map application 119_a corresponding to the received message 119. John may transmit his location to Mike via the recommended map application, via the message 220.
[6.5. Delivery of articles via a news application]
Referring to fig. 53(a) and 53(b), John may transmit a message 121 "What is the hot issue in Boston now?".
Referring to fig. 53(c), 53(d1), and 53(e), the device 200 of another user may display the message 221 received from John. The device 200 of another user may recommend the news application 221_a corresponding to the received message 221. When Mike selects the recommended news application 221_a, the device 200 of another user may display news articles via a screen transition, as illustrated in fig. 53(e), and Mike may select a news article to transmit the related article to John's electronic device 100. The electronic device 100 may display a message 122 including a link to the related article and an article summary screen.
[7. Scenario 6]
Fig. 54a to 54d are diagrams for describing a method of providing contents according to an exemplary embodiment. Fig. 54a to 54d are diagrams of user interfaces according to operations of a scenario in which the electronic device 100 and the device 200 of another user provide a text message service during a video call and recommend an answer to the user as content related to the answer via a map application.
Referring to fig. 54a, the electronic device 100 may receive, during the video call, a voice message 131 "Where is the incident point?". Referring to fig. 54b, the electronic device may confirm via the pop-up screen 132 whether the user wants a recommended response. Referring to fig. 54c, the user of the electronic device 100 may select the map application through a touch input. Referring to fig. 54d, the device 200 of another user may receive a message 232 from the electronic device 100 indicating the location of the user of the electronic device 100.
[8. Scenario 7]
Fig. 55a to 55d are diagrams for describing a method of providing contents according to an exemplary embodiment. Fig. 55a to 55d are diagrams of user interfaces according to operations of a scenario in which the electronic device 100 and the device 200 of another user recommend different answer candidates to the user as contents related to an answer while the electronic device 100 and the device 200 of another user provide a text message service.
Referring to fig. 55a, John may receive a message 141 from his boss: "Where are you going this afternoon?". In response, the electronic device 100 may suggest direct answers, such as "13:00 lunch meeting" 141_a, "15:00 business trip" 141_b, and "18:00 seminar" 141_c, and the related application 141_d.
Referring to fig. 55b, John may receive a message 151 from a friend: "Where are you going this afternoon?". In response, the electronic device 100 may suggest direct answers, such as "13:00 lunch meeting" 151_a, "14:00 coffee with Peter" 151_b, and "18:00 seminar" 151_c, and the related application 151_d.
Referring to fig. 55a and 55b, in response to the same question, the electronic device 100 or the device 200 of another user may recommend different responses according to the relationship between its user and the conversation partner. For example, when the electronic device 100 receives the message 141 from the boss, the response "15:00 business trip" 141_b may be recommended, and when the electronic device receives the message 151 from the friend, the response "14:00 coffee with Peter" 151_b may be recommended instead of the response "15:00 business trip" 141_b.
Referring to fig. 55c, John may receive a message 142 from a salesperson: "How about buying a nice car?". In response, the electronic device 100 may suggest direct answers (such as "No, I do not need to buy a car" 142_a or "Do not call me again" 142_b) and the related application 142_c.
Referring to fig. 55d, John may receive a message 152 from the friend: "How about buying a nice car?". In response, the electronic device 100 may suggest direct answers (such as "I would be happy if you gave me one" 152_a or "I dream of it every day" 152_b) and the related application 152_c.
Referring to fig. 55c and 55d, in response to the same question, the electronic device 100 or the device 200 of another user may recommend different responses according to the relationship between its user and the conversation partner. For example, when the electronic device 100 receives the message 142 from the salesperson, the electronic device 100 may recommend the direct answer "Do not call me again" 142_b, and when the electronic device 100 receives the message 152 from the friend, the electronic device 100 may recommend the direct answer "I dream of it every day" 152_b.
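The relationship-dependent selection shown in the two examples above can be sketched as filtering candidate answers by the conversation partner's relationship; the schedule entries and audience labels below are assumptions for illustration only:

```python
# Hypothetical sketch: the same question produces different direct-answer
# candidates depending on who the conversation partner is.

SCHEDULE = [
    {"time": "13:00", "event": "lunch meeting",     "audience": {"boss", "friend"}},
    {"time": "14:00", "event": "coffee with Peter", "audience": {"friend"}},
    {"time": "15:00", "event": "business trip",     "audience": {"boss"}},
    {"time": "18:00", "event": "seminar",           "audience": {"boss", "friend"}},
]

def recommend_answers(relationship: str) -> list:
    """Return direct-answer candidates appropriate for this relationship."""
    return [f"{entry['time']} {entry['event']}"
            for entry in SCHEDULE if relationship in entry["audience"]]

print(recommend_answers("boss"))
# ['13:00 lunch meeting', '15:00 business trip', '18:00 seminar']
print(recommend_answers("friend"))
# ['13:00 lunch meeting', '14:00 coffee with Peter', '18:00 seminar']
```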
[9. Components of electronic apparatus ]
Fig. 56 is a block diagram illustrating the structure of a user terminal device 1000 according to an exemplary embodiment. The user terminal apparatus 1000 illustrated in fig. 56 may be the electronic apparatus 100 of fig. 1.
Referring to fig. 56, the user terminal device 1000 may include at least one selected from a display unit 1100, a control unit (e.g., a controller) 1700, a memory 1200, a Global Positioning System (GPS) chip 1250, a communication unit 1300, a video processor 1350, an audio processor 1400, a user input unit 1450, a microphone unit 1500, a photographing unit (e.g., a camera) 1550, a speaker unit 1600, and a motion detection unit (e.g., a motion detector) 1650.
The display unit 1100 may include a display panel 1110 and a controller for controlling the display panel 1110. The display panel 1110 may be implemented as various types of displays, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AM-OLED) display, and a plasma display panel (PDP). The display panel 1110 may be implemented to be flexible, transparent, or wearable. The display unit 1100 may be combined with the touch panel 1470 of the user input unit 1450 to be provided as a touch screen. For example, the touch screen may include an integrated module in which the display panel 1110 and the touch panel 1470 are stacked. The memory 1200 may include at least one of an internal memory and an external memory.
The internal memory may include at least one selected from, for example, volatile memory (e.g., dynamic RAM (DRAM), static RAM (SRAM), and synchronous dynamic RAM (SDRAM)), non-volatile memory (e.g., one-time programmable ROM (OTPROM), programmable ROM (PROM), erasable and programmable ROM (EPROM), electrically erasable and programmable ROM (EEPROM), mask ROM, and flash ROM), a hard disk drive (HDD), and a solid-state drive (SSD). According to an exemplary embodiment, the control unit 1700 may process a command or data received from the non-volatile memory or from at least one of the other components by loading the command or data into the volatile memory. Also, the control unit 1700 may store data received from or generated by other components in the non-volatile memory.
The external memory may include at least one selected from, for example, a Compact Flash (CF) card, a Secure Digital (SD) card, a Micro Secure Digital (Micro-SD) card, a Mini Secure Digital (Mini-SD) card, an extreme digital (xD) card, and a memory stick.
The memory 1200 may store various programs and data used for the operation of the user terminal device 1000. For example, the memory 1200 may temporarily or semi-permanently store at least a portion of the content to be displayed on the lock screen.
The control unit 1700 may control the display unit 1100 such that a portion of the content stored in the memory 1200 is displayed in the display unit 1100. In other words, the control unit 1700 may display a portion of the content stored in the memory 1200 in the display unit 1100. Alternatively, when a gesture of a user is generated in a portion of the display unit 1100, the control unit 1700 may perform a control operation corresponding to the gesture of the user.
The control unit 1700 may include at least one of the natural language processing unit (NLU) 121, the session manager 122, the parser 123, the context analyzer 124, the response generator 125, the content metadata storage 126, and the App registry 127 of the knowledge framework 120 illustrated in fig. 9a. For example, the control unit 1700 of the user terminal device 1000 may include the natural language processing unit (NLU) 121, the session manager 122, the parser 123, the context analyzer 124, and the response generator 125, and the server 300 may include the App registry 127 and the content metadata storage 126, as illustrated in fig. 12b. According to another exemplary embodiment, the control unit 1700 may include the parser 123 and the context analyzer 124, and the server 300 may include a natural language processing unit (NLU) 321, a session manager 322, a response generator 325, a content metadata storage 326, and an App registry 327, as illustrated in fig. 13b. Alternatively, the control unit 1700 of the user terminal device 1000 may include the parser 123, the App registry 127, and the content metadata storage 126, and the server 300 may include a natural language processing unit (NLU) 321, a session manager 322, a response generator 325, a context analyzer 324, and a parser 323, as illustrated in fig. 14b.
The control unit 1700 may include at least one of a Random Access Memory (RAM) 1710, a Read Only Memory (ROM) 1720, a Central Processing Unit (CPU) 1730 including one or more cores, a Graphics Processing Unit (GPU) 1740, and a bus 1750. The RAM 1710, the ROM 1720, the CPU 1730, and the GPU 1740 may be connected to each other via the bus 1750.
The CPU 1730 accesses the memory 1200 and performs a boot operation by using the O/S stored in the memory 1200. Also, the CPU 1730 performs various operations by using various programs, contents, and data stored in the memory 1200.
A command set for system booting is stored in the ROM 1720. For example, when a turn-on command is input and power is supplied to the user terminal device 1000, the CPU 1730 may copy the O/S stored in the memory 1200 to the RAM 1710 according to the command set stored in the ROM 1720, and may run the O/S to boot the system. When booting is completed, the CPU 1730 may copy various programs stored in the memory 1200 to the RAM 1710 and run the copied programs in the RAM 1710 to perform various operations. When the booting operation of the user terminal device 1000 is completed, the GPU 1740 displays a UI screen on a portion of the display unit 1100. In detail, the GPU 1740 may generate a screen in which an electronic document including various objects (such as content, icons, and menus) is displayed. The GPU 1740 may calculate characteristic values (e.g., coordinate values, shapes, sizes, and colors) of the objects to be displayed according to the layout of the screen. Also, the GPU 1740 may generate screens of various layouts including the objects based on the calculated characteristic values. The screens generated by the GPU 1740 may be provided to the display unit 1100 and displayed in respective portions of the display unit 1100.
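The boot flow described above (copy the O/S image from non-volatile storage into RAM according to the ROM-stored command set, then run the copy) can be sketched roughly as follows. The class and method names are purely illustrative and are not part of the disclosure.

```python
class BootSketch:
    """Hypothetical model of the boot flow described above."""

    def __init__(self, storage_image):
        self.storage = storage_image  # O/S image kept in non-volatile memory 1200
        self.ram = None               # working copy loaded at boot (RAM 1710)
        self.booted = False

    def power_on(self):
        # Per the ROM-stored command set: copy the O/S from storage into RAM...
        self.ram = dict(self.storage)
        # ...then run the copied O/S to complete booting.
        self.booted = True
        return self.booted
```

After `power_on()` returns, further programs would likewise be copied into RAM and executed there, as the text describes.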
The GPS chip 1250 may receive Global Positioning System (GPS) signals from GPS satellites to calculate the current location of the user terminal device 1000. When a navigation program is used, or on other occasions when the current location of the user is required, the control unit 1700 may calculate the user's location by using the GPS chip 1250.
The communication unit 1300 may perform communication with various types of external devices according to various communication methods. The communication unit 1300 may include at least one selected from a WiFi chip 1310, a bluetooth chip 1320, a wireless communication chip 1330, and an NFC chip 1340. The control unit 1700 may control communication with various external devices by using the communication unit 1300.
The WiFi chip 1310 and the bluetooth chip 1320 may perform communication by using a WiFi method and a bluetooth method, respectively. When the WiFi chip 1310 or the bluetooth chip 1320 is used, various pieces of connection information, such as an SSID and a session key, may first be transmitted and received, and then a communication connection may be established by using the connection information to transmit and receive various pieces of information. The wireless communication chip 1330 refers to a chip that performs communication according to various communication standards, such as IEEE, Zigbee, 3rd generation (3G), 3rd Generation Partnership Project (3GPP), and Long Term Evolution (LTE). The NFC chip 1340 refers to a chip operating in a Near Field Communication (NFC) method using the 13.56 MHz band among various RF-ID frequency bands (135 kHz, 13.56 MHz, 433 MHz, 860-960 MHz, and 2.45 GHz).
The video processor 1350 may process video data included in content received via the communication unit 1300 or content stored in the memory 1200. Video processor 1350 may perform various image processing on the video data, such as decoding, scaling, noise filtering, frame rate conversion, and resolution conversion.
The audio processor 1400 may process audio data included in content received via the communication unit 1300 or content stored in the memory 1200. The audio processor 1400 may perform various processing on the audio data, such as decoding, amplification, and noise filtering.
When a reproduction program for multimedia content is executed, the control unit 1700 may drive the video processor 1350 and the audio processor 1400 to reproduce corresponding content. The speaker unit 1600 may output audio data generated by the audio processor 1400.
The user input unit 1450 may receive inputs of various commands from a user. The user input unit 1450 may include at least one selected from a key 1460, a touch panel 1470, and a pen recognition panel 1480.
The keys 1460 may include various types of keys (such as mechanical buttons and a wheel) formed in various portions (such as a front portion, a side portion, and a rear portion) of an outer body of the user terminal device 1000.
The touch panel 1470 may sense a touch input of a user and may output a touch event value corresponding to the sensed touch signal. When the touch panel 1470 is combined with the display panel 1110 to form a touch screen, the touch screen may be implemented with various types of touch sensors, such as capacitive, resistive, and piezoelectric sensors. The capacitive type calculates touch coordinates by sensing the minute electric charge induced by the user's body when a part of the user's body touches the surface of the touch screen, using a dielectric coated on the surface of the touch screen. The resistive type includes two electrode plates embedded in the touch screen, and calculates touch coordinates by sensing the current that flows when the upper and lower plates contact each other at the touched point. Touch events occurring on a touch screen may be generated primarily by a human finger, but may also be generated by an object of conductive material that causes a change in capacitance.
The pen recognition panel 1480 may sense a proximity input or a touch input of a pen according to an operation of a user's touch pen (e.g., a stylus pen) or digitizer pen, and may output a sensed pen proximity event or pen touch event. The pen recognition panel 1480 may be implemented, for example, by an electromagnetic resonance (EMR) method, and may sense a touch or proximity input according to a change in the intensity of an electromagnetic field caused by the proximity or touch of the pen. In detail, the pen recognition panel 1480 may include an electromagnetic induction coil sensor having a grid structure, and an electromagnetic signal processing unit that sequentially supplies alternating signals having a predetermined frequency to each loop coil of the electromagnetic induction coil sensor. When a pen incorporating a resonant circuit is near a loop coil of the pen recognition panel 1480, the magnetic field transmitted from the corresponding loop coil generates a current, based on mutual electromagnetic induction, in the resonant circuit of the pen. Based on this current, an induced magnetic field is generated from the coil forming the resonant circuit in the pen, and the pen recognition panel 1480 may detect the induced magnetic field in a loop coil that is in a signal reception state, so that the proximity or touch position of the pen may be sensed. The pen recognition panel 1480 may be provided under the display panel 1110 with a predetermined area (e.g., an area that can cover the display area of the display panel 1110).
The microphone unit 1500 may receive an input of a user's voice or other sound and convert the input into audio data. The control unit 1700 may use the user's voice input through the microphone unit 1500 in a call operation, or may convert the user's voice into audio data and store the audio data in the memory 1200.
The photographing unit 1550 may photograph a still image or video according to the control of the user. The photographing unit 1550 may be implemented as a plurality of cameras, for example, a front camera and a rear camera.
When the photographing unit 1550 and the microphone unit 1500 are provided, the control unit 1700 may perform a control operation according to a user's voice input through the microphone unit 1500 or a motion of the user recognized by the photographing unit 1550. For example, the user terminal device 1000 may operate in a motion control mode or a voice control mode. When the user terminal device 1000 operates in the motion control mode, the control unit 1700 may activate the photographing unit 1550 to photograph the user and may track changes in the user's motion to perform corresponding control operations. When the user terminal device 1000 operates in the voice control mode, the control unit 1700 may analyze the user's voice input through the microphone unit 1500 and may perform a control operation according to the analyzed voice.
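The mode-dependent dispatch described above (motion control mode vs. voice control mode) might be sketched as follows. The mode names and return strings are hypothetical illustrations, not part of the disclosure.

```python
def handle_input(mode, motion_event=None, voice_text=None):
    """Dispatch a user input according to the active control mode (illustrative)."""
    if mode == "motion" and motion_event is not None:
        # In motion control mode, the camera tracks the user's motion change.
        return f"perform action for motion: {motion_event}"
    if mode == "voice" and voice_text is not None:
        # In voice control mode, the microphone input is analyzed as a command.
        return f"perform action for voice command: {voice_text}"
    # Input that does not match the active mode is ignored.
    return "ignored"
```

For example, the same physical tilt of the device would be acted on in motion control mode but ignored in voice control mode.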
The motion detection unit 1650 may sense a motion of the main body of the user terminal device 1000. The user terminal device 1000 may be rotated or tilted in various directions. Here, the motion detection unit 1650 may detect characteristics of the motion, such as the direction and angle of rotation and the degree of inclination, by using at least one of various sensors (e.g., a geomagnetic sensor, a gyro sensor, and an acceleration sensor).
In addition, the user terminal device 1000 may further include a USB port to which a USB connector may be connected, various external input ports for connection with various external terminals (such as a headset, a mouse, and a LAN), a Digital Multimedia Broadcasting (DMB) chip for receiving and processing DMB signals, and various sensors.
The names of the above-described components of the user terminal device 1000 may vary. Also, the user terminal device 1000 may be formed by including one of the above-described components, omitting some of them, or further including additional components.
Fig. 57 is a block diagram of a software configuration of the user terminal apparatus 1000.
Referring to fig. 57, the memory 1200 may store an operating system for controlling resources of the user terminal device 1000, and application programs for operating applications. The operating system may include a kernel 1210, middleware, an Application Program Interface (API), and the like. The operating system may be, for example, Android, iOS, Windows, Symbian, Tizen, Bada, or the like.
The kernel 1210 may include at least one of a device driver 1210-1 and a system resource manager 1210-2 for managing resources. The device driver 1210-1 can access and control hardware of the user terminal device 1000 as software. To this end, the device driver 1210-1 may be divided into an interface and a separate driver module provided by each hardware company. The device driver 1210-1 may include at least one selected from, for example, a display driver, a camera driver, a bluetooth driver, a shared memory driver, a USB driver, a keyboard driver, a Wi-Fi driver, an audio driver, and an inter-process communication (IPC) driver. The system resource manager 1210-2 may include at least one of a process management unit, a memory management unit, and a file system management unit. The system resource manager 1210-2 may control, allocate, and recover system resources.
The middleware 1220 may include a plurality of modules implemented in advance to provide functions commonly required by various applications. The middleware 1220 may provide functions via the API 1230 so that an application 1240 can efficiently use resources in the user terminal device 1000. The middleware 1220 may include at least one selected from a plurality of modules including, for example, an application manager 1220-1, a window manager 1220-2, a multimedia manager 1220-3, a resource manager 1220-4, a power manager 1220-5, a database manager 1220-6, a package manager 1220-7, a connection manager 1220-8, a notification manager 1220-9, a location manager 1220-10, a graphic manager 1220-11, and a security manager 1220-12.
The application manager 1220-1 may manage the life cycle of at least one of the applications 1240. The window manager 1220-2 may manage Graphical User Interface (GUI) resources used on the screen. The multimedia manager 1220-3 may identify the format required to reproduce each media file, and may encode or decode the media file by using a codec corresponding to that format. The resource manager 1220-4 may manage resources, such as source code, memory, and storage space, of at least one of the applications 1240. The power manager 1220-5 may manage the battery or power supply by operating with the BIOS and provide power information required for operation. The database manager 1220-6 may generate, search, and modify a database to be used in at least one of the applications 1240. The package manager 1220-7 may install or update applications distributed in a package file format. The connection manager 1220-8 may manage wireless connections, such as Wi-Fi or bluetooth connections. The notification manager 1220-9 may display or notify the user of events (such as message arrivals, appointments, and proximity notifications) in a way that does not disturb the user. The location manager 1220-10 may manage location information of the user terminal device 1000. The graphic manager 1220-11 may manage graphic effects provided to the user and the user interfaces associated therewith. The security manager 1220-12 may provide various security functions required for system security or user authentication. When the user terminal device 1000 includes a call function, the middleware 1220 may further include a call manager for managing the voice or video call functions of the user.
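The middleware layer above is, in effect, a registry of manager modules that applications reach through the API layer. A toy sketch of such a registry follows; the manager names echo the list above, but the registry API itself is a hypothetical illustration, not part of the disclosure.

```python
class Middleware:
    """Toy registry of middleware manager modules (illustrative)."""

    def __init__(self):
        self._managers = {}

    def register(self, name, manager):
        # Each manager module is registered under a name, e.g. "notification".
        self._managers[name] = manager

    def get(self, name):
        # The API layer resolves a manager module by name; None if absent,
        # mirroring the fact that some modules may be omitted.
        return self._managers.get(name)


mw = Middleware()
mw.register("notification", lambda event: f"notify: {event}")
```

An application would then request, say, the notification manager and hand it an event to display without disturbing the user.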
The middleware 1220 may further include a runtime library 1220-13 or other library modules. The runtime library 1220-13 is a library module used by a compiler to add new functions via a programming language while an application is running. For example, the runtime library 1220-13 may perform functions related to input and output, memory management, or mathematical functions. The middleware 1220 may generate new middleware modules by combining various functions of the above-described internal modules. The middleware 1220 may provide modules specialized according to the type of operating system to provide differentiated functions. The middleware 1220 may dynamically omit some existing components or add new components. The middleware 1220 may be formed by omitting some of the components described in the current exemplary embodiment, by further including other components, or by replacing components with other components that perform substantially the same functions under different names.
The API 1230 is a collection of API programming functions and may be provided with a different configuration depending on the operating system. In the case of Android or iOS, for example, one API set may be provided per platform, and in the case of Tizen, for example, two or more API sets may be provided.
The applications 1240 may include preloaded applications installed by default or third-party applications that a user may install and use. The applications 1240 may include, for example, at least one selected from a home application 1240-1 for returning to a home screen, a dialer application 1240-2 for making calls with contacts, a text message application 1240-3 for exchanging messages with contacts identified by telephone numbers, an Instant Message (IM) application 1240-4, a browser application 1240-5, a camera application 1240-6, an alarm application 1240-7, a phonebook application 1240-8 for managing the telephone numbers or addresses of contacts, a call log application 1240-9 for managing the user's call logs, logs of sent or received text messages, or missed-call logs, an e-mail application 1240-10 for exchanging messages with contacts identified by e-mail addresses, a calendar application 1240-11, a media player application 1240-12, a photo album application 1240-13, and a watch application 1240-14. The names of the software components described in the current exemplary embodiment may vary according to the type of operating system. Also, the software according to the current exemplary embodiment may include at least one of the above-described components, omit some of them, or further include other additional components.
Fig. 58 is a diagram of a User Interface (UI) of the electronic device 2000 according to an exemplary embodiment. The electronic device 2000 may be the electronic device 100 of fig. 1.
Electronic device 2000 may include a processor 2700, an input interface 2400, and an output interface 2100.
Processor 2700 may include a mobile application processor or a central processing unit. Processor 2700 may also be referred to as a controller or a control unit. The term "processor" may be used to refer to cores, display controllers, and Image Signal Processors (ISPs). The processor 2700 may include at least one of the components 121, 126, 1710, 1720, 1730, and 1740 of the control unit 1700 of fig. 56.
The processor 2700 according to an exemplary embodiment may extract at least one keyword from a message displayed on a screen via a message service. Also, the processor 2700 according to an exemplary embodiment may newly generate a keyword related to the extracted keyword. Also, the processor 2700 according to an exemplary embodiment may obtain content based on the newly generated keyword and location information of the electronic device 2000.
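The keyword-based recommendation flow described above can be sketched roughly as follows. The whitespace tokenization, the related-keyword map, and the content index are hypothetical stand-ins for the NLU and search components of the disclosure.

```python
def recommend_content(message, location, related_map, content_index):
    """Sketch of: extract keywords, expand to related keywords,
    then look up content by keyword and device location (illustrative)."""
    # 1. Extract keywords (crude tokenization as a stand-in for the NLU).
    keywords = [w.strip(".,?!").lower() for w in message.split()]
    # 2. Expand each keyword with related keywords, if any.
    expanded = set(keywords)
    for kw in keywords:
        expanded.update(related_map.get(kw, []))
    # 3. Obtain content matching an expanded keyword at the device's location.
    return [content for (kw, loc), content in content_index.items()
            if kw in expanded and loc == location]
```

For example, a message mentioning "lunch" could be expanded with the related keyword "restaurant", and only restaurants at the device's current location would be returned.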
The input interface 2400 may represent a device via which a user inputs data to control the electronic device 2000. For example, the input interface 2400 may be a keyboard, a dome switch, a touch pad (using a capacitive overlay method, a resistive overlay method, an infrared beam method, a surface acoustic wave method, an integral strain gauge method, a piezoelectric method, or the like), a jog wheel, or a jog switch. Also, the input interface 2400 may include a touch screen, a touch pad, a microphone, and a keyboard.
Also, the input interface 2400 can include at least one module for receiving data from a user. For example, the input interface 2400 may include a motion recognition module, a touch recognition module, a voice recognition module, and the like.
The touch recognition module may sense a touch gesture of the user on the touch screen and transmit information about the touch gesture to the processor. The voice recognition module may recognize the user's voice by using a voice recognition engine and transmit the recognized voice to the processor. The motion recognition module may recognize a motion of an object and transmit information about the motion of the object to the processor.
Throughout this specification, the "input" made by the user via the input interface 2400 of the electronic device 2000 may include at least one selected from a touch input, a bending input, a voice input, a key input, and a multimodal input. However, the input is not limited thereto.
The "touch input" may represent a gesture performed on the touch screen by a user to control the electronic device 2000. Touch gestures set forth in this specification may include a click, touch and hold, double click, drag, panning, flick, drag and drop, and the like.
A "click" is a motion of a user touching a screen by using a finger or a touch tool such as an electronic pen (e.g., a stylus pen) and then immediately lifting the finger or the touch tool off the screen without moving it.
"Touch and hold" is a motion of a user touching a screen by using a finger or a touch tool such as an electronic pen and then maintaining the touch for a critical time (e.g., 2 seconds) or longer. In other words, the time difference between the touch-in time and the touch-out time is greater than or equal to the critical time (e.g., 2 seconds). When a touch input persists beyond the critical time, a feedback signal may be provided in a visual, audible, or tactile manner in order to let the user know whether the touch input is a click or a touch and hold. The critical time may vary according to exemplary embodiments.
A "double click" is a motion of a user touching the screen twice by using a finger or a touch tool.
"Drag" is a motion of a user touching the screen with a finger or a touch tool and moving the finger or the touch tool to another position on the screen while maintaining the touch.
"Panning" is a motion of a user performing a drag motion without selecting an object. Since no object is selected during panning, no object moves within the page; instead, the page itself moves on the screen, or a group of objects moves within the page.
"Flick" is a motion of a user performing a drag motion over a critical speed (e.g., 100 pixels/second) by using a finger or a touch tool. A drag (or panning) motion and a flick motion may be distinguished based on whether the speed of the finger or the touch tool is above the critical speed (e.g., 100 pixels/second).
"Drag and drop" is a motion of a user dragging an object to a predetermined position on the screen by using a finger or a touch tool and then releasing it at that position.
"Pinch" is a motion of a user moving two fingers, which are in contact with the screen, in opposite directions. A pinch is a gesture for zooming in on (pinch out) or zooming out of (pinch in) an object or a page. A zoom-in or zoom-out value is determined according to the distance between the two fingers.
"Slide" is a motion of a user moving a finger or a touch tool horizontally or vertically by a predetermined distance while touching an object on the screen. A motion in a diagonal direction may not be recognized as a slide event.
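The gestures above are distinguished mainly by touch duration and movement speed. A minimal classifier under the critical values mentioned in the text (2 seconds, 100 pixels/second) might look like the following; the thresholds are taken from the examples above, while the function and label names are illustrative.

```python
CRITICAL_TIME_S = 2.0        # touch-and-hold threshold from the text
CRITICAL_SPEED_PX_S = 100.0  # flick vs. drag threshold from the text

def classify_gesture(duration_s, distance_px):
    """Classify a single-finger gesture from its duration and travel distance."""
    if distance_px == 0:
        # No movement: distinguish click vs. touch and hold by duration.
        return "touch and hold" if duration_s >= CRITICAL_TIME_S else "click"
    speed = distance_px / duration_s
    # Movement: flick if faster than the critical speed, otherwise drag.
    return "flick" if speed > CRITICAL_SPEED_PX_S else "drag"
```

A real implementation would also track the path direction (e.g., to reject diagonal slides) and the number of contacts (e.g., for pinch), which this sketch omits.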
"Motion input" refers to a motion that a user applies to the electronic device 2000 to control the electronic device 2000. For example, the motion input may include the user rotating the electronic device 2000, tilting the electronic device 2000, or moving the electronic device 2000 up, down, left, or right. The electronic device 2000 may detect a motion predetermined by the user by using an acceleration sensor, a tilt sensor, a gyro sensor, or a 3-axis magnetic sensor.
Throughout this specification, when the electronic device 2000 is a flexible display device, "bending input" refers to a user input of bending the entire electronic device 2000 or a portion of the electronic device 2000 to control the electronic device 2000. According to an exemplary embodiment, the electronic device 2000 may sense a bending position (coordinate value), a bending direction, a bending angle, a bending speed, the number of bending operations, the time point at which a bending operation occurs, the time period during which a bending operation is maintained, and the like, by using a bending sensor.
Throughout this specification, the term "key input" means a user input for controlling the electronic device 2000 by using a physical key attached to the electronic device 2000 or a virtual key provided by the electronic device 2000.
The output interface is configured to output an audio signal, a video signal, or an alarm signal, and may include a display module, a sound output module, and the like. Also, the output interface may include a flat panel display device capable of displaying a two-dimensional image, or a flat panel display device capable of displaying a three-dimensional image. The output interface may also include a device for outputting a three-dimensional hologram.
The electronic device 2000 may exchange information with a search server via communication. For example, the electronic device 2000 may communicate with the search server via at least one protocol selected from the Trivial File Transfer Protocol (TFTP), the Simple Network Management Protocol (SNMP), the Simple Mail Transfer Protocol (SMTP), the Post Office Protocol (POP), the Internet Control Message Protocol (ICMP), the Serial Line Internet Protocol (SLIP), the Point-to-Point Protocol (PPP), the Dynamic Host Configuration Protocol (DHCP), the Network Basic Input/Output System (NETBIOS), the Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX), the Internet Protocol (IP), the Address Resolution Protocol (ARP), the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), Winsock, and the Routing Information Protocol (RIP). However, the communication protocol is not limited thereto.
The electronic device 2000 may perform near field communication via a near field communication module. Near field communication technologies may include wireless LAN (WiFi), bluetooth, Zigbee, WiFi Direct (WFD), Ultra Wideband (UWB), Infrared Data Association (IrDA), and the like. However, the near field communication is not limited thereto.
Although not limited thereto, the exemplary embodiments can be embodied as computer readable codes on a computer readable recording medium. The computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of the computer readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. Also, the exemplary embodiments can be written as a computer program that is transmitted via a computer readable transmission medium (such as a carrier wave) and received and implemented in a general-purpose or a specific-purpose digital computer that runs the program. Further, it is understood that in exemplary embodiments, one or more units of the above-described apparatuses and devices can include circuits, processors, microprocessors, etc., and can execute a computer program stored in a computer-readable medium.
The foregoing exemplary embodiments are merely exemplary and are not to be construed as limiting. The present teachings can be readily applied to other types of apparatuses. Also, the description of the exemplary embodiments is intended to be illustrative, rather than limiting, of the scope of the invention, and many alternatives, modifications, and variations will be apparent to those skilled in the art.

Claims (24)

1. A server for providing message related content to an electronic device, the server comprising:
a memory storing instructions; and
at least one processor configured to execute the instructions to:
obtaining a result of a natural language interpretation of a message between the electronic device and a further electronic device,
obtaining location information corresponding to the message,
obtaining at least one recommended content related to the message from an external search service based on the obtained location information and a result of the natural language interpretation,
controlling, based on obtaining the at least one recommended content, sending the obtained at least one recommended content to the electronic device to recommend to a user of the electronic device for sending to the further electronic device related to the message,
wherein the at least one recommended content is transmitted to the electronic device after the result of the natural language interpretation is obtained without receiving a request for the at least one recommended content from the electronic device.
2. The server of claim 1, wherein the at least one processor is further configured to execute the instructions to perform natural language interpretation of messages between the electronic device and the further electronic device.
3. The server of claim 2, wherein the at least one processor is further configured to execute the instructions to determine, based on the natural language interpretation, that the message comprises a location-based query.
4. The server of claim 3, wherein the at least one processor is further configured to execute the instructions to determine a location for the location-based query as location information.
5. The server of claim 3, wherein the at least one processor is further configured to execute the instructions to control receiving the location information from the electronic device.
6. The server of claim 1, wherein the at least one processor is further configured to execute the instructions to control receiving a result of a natural language interpretation from the electronic device.
7. The server of claim 1, wherein the message is received by the electronic device from the other electronic device, and the at least one recommended content is at least one recommended content for responding to the message.
8. The server of claim 7, wherein the at least one processor is further configured to execute the instructions to obtain a message received by the electronic device from the further electronic device, and to perform natural language interpretation of the message between the electronic device and the further electronic device.
9. The server of claim 1, wherein the at least one recommended content includes weather information corresponding to the obtained location information based on the obtained result of the natural language interpretation indicating that the message includes a weather-related query.
10. The server of claim 9, wherein, based on the obtained result of the natural language interpretation indicating that the message includes a weather-related query, the at least one processor is further configured to execute the instructions to control obtaining weather information corresponding to the obtained location information from an external weather service.
11. The server of claim 9, wherein the at least one processor is further configured to execute the instructions to:
performing natural language interpretation of messages between the electronic device and the further electronic device; and
determining, based on the natural language interpretation, that the message includes a weather-related query.
12. The server of claim 1, wherein the at least one processor is further configured to execute the instructions to determine, based on the obtained result of the natural language interpretation, that the at least one recommended content includes restaurant information corresponding to the obtained location information.
13. The server of claim 12, wherein the at least one processor is further configured to execute the instructions to obtain, from an external service, restaurant information corresponding to the obtained location information.
14. The server of claim 13, wherein the restaurant information includes information about a plurality of restaurants corresponding to the obtained location information.
15. The server of claim 12, wherein the at least one processor is further configured to execute the instructions to perform natural language interpretation of messages between the electronic device and the further electronic device.
16. The server of claim 1, wherein the messages are exchanged between the electronic device and the further electronic device via a text chat service.
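The server-side behavior recited in the claims above can be illustrated with a small sketch. This is not the patented implementation: the keyword matching below is a hypothetical stand-in for the natural language interpretation, and the dictionaries stand in for the external weather and restaurant services of claims 10 and 13.

```python
# Hypothetical sketch of the server-side flow (claims 3-16): interpret a
# message, then combine the result with the device's location information
# to build recommended content. Keyword matching is a naive stand-in for
# natural language interpretation; the dicts stand in for external services.

def interpret(message):
    """Classify the query type carried by a message (toy interpretation)."""
    text = message.lower()
    if any(word in text for word in ("weather", "rain", "sunny")):
        return "weather"          # claim 9: weather-related query
    if any(word in text for word in ("lunch", "dinner", "restaurant")):
        return "restaurant"       # claim 12: restaurant query
    return None

# Stand-ins for the external weather service (claim 10) and the
# external restaurant service (claim 13).
WEATHER_SERVICE = {"Seoul": "Sunny, 24C"}
RESTAURANT_SERVICE = {"Seoul": ["Gogung", "Tosokchon", "Myeongdong Kyoja"]}

def recommend(message, location):
    """Return recommended content for a message and a device location."""
    query_type = interpret(message)
    if query_type == "weather":
        return [f"Weather in {location}: {WEATHER_SERVICE.get(location, 'no data')}"]
    if query_type == "restaurant":
        # claim 14: information about a plurality of restaurants
        return RESTAURANT_SERVICE.get(location, [])
    return []
```

For example, `recommend("How's the weather today?", "Seoul")` yields a single weather item, while `recommend("Where should we have lunch?", "Seoul")` yields the restaurant list; a message with no recognized query yields no recommendations.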
17. An electronic device, comprising:
a display;
a memory storing instructions; and
at least one processor configured to execute the instructions to:
controlling a message screen of a message application to be output via the display, the message screen including one or more messages exchanged between the electronic device and a further electronic device via the message application, and an input part displaying an input message input by a user for transmission to the further electronic device,
controlling transmission of information about message text displayed on the message screen to a server,
controlling receiving, from the server, at least one recommended content determined to be related to the message text,
controlling outputting the at least one recommended content via the display so as to be user-selectable, and
controlling transmission of a message including the selected recommended content to the further electronic device based on selection of the recommended content from among the at least one recommended content output via the display,
wherein the message screen is continuously output via the display while the information about the message text is transmitted to the server and the at least one recommended content is received from the server.
18. The electronic device of claim 17, wherein the at least one recommended content is information of at least one restaurant determined to be related to the message text according to a natural language interpretation performed with respect to the message text, or weather information determined to be related to the message text according to a natural language interpretation performed with respect to the message text.
19. The electronic device of claim 17, wherein the at least one recommended content output via the display includes a displayed list of a plurality of restaurants determined to be related to the message text, and the selected recommended content is information about one of the plurality of restaurants.
20. The electronic device of claim 17, wherein the information about the at least one recommended content is determined by the server to be related to the message text according to a natural language interpretation performed by the server with respect to the message text.
21. The electronic device of claim 17, wherein the at least one processor is further configured to execute the instructions to:
performing natural language interpretation with respect to the message text;
controlling transmission of information about the message text, including a result of the natural language interpretation, to the server; and
controlling receiving, from the server, the at least one recommended content based on the transmitted result of the natural language interpretation,
wherein the transmitted result of the natural language interpretation includes at least one of a determined keyword and determined content corresponding to the message text.
22. The electronic device of claim 17, wherein the at least one processor automatically performs the controlling of transmission, the controlling of receiving, and the controlling of outputting the at least one recommended content, without user input.
23. The electronic device of claim 17, wherein:
the at least one processor is further configured to execute the instructions to control outputting, via the display, a notification that at least one recommended content related to the message text is displayable; and
the controlling to output the at least one recommended content is performed based on a user input received within a predetermined time after the output of the notification.
24. The electronic device of claim 17, wherein:
the at least one processor is further configured to execute the instructions to control outputting, via the display, a notification that at least one recommended content related to the message text is displayable,
wherein the notification is no longer displayed when a predetermined time expires.
CN201911163405.6A 2014-07-31 2015-07-30 Message service providing device and server Pending CN110996271A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
KR20140098634 2014-07-31
KR10-2014-0098634 2014-07-31
KR1020150026750A KR102232929B1 (en) 2014-07-31 2015-02-25 Message Service Providing Device and Method Providing Content thereof
KR10-2015-0026750 2015-02-25
CN201580000958.3A CN105453612B (en) 2014-07-31 2015-07-30 Message service providing apparatus and method of providing content via the same

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201580000958.3A Division CN105453612B (en) 2014-07-31 2015-07-30 Message service providing apparatus and method of providing content via the same

Publications (1)

Publication Number Publication Date
CN110996271A true CN110996271A (en) 2020-04-10

Family

ID=55357231

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201911163405.6A Pending CN110996271A (en) 2014-07-31 2015-07-30 Message service providing device and server
CN201580000958.3A Active CN105453612B (en) 2014-07-31 2015-07-30 Message service providing apparatus and method of providing content via the same

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201580000958.3A Active CN105453612B (en) 2014-07-31 2015-07-30 Message service providing apparatus and method of providing content via the same

Country Status (3)

Country Link
KR (2) KR102232929B1 (en)
CN (2) CN110996271A (en)
TW (1) TWI689201B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10155168B2 (en) 2012-05-08 2018-12-18 Snap Inc. System and method for adaptable avatars
US10339365B2 (en) 2016-03-31 2019-07-02 Snap Inc. Automated avatar generation
KR101787248B1 (en) * 2016-04-14 2017-10-18 라인 가부시키가이샤 Method and system for keyword search using messaging service
US10432559B2 (en) 2016-10-24 2019-10-01 Snap Inc. Generating and displaying customized avatars in electronic messages
TWI617823B (en) * 2016-12-23 2018-03-11 旺玖科技股份有限公司 Non-contact intelligent battery sensing system and method
US11030515B2 (en) * 2016-12-30 2021-06-08 Google Llc Determining semantically diverse responses for providing as suggestions for inclusion in electronic communications
KR20180084549A (en) * 2017-01-17 2018-07-25 삼성전자주식회사 Method for Producing the Message and the Wearable Electronic Device supporting the same
US10951562B2 (en) * 2017-01-18 2021-03-16 Snap. Inc. Customized contextual media content item generation
CN107222386A (en) * 2017-05-02 2017-09-29 珠海市魅族科技有限公司 A kind of message back method and terminal
CN107193395A (en) * 2017-05-31 2017-09-22 珠海市魅族科技有限公司 A kind of data inputting method, terminal and computing device
KR102144978B1 (en) * 2018-10-19 2020-08-14 인하대학교 산학협력단 Customized image recommendation system using shot classification of images
CN111309875B (en) * 2018-12-10 2023-08-04 百度在线网络技术(北京)有限公司 Method, device, equipment and storage medium for answering questions
CN110311856A (en) * 2019-06-28 2019-10-08 上海连尚网络科技有限公司 Instant communicating method, equipment and computer readable storage medium
CN112269509B (en) * 2020-10-29 2022-11-25 维沃移动通信(杭州)有限公司 Information processing method and device and electronic equipment
CN113364915B (en) * 2021-06-02 2022-09-27 维沃移动通信(杭州)有限公司 Information display method and device and electronic equipment

Citations (8)

Publication number Priority date Publication date Assignee Title
US20100332218A1 (en) * 2009-06-29 2010-12-30 Nokia Corporation Keyword based message handling
US20120095862A1 (en) * 2010-10-15 2012-04-19 Ness Computing, Inc. (a Delaware Corporation) Computer system and method for analyzing data sets and generating personalized recommendations
CN102868977A (en) * 2011-07-05 2013-01-09 Lg电子株式会社 Mobile device displaying instant message and control method of mobile device
CN103079008A (en) * 2013-01-07 2013-05-01 北京播思软件技术有限公司 Method and system for automatically generating replying suggestion according to content of short message
KR101290977B1 (en) * 2012-01-26 2013-07-30 한국외국어대학교 연구산학협력단 Message transfer method using push server and the system thereby
CN103597479A (en) * 2011-04-08 2014-02-19 诺基亚公司 Method and apparatus for providing a user interface in association with a recommender service
US20140074879A1 (en) * 2012-09-11 2014-03-13 Yong-Moo Kwon Method, apparatus, and system to recommend multimedia contents using metadata
US20140207806A1 (en) * 2013-01-21 2014-07-24 Samsung Electronics Co., Ltd. Method and apparatus for processing information of a terminal

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US9419819B2 (en) * 2007-12-20 2016-08-16 At&T Intellectual Property I, L.P., Via Transfer From At&T Delaware Intellectual Property, Inc. Methods and computer program products for creating preset instant message responses for instant messages received at an IPTV
CN103546362B (en) * 2012-07-11 2018-05-25 腾讯科技(深圳)有限公司 Method, system and the server pushed into row information


Also Published As

Publication number Publication date
KR20160016532A (en) 2016-02-15
TWI689201B (en) 2020-03-21
KR20220038639A (en) 2022-03-29
TW201607303A (en) 2016-02-16
KR102232929B1 (en) 2021-03-29
CN105453612A (en) 2016-03-30
CN105453612B (en) 2021-06-04
KR102447503B1 (en) 2022-09-27

Similar Documents

Publication Publication Date Title
KR102378513B1 (en) Message Service Providing Device and Method Providing Content thereof
CN105453612B (en) Message service providing apparatus and method of providing content via the same
US10841265B2 (en) Apparatus and method for providing information
CN110084056B (en) Displaying private information on a personal device
US10061487B2 (en) Approaches for sharing data between electronic devices
KR102314274B1 (en) Method for processing contents and electronics device thereof
AU2010327453B2 (en) Method and apparatus for providing user interface of portable device
US9900427B2 (en) Electronic device and method for displaying call information thereof
WO2022089209A1 (en) Picture comment processing method and apparatus, electronic device and storage medium
WO2019214072A1 (en) Method for displaying virtual keyboard of input method, and terminal
KR20210003224A (en) Direct input from remote device
WO2018076269A1 (en) Data processing method, and electronic terminal
TWI554900B (en) Apparatus and method for providing information
CN111274363A (en) Method of providing activity notification and apparatus therefor
KR20150088806A (en) Using custom rtf commands to extend chat functionality
KR102370373B1 (en) Method for Providing Information and Device thereof
CN106031101A (en) Deriving atomic communication threads from independently addressable messages
US10909138B2 (en) Transforming data to share across applications
CN106415626B (en) Group selection initiated from a single item
CN115018659A (en) User account grouping method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination