CN109522399B - Method and apparatus for generating information


Info

Publication number
CN109522399B
Authority
CN
China
Prior art keywords
information
user
user input
target
input information
Prior art date
Legal status
Active
Application number
CN201811382265.7A
Other languages
Chinese (zh)
Other versions
CN109522399A
Inventor
高毅
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201811382265.7A
Publication of CN109522399A
Application granted
Publication of CN109522399B

Abstract

The embodiment of the application discloses a method and a device for generating information. One embodiment of the method comprises: acquiring user input information presented by a target page, and acquiring, as previous context information, context information of the user input information presented by the target page before the user input information is presented; extracting entity information from the user input information; extracting, as related information, information related to the entity information from information sets predetermined for a target user set and a target product set; and generating reply information for the user input information based on the user input information, the previous context information, and the related information. The method enriches the ways in which information can be generated, reduces maintenance cost, helps improve the efficiency of generating reply information for user input information, and helps improve the accuracy of identifying the user's intention.

Description

Method and apparatus for generating information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for generating information.
Background
Chat robots (chatbots) are currently in common use in many industries, such as customer service, pre-sale consultation, meal ordering, and ticket booking. A chat robot mainly uses natural language processing technology to process user sentences, through steps including intention identification, answer retrieval, and logic rule judgment, and then generates reply information for the information input by the user.
Existing intent recognition schemes include the following three:
Scheme one: intent recognition models only the current single sentence.
Most current chat systems adopt this scheme. The current user sentence is input into a trained model (such as a logistic regression model or a CNN model), and the intent with the maximum output probability is taken as the classification result.
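As an illustrative sketch only, scheme one can be approximated in Python with scikit-learn; the training sentences, intent labels, and model choice below are hypothetical stand-ins rather than part of the original disclosure.

# Single-sentence intent classification: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "what size should I buy",           # size consultation
    "does this jacket run small",       # size consultation
    "how long until the item arrives",  # arrival time
    "when will my order be delivered",  # arrival time
]
train_intents = ["size_consultation", "size_consultation",
                 "arrival_time", "arrival_time"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_sentences, train_intents)

# The intent with the maximum output probability is taken as the result.
print(classifier.predict(["how long to arrive"])[0])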
Scheme two: splice multiple sentences from the chat context and extract the intent from the spliced whole.
Because recognition of a single sentence is sometimes not accurate enough, some chat systems splice several context sentences together (for example, keeping the latest 5 sentences or the latest 100 words of content), and then perform model classification and recognition on the spliced whole, in the hope that the model can pick up additional information from the context.
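As an illustrative sketch of the splicing step only (the window sizes follow the example above, and the classifier is assumed to be an intent model such as the one sketched for scheme one):

# Splice the latest context sentences, then truncate to the latest words.
def build_context_input(history, current, max_sentences=5, max_words=100):
    sentences = (history + [current])[-max_sentences:]
    spliced = " ".join(sentences)
    words = spliced.split()
    return " ".join(words[-max_words:])

history = ["what size should I buy", "I usually wear a medium"]
spliced_input = build_context_input(history, "how long until it arrives")
# intent = classifier.predict([spliced_input])[0]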
Scheme three: obtain an adjusted final intent by superimposing business rules on the historical intents.
Throughout the conversation, each sentence is classified by the model and a predicted intent is output, and logic rules are then combined to adjust the currently output intent. For example, the user first consults about the size of a garment (intent: size consultation) and then asks how long the goods will take to arrive (arrival time). With single-sentence intent recognition alone, the user might be asking about the arrival time of an existing order or about the approximate arrival time after a future purchase; this can be resolved with a business rule: since the user has been consulting about pre-sale attributes, the arrival time being asked about is judged to be the expected arrival time after purchase.
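As an illustrative sketch only, the rule in the example above might be expressed as follows; the intent names are hypothetical labels.

# Adjust the model's current intent using business rules over the intent history.
def adjust_intent(current_intent, intent_history):
    # Pre-sale attribute consultation followed by an arrival-time question is
    # interpreted as the expected arrival time after purchase.
    if current_intent == "arrival_time" and "size_consultation" in intent_history:
        return "expected_arrival_after_purchase"
    return current_intent

print(adjust_intent("arrival_time", ["size_consultation"]))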
Disclosure of Invention
The embodiment of the application provides a method and a device for generating information.
In a first aspect, an embodiment of the present application provides a method for generating information, where the method includes: acquiring user input information presented by a target page, and acquiring context information of the user input information presented by the target page before the user input information is presented as previous context information; extracting entity information from user input information; extracting information related to the entity information from information sets predetermined for the target user set and the target product set as related information; generating reply information for the user input information based on the user input information, the previous context information, and the related information.
In some embodiments, prior to obtaining the user input information for the target page presentation, the method further comprises: in response to detecting that the user performs a target operation on the target page, extracting information related to the user as user information from information sets predetermined for the target user set and the target product set, and acquiring state information of the user.
In some embodiments, after obtaining context information of the user input information, presented by the target page prior to presenting the user input information, as prior context information, the method further comprises: based on the user information, the state information, and the previous context information, a knowledge graph is constructed as a user context information graph.
In some embodiments, the method further comprises: in response to detecting that the target page presents the contextual information of the user input information after presenting the user input information, extracting entity information and relationship information from the contextual information presented after presenting the user input information, and adding the extracted entity information and relationship information to the user contextual information graph.
In some embodiments, generating reply information for the user input information based on the user input information, the previous context information, and the related information comprises: and generating reply information of the user input information based on the user input information, the user context information map and the related information.
In some embodiments, extracting entity information from the user input information comprises: parsing the user input information to extract entity information from the user input information; determining the type of the entity information from a predetermined type set; and determining at least one of the following for the user input information: subject-predicate-object information, mood words, voice, and sentence category.
In some embodiments, generating reply information for the user input information based on the user input information, the previous context information, and the related information comprises: generating reply information for the user input information based on the user input information, the previous context information, the related information, the type, and at least one item.
In some embodiments, generating reply information for the user input information based on the user input information, the previous context information, and the related information comprises: fusing user input information, previous context information and relevant information to obtain fused information; identifying an intent of a user inputting the user input information based on the user input information, the previous context information, and the related information; and generating reply information of the information input by the user based on the intention and the fused information.
In some embodiments, the predetermined set of information is a pre-constructed knowledge graph, the pre-constructed knowledge graph includes entity information characterizing entities and relationship information characterizing relationships between the entities, the entity information included in the pre-constructed knowledge graph characterizes information related to a target user in the target user set or a target product in the target product set, and the relationship information included in the pre-constructed knowledge graph characterizes any one of the following: a relationship between target users in the target user set, a relationship between target products in the target product set, and a relationship between a target user in the target user set and a target product in the target product set.
In a second aspect, an embodiment of the present application provides an apparatus for generating information, where the apparatus includes: an acquisition unit configured to acquire user input information presented by a target page and acquire, as previous context information, context information of the user input information presented by the target page before the user input information is presented; a first extraction unit configured to extract entity information from the user input information; a second extraction unit configured to extract, as related information, information related to the entity information from information sets predetermined for the target user set and the target product set; and a generating unit configured to generate reply information for the user input information based on the user input information, the previous context information, and the related information.
In some embodiments, the apparatus further comprises: a third extraction unit configured to, in response to detecting that the user performs a target operation with respect to the target page, extract information related to the user as user information from information sets predetermined with respect to the target user set and the target product set, and acquire status information of the user.
In some embodiments, the apparatus further comprises: a construction unit configured to construct a knowledge graph as a user context information graph based on the user information, the state information, and the previous context information.
In some embodiments, the apparatus further comprises: a fourth extraction unit configured to, in response to detecting that the target page presents the context information of the user input information after presenting the user input information, extract the entity information and the relationship information from the context information presented after presenting the user input information, and add the extracted entity information and relationship information to the user context information graph.
In some embodiments, the generating unit comprises: the first generation module is configured to generate reply information of the user input information based on the user input information, the user context information map and the related information.
In some embodiments, the first extraction unit comprises: a parsing module configured to parse the user input information to extract entity information from the user input information; a first determining module configured to determine the type of the entity information from a predetermined type set; and a second determining module configured to determine at least one of the following for the user input information: subject-predicate-object information, mood words, voice, and sentence category.
In some embodiments, the generating unit comprises: a second generation module configured to generate reply information for the user input information based on the user input information, the previous context information, the related information, the type, and at least one item.
In some embodiments, the generating unit comprises: the fusion module is configured to fuse the user input information, the previous context information and the related information to obtain fused information; an identification module configured to identify an intent of a user inputting the user input information based on the user input information, the previous context information, and the related information; and a third generation module configured to generate reply information of the user input information based on the intention and the fused information.
In some embodiments, the predetermined set of information is a pre-constructed knowledge graph, the pre-constructed knowledge graph includes entity information characterizing entities and relationship information characterizing relationships between the entities, the entity information included in the pre-constructed knowledge graph characterizes information related to a target user in the target user set or a target product in the target product set, and the relationship information included in the pre-constructed knowledge graph characterizes any one of the following: a relationship between target users in the target user set, a relationship between target products in the target product set, and a relationship between a target user in the target user set and a target product in the target product set.
In a third aspect, an embodiment of the present application provides an electronic device for generating information, including: one or more processors; a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the method of any of the embodiments of the method for generating information as described above.
In a fourth aspect, the present application provides a computer-readable medium for generating information, on which a computer program is stored, which when executed by a processor implements the method of any one of the embodiments of the method for generating information as described above.
According to the method and apparatus for generating information provided by the embodiments of the application, user input information presented by a target page is acquired, and the context information presented by the target page before the user input information is presented is acquired as previous context information; entity information is then extracted from the user input information; information related to the entity information is extracted, as related information, from information sets predetermined for the target user set and the target product set; and finally, reply information for the user input information is generated based on the user input information, the previous context information, and the related information. In this way, intent recognition is performed on the user input information based on the predetermined information sets and the context information so as to generate the reply information, which enriches the ways in which information can be generated, reduces the cost of maintaining the chat robot, helps improve the efficiency of generating reply information for user input information, and helps improve the accuracy of recognizing the user's intention.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for generating information according to the present application;
FIG. 3 is a schematic illustration of a pre-constructed knowledge-graph of a method for generating information according to the present application;
FIGS. 4A-4C are schematic diagrams of an application scenario of a method for generating information according to the present application;
FIG. 5 is a flow diagram of yet another embodiment of a method for generating information according to the present application;
FIG. 6 is a schematic block diagram illustrating one embodiment of an apparatus for generating information according to the present application;
FIG. 7 is a block diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of a method for generating information or an apparatus for generating information of embodiments of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various client applications installed thereon, such as a web browser application, a shopping-like application, a search-like application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen and supporting page browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. No particular limitation is made here.
The server 105 may be a server providing various services, such as a background server providing support for user input information sent by the terminal devices 101, 102, 103. The background server may analyze and perform other processing on the received data such as the user input information, and feed back a processing result (e.g., information for replying to the user input information) to the terminal device.
It should be noted that the method for generating information provided in the embodiment of the present application may be executed by the terminal devices 101, 102, and 103, and accordingly, the apparatus for generating information may be disposed in the terminal devices 101, 102, and 103. In addition, the method for generating information provided by the embodiment of the present application may also be executed by the server 105, and accordingly, the apparatus for generating information may also be disposed in the server 105.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services), or as a single piece of software or software module. No particular limitation is made here.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. The system architecture may include only the electronic device on which the method for generating information is run, when the electronic device on which the method for generating information is run does not need to perform data transmission with other electronic devices.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for generating information in accordance with the present application is shown. The method for generating information comprises the following steps:
step 201, obtaining the user input information presented by the target page, and obtaining the context information of the user input information presented by the target page before presenting the user input information as the prior context information.
In this embodiment, an execution subject (for example, a server or a terminal device shown in fig. 1) of the method for generating information may obtain, through a wired connection manner or a wireless connection manner, user input information presented by a target page from other electronic devices communicatively connected thereto or locally, and obtain context information of the user input information presented by the target page before presenting the user input information as previous context information.
The target page may be a page for presenting information input by the user. For example, the target page may be a page on which the user can consult, communicate, converse, etc. Through the target page, conversation can be carried out between two users, between the user and customer service personnel and between the user and the chat robot. Here, the chat robot may be software or hardware having a chat function. For example, the chat bot may be an application or application plug-in with chat functionality.
The user input information may be information input by a user. For example, when a user needs to consult information related to a product, the user may input information, and at this time, the information input by the user is the user input information. Here, the user input information may be text, emoticons, pictures, video, voice, and the like. As an example, the user input information may be a link to a page presenting information of the product.
It will be appreciated that during a user session, typically, a user will input multiple pieces of information (i.e., user input information), and a terminal used by the user will also receive multiple pieces of reply information (e.g., information that the chat robot replies to the user input information). The user input information and the reply information can be sequentially presented on the target page according to the time sequence. The context information may include information input by the user and reply information received by a terminal used by the user. It should be appreciated that the context information may generally reflect the context of the information entered by the user, and thus, through the context information, the executing entity may more accurately determine the intent of the user.
In some optional implementations of this embodiment, after executing step 201, the execution subject may further construct a knowledge graph, as a user context information graph, based on the user information, the state information, and the previous context information.
Here, the execution subject may extract entity information and relationship information from the user information, the state information, and the previous context information, respectively, and then construct the knowledge graph by using the extracted entity information as the entity information of the knowledge graph to be constructed and the extracted relationship information as the relationship information of the knowledge graph to be constructed.
It is understood that a knowledge graph, also known as a scientific knowledge graph, is a graph-based data structure composed of nodes and edges. In the knowledge graph, each node represents an "entity" existing in the real world, and each edge represents a "relationship" between entities.
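As an illustrative sketch only, such a user context information graph might be built in Python with the networkx library; the entities and relations below are hypothetical examples, not part of the original disclosure.

# Build a small user context information graph: nodes are entities, edges carry relations.
import networkx as nx

graph = nx.MultiDiGraph()

def add_fact(head_entity, relation, tail_entity):
    # Each edge stores the relationship information between two entities.
    graph.add_edge(head_entity, tail_entity, relation=relation)

# Facts extracted from user information, state information, and previous context.
add_fact("first_user", "browsed", "first_mobile_phone")
add_fact("first_user", "located_in", "Beijing")
add_fact("first_mobile_phone", "running_memory", "4GB")

print(list(graph.edges(data=True)))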
In some optional implementations of this embodiment, in the case that it is detected that the target page presents context information of the user input information after the user input information is presented, the execution subject may further extract entity information and relationship information from the context information presented after the user input information, and add the extracted entity information and relationship information to the user context information graph.
It can be understood that, in this case, the execution subject updates and adjusts the user context information graph in time by adding the entity information and relationship information from the newly presented context information, so that the graph always contains the latest entity information and/or relationship information of the user's context, which helps improve the accuracy of identifying the user's intention. Context information arising during the user's consultation can thus be saved and retrieved more efficiently. In addition, by constructing a user context information graph for each user, information related to the user (such as the user's historical purchase records and shopping preferences) can be captured in a more targeted way. Compared with maintaining the user's context information manually, constructing the knowledge graph increases the degree of automation of context maintenance, saves labor, and reduces maintenance cost.
Step 202, entity information is extracted from the user input information.
In this embodiment, the execution subject may extract entity information from the user input information acquired in step 201. The entity information may represent a physical entity, or represent information related to the physical entity.
Here, the execution subject may extract entity information from the user input information by using a named entity extraction method based on HanLP (a Java toolkit consisting of a series of models and algorithms) segmentation, an entity extraction method based on deep learning, or another entity extraction method.
As an example, the execution body may extract a product name, a product number, an order number, a mobile phone number, an address, and the like from the user input information as the entity information.
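As an illustrative sketch only, and as a simplified stand-in for a HanLP-based or deep-learning-based extractor, entity information such as order numbers and mobile phone numbers could be pulled out with patterns like the following; the number formats are hypothetical.

# Simplified entity extraction: map entity types to illustrative patterns.
import re

ENTITY_PATTERNS = {
    "order_number": re.compile(r"\b\d{12}\b"),   # hypothetical 12-digit order number
    "phone_number": re.compile(r"\b1\d{10}\b"),  # hypothetical 11-digit mobile number
}

def extract_entities(user_input):
    entities = []
    for entity_type, pattern in ENTITY_PATTERNS.items():
        for match in pattern.findall(user_input):
            entities.append((entity_type, match))
    return entities

print(extract_entities("order 201811221234 has not shipped, my phone is 13800138000"))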
In some optional implementations of this embodiment, the step 202 may further include the following steps:
first, user input information is parsed to extract entity information from the user input information.
Here, the execution body may parse the user input information using a named entity extraction method based on HanLP segmentation, an entity extraction method based on deep learning, or another entity extraction method to extract entity information from the user input information.
Then, the type of the entity information is determined from a predetermined type set. The type set may be a set of types of entity information determined by a technician or other relevant personnel. The type of the entity information may include, but is not limited to, one of the following: product name, product number, order number, mobile phone number, address, and the like.
Finally, at least one of the following is determined for the user input information: subject-predicate-object information, mood words, voice, and sentence category, as sketched below. The voice may include, but is not limited to, active voice, passive voice, and the like. The sentence categories may include, but are not limited to, declarative sentences, interrogative sentences, rhetorical questions, imperative sentences, and the like.
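As an illustrative sketch only: the type lookup and the sentence-category judgment might be approximated with a predetermined type table and simple heuristics; a real system would rely on syntactic parsing, and the rules and values below are hypothetical.

# Determine the entity type from a predetermined type set and a rough sentence category.
TYPE_SET = {
    "first_mobile_phone": "product_name",
    "201811221234": "order_number",
    "13800138000": "phone_number",
}

def entity_type(entity):
    return TYPE_SET.get(entity, "unknown")

def sentence_category(sentence):
    text = sentence.strip().lower()
    if text.endswith("?") or text.startswith(("how", "when", "what", "is", "do")):
        return "interrogative"
    if text.endswith("!"):
        return "imperative"
    return "declarative"

print(entity_type("13800138000"), sentence_category("When will my order arrive?"))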
Step 203, extracting information related to the entity information as related information from the information sets predetermined for the target user set and the target product set.
In this embodiment, the execution subject may extract, as the related information, information related to the entity information extracted in step 202 from information sets predetermined for the target user set and the target product set.
The target user set may be a set of all or a portion of users who have used a software (e.g., shopping-like software) or a website.
The target product set may be a set of all or part of products displayed by a certain software (e.g., shopping software) or a website, or may be a set of all or part of products displayed in a store (virtual store) in the software or the website.
The information set may include product information of all or some of the products in the target product set and user information of all or some of the users in the target user set.
The information related to the entity information may be various information related to the entity information in an information set.
It is to be understood that the execution subject may extract information related to the entity information from the information set according to a predetermined extraction rule. As an example, assume that the entity information is "first mobile phone", and the information set is: "the storage information of the first mobile phone comprises a storage memory and a running memory; the storage memory is 64 GB; the running memory is 4 GB". In addition, there is a direct relationship between "first mobile phone" and "storage information" (as an example, in a knowledge graph, a direct relationship may be represented as an edge connecting the nodes characterizing the two entities); "storage information" is directly related to "storage memory" and "running memory"; there is a direct relationship between "storage memory" and "64 GB"; and there is a direct relationship between "running memory" and "4 GB". In this scenario, if the extraction rule is "extract a piece of information if it has a direct relationship with the entity information", the execution subject may extract the information "storage information" related to the entity information from the information set as the related information. If the extraction rule is "extract a piece of information if it has a direct or indirect relationship with the entity information", the execution subject may extract the information "storage information", "storage memory", "64 GB", "running memory", and "4 GB" related to the entity information from the information set as the related information.
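As an illustrative sketch only, the two extraction rules in the example above could be realized over a small graph built with networkx, where a direct relationship is taken to be a one-hop edge and an indirect relationship any longer path; the node names mirror the example.

# Extract related information under the "direct" and "direct or indirect" rules.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("first_mobile_phone", "storage_information")
kg.add_edge("storage_information", "storage_memory")
kg.add_edge("storage_information", "running_memory")
kg.add_edge("storage_memory", "64GB")
kg.add_edge("running_memory", "4GB")

directly_related = set(kg.successors("first_mobile_phone"))                 # {'storage_information'}
directly_or_indirectly_related = nx.descendants(kg, "first_mobile_phone")   # all five nodes

print(directly_related)
print(directly_or_indirectly_related)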
In some optional implementations of this embodiment, the predetermined set of information is a pre-constructed knowledge graph. The pre-constructed knowledge graph includes entity information characterizing entities and relationship information characterizing relationships between the entities. The entity information included in the pre-constructed knowledge graph characterizes information related to a target user in the target user set or a target product in the target product set. The relationship information included in the pre-constructed knowledge graph characterizes any one of the following: a relationship between target users in the target user set, a relationship between target products in the target product set, and a relationship between a target user in the target user set and a target product in the target product set.
It can be understood that, by constructing a knowledge graph to store the relationships between products and users, between products, and between users, information related to the user input information can be queried quickly and simply compared with existing methods for mining user intent, thereby providing the execution subject with more knowledge related to the user during the chat, helping the execution subject understand the user's intent more accurately, and enabling more accurate services. Compared with maintaining the user's context information manually, constructing the knowledge graph increases the degree of automation of context maintenance, saves labor, and reduces maintenance cost.
In some optional implementations of this embodiment, before performing step 201, in a case that the execution subject or an electronic device communicatively connected thereto detects that the user performs a target operation on the target page, the execution subject may further extract information related to the user from the predetermined information set as user information, and acquire status information of the user.
The target operation may be any operation performed by the user with respect to the target page. As an example, the target operation may be an operation of the user to enter the target page (e.g., an operation of the user to enter the target page by clicking a button (e.g., "consult" button, "customer service" button)).
The information related to the user may be various information related to the user in the predetermined information set. For example, the information related to the user may include, but is not limited to, at least one of the following: the user's basic registration information, user profile, purchase records, and order records (e.g., information of purchased products and attribute information of purchased products). When the information set is characterized by a knowledge graph (in this case, the knowledge graph includes entity information characterizing the user), the information related to the user may also be entity information that is connected by an edge (a relationship) to the entity information characterizing the user included in the knowledge graph, entity information whose path length to the entity information characterizing the user included in the knowledge graph is smaller than a preset threshold (for example, 5), entity information characterized by all child nodes of the entity information characterizing the user included in the knowledge graph, and so on.
The status information may be used to characterize the status of the user. For example, the status information may include, but is not limited to, at least one of: geographical location information of the user, time zone of the user, acquisition date of acquiring the status information of the user, acquisition time of acquiring the status information of the user, and the like.
It is to be understood that the execution subject may extract information related to the user from the information set as user information according to a predetermined extraction rule, and acquire the state information of the user. As an example, assume that the entity information is "first mobile phone", and the information set is: "the storage information of the first mobile phone comprises a storage memory and a running memory; the storage memory is 64 GB; the running memory is 4 GB". There is a direct relationship between "first mobile phone" and "storage information"; "storage information" is directly related to "storage memory" and "running memory"; there is a direct relationship between "storage memory" and "64 GB"; and there is a direct relationship between "running memory" and "4 GB". Indirect relationships exist among the other pieces of entity information. In practice, whether two pieces of entity information have a direct or an indirect relationship can be distinguished by an identifier (e.g., 0 or 1). In this scenario, if the extraction rule is "extract a piece of information if it has a direct relationship with the entity information", the execution subject may extract the information "storage information" related to the entity information from the information set as the related information. If the extraction rule is "extract a piece of information if it has a direct or indirect relationship with the entity information", the execution subject may extract the information "storage information", "storage memory", "64 GB", "running memory", and "4 GB" related to the entity information from the information set as the related information.
As an example, please refer to fig. 3. When the execution subject detects that the first user performs a target operation (for example, an operation of entering the target page) on the target page, it extracts information related to the first user from the knowledge graph shown in fig. 3 as user information (for example, the entity information 303 and the entity information 305 which characterize the first user, the entity information 307 which characterizes the second user, the relationship information 302 and the relationship information 304 which characterize the browsing relationship between the first user and the first mobile phone, and the relationship information 307 which characterizes the purchasing relationship between the second user and the second mobile phone), and acquires the state information of the user (for example, the geographical location information of the user, the time zone of the user, the acquisition date on which the state information of the user is acquired, the acquisition time at which the state information of the user is acquired, and the like).
And step 204, generating reply information of the user input information based on the user input information, the previous context information and the relevant information.
In this embodiment, the execution subject may generate reply information of the user input information acquired in step 201 based on the user input information acquired in step 201, the previous context information acquired in step 201, and the related information acquired in step 203.
The reply information may be information for replying to the information input by the user. As an example, assuming that the user inputs information "when to ship", the reply information may be "within 24 hours after order placement".
In some optional implementations of this embodiment, this step 204 may be performed according to the following steps:
firstly, fusing user input information, previous context information and relevant information to obtain fused information.
Here, the execution subject may fuse the user input information, the previous context information, and the related information to obtain the fused information by adopting an existing text information fusion method (for example, a fusion method based on machine learning or a fusion method based on predetermined rules) or a text information fusion method proposed in the future, which is not described in detail here.
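As an illustrative sketch only, a rule-based fusion might simply merge the three sources into one structured record; the field names and example values are hypothetical.

# Minimal rule-based fusion of user input, previous context, and related information.
def fuse(user_input, previous_context, related_info):
    return {
        "user_input": user_input,
        "previous_context": " ".join(previous_context),
        "related_info": related_info,
    }

fused = fuse("do both card slots support telecom?",
             ["I am looking at the dual SIM dual standby phone"],
             {"dual_sim_dual_standby": {"card_2": "2G network only"}})
print(fused)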
In a second step, an intention of the user inputting the user input information is identified based on the user input information, the previous context information, and the related information.
Here, the execution subject may use a deep-learning-based intent recognition model to recognize the user input information, the previous context information, and the related information, thereby obtaining the intent of the user who input the user input information (i.e., the user's intent represented in the form of information). Optionally, the execution subject may also use a deep-learning-based intent recognition model to recognize the user input information to obtain multiple possible intents of the user who input the user input information, and then determine, according to the previous context information and the related information, a final intent of the user from the multiple intents. For example, assume that the user inputs the information "how much is apple". The execution subject may recognize the user input information by using the deep-learning-based intent recognition model, and the intent of the user may be either "inquiring information about the fruit apple" or "inquiring information about the mobile phone". The execution subject may then determine the final intent from the multiple obtained intents according to the previous context information and the related information. Here, assuming that the previous context information and the related information characterize that the user has been concerned with the mobile phone, the execution subject may determine "inquiring information about the mobile phone" as the intent of the user who input the user input information.
Here, the intention recognition model is a well-known technology widely studied at present and will not be described in detail herein.
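As an illustrative sketch only, the optional two-stage variant above (candidate intents first, then selection using the previous context and related information) might be approximated with a simple keyword-overlap score; the intent names, keywords, and context are hypothetical.

# Pick the final intent among candidates by scoring overlap with the context evidence.
def pick_final_intent(candidates, previous_context, related_info):
    evidence = (" ".join(previous_context) + " " + " ".join(related_info)).lower()
    def score(candidate):
        _, keywords = candidate
        return sum(1 for keyword in keywords if keyword in evidence)
    return max(candidates, key=score)[0]

candidates = [("inquire_fruit_price", ["fruit", "per jin"]),
              ("inquire_phone_price", ["mobile phone", "memory", "model"])]
previous_context = ["which mobile phone has 256G memory"]
related_info = ["champagne mobile phone", "64G", "256G"]

print(pick_final_intent(candidates, previous_context, related_info))  # inquire_phone_price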
And thirdly, generating reply information of the information input by the user based on the intention obtained in the second step and the fused information obtained in the first step.
Here, the execution subject may generate the reply information for the user input information by employing a machine-learning-based method and/or a predetermined reply information generation rule.
For example, the executing agent may input the intention and the fused information into a pre-trained first reply information generation model, so as to obtain reply information of the information input by the user. The first reply information generation model can be used for representing the corresponding relation between the intention and the fused information and the reply information of the information input by the user.
For example, the first reply information generation model may be a table or a database, which is obtained by a technician through a large number of statistics and stores the user's intention, the fused information, the reply information of the user input information, and the corresponding relationship therebetween.
The first reply information generation model may be obtained by the execution subject or another electronic device through training as follows: first, a first training sample set is obtained, where a first training sample comprises an intent, fused information, and reply information of the user input information corresponding to the intent and the fused information. Then, for a first training sample in the first training sample set, the intent and the fused information included in the first training sample are used as the input of an initial model (for example, a neural network model), the reply information of the user input information included in the first training sample is used as the expected output of the initial model, and the training parameters of the initial model are continuously adjusted until a stop condition is met; training is then stopped, and the model obtained after training stops is used as the first reply information generation model. The stop condition may include at least one of the following: the number of training iterations reaches a preset number, the value of the loss function is smaller than a preset threshold, and the training time exceeds a preset duration.
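As an illustrative sketch only, focusing on the three stop conditions above: model and train_step below are hypothetical placeholders for the initial model (e.g., a neural network) and for one parameter-update step on a first training sample.

# Training skeleton with the three stop conditions: iteration limit, loss threshold, time limit.
import time

def train_reply_model(model, training_samples, train_step,
                      max_epochs=100, loss_threshold=0.01, max_seconds=3600):
    start = time.time()
    for epoch in range(max_epochs):                                  # stop: preset number of iterations
        epoch_loss = 0.0
        for intent, fused_info, expected_reply in training_samples:
            # Input: intent + fused information; expected output: the reply information.
            epoch_loss += train_step(model, (intent, fused_info), expected_reply)
        if epoch_loss / len(training_samples) < loss_threshold:      # stop: loss below threshold
            break
        if time.time() - start > max_seconds:                        # stop: training time exceeded
            break
    return model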
The above reply information generation rule may be a rule set for a large amount of user input information and user intents. For example, a reply information generation rule may be: if the user input information is "do both support telecom" and the user's intent is "communication system consultation", the reply information is "both support the use of a telecom card, but card 2 only supports a 2G network".
As an example, assume that the user inputs the information "do both support telecom". If the execution subject does not incorporate the context information, it is difficult to correctly understand the user's intent and generate reply information by performing intent analysis or retrieval on this single sentence alone. Through syntax tree analysis and entity recognition, the execution subject can recognize the entity information "telecom" and the relationship information "support", and can then perform retrieval, so that the most relevant intent "communication system consultation" and the entity information "dual SIM dual standby" are retrieved. The execution subject can then fuse this information to obtain the intent "communication system consultation", the entity information "dual SIM dual standby", and the relationship information "whether telecom is supported". The fused information is comprehensive, and the user's intent can be fully understood and reply information generated. Here, the execution subject may generate the reply information for the user input information based on the first reply information generation model (for example, "both support the use of a telecom card, but card 2 only supports a 2G network").
In some optional implementations of this embodiment, this step 204 may also be performed according to the following steps: and generating reply information of the user input information based on the user input information, the user context information map and the related information.
Here, the executing agent may first identify the user input information, information related to the user input information in the user context information map (e.g., entity information in the user context information map that is related to (e.g., connected by edges) entity information in the user input information), and the related information using an intention identification model based on deep learning, so as to obtain the intention of the user inputting the user input information.
Optionally, the executing body may also use an intention recognition model based on deep learning to recognize the user input information to obtain multiple possible intentions of the user inputting the user input information, and then recognize according to the information related to the user input information in the user context information map and the related information, and determine a final intention of the user inputting the user input information from the multiple intentions. Then, the execution body may generate reply information of the user input information according to the obtained user intention. For example, the executing entity may input the user's intention and the user input information into a second reply information generation model trained in advance, so as to obtain reply information of the user input information. The second reply information generation model can be used for representing the corresponding relationship between the intention of the user, the user input information and the reply information of the user input information.
For example, the second reply information generation model may be a table or a database, which is obtained by a technician through a large number of statistics and stores the user's intention, the user input information, the reply information of the user input information, and the corresponding relationship therebetween.
The second reply information generation model may be obtained by the execution subject or another electronic device through training as follows: first, a second training sample set is obtained, where a second training sample includes an intent of the user, user input information, and reply information of the user input information corresponding to the intent of the user and the user input information. Then, for a second training sample in the second training sample set, the intent and the user input information included in the second training sample are used as the input of the initial model (for example, a neural network model), the reply information of the user input information included in the second training sample is used as the expected output of the initial model, and the training parameters of the initial model are continuously adjusted until a stop condition is met; training is then stopped, and the model obtained after training stops is used as the second reply information generation model. The stop condition may include at least one of the following: the number of training iterations reaches a preset number, the value of the loss function is smaller than a preset threshold, and the training time exceeds a preset duration.
Optionally, the executing body may further input the user input information and information related to the user input information in the user context information map to a third pre-trained reply information generation model, so as to obtain reply information of the user input information. The third reply information generation model may be configured to represent a correspondence between the user input information, information related to the user input information in the user context information map, and reply information of the user input information.
For example, the third reply information generation model may be obtained by a skilled person through a large number of statistics, and is a table or a database storing the user input information, the information related to the user input information in the user context information map, the reply information of the user input information, and the corresponding relationship therebetween.
The third reply information generation model may be obtained by the execution subject or another electronic device through training as follows: first, a third training sample set is obtained, where a third training sample comprises user input information, information in the user context information map related to the user input information, and reply information of the user input information corresponding to them. Then, for a third training sample in the third training sample set, the user input information and the information related to the user input information in the user context information map included in the third training sample are used as the input of an initial model (for example, a neural network model), the reply information of the user input information included in the third training sample is used as the expected output of the initial model, and the training parameters of the initial model are continuously adjusted until a stop condition is met; training is then stopped, and the model obtained after training stops is used as the third reply information generation model. The stop condition may include at least one of the following: the number of training iterations reaches a preset number, the value of the loss function is smaller than a preset threshold, and the training time exceeds a preset duration.
In some optional implementations of this embodiment, this step 204 may also be performed according to the following steps: generating the reply information for the user input information based on the user input information, the previous context information, the related information, the type, and the at least one item (namely, at least one of the subject-predicate-object information, the mood words, the voice, and the sentence category).
For example, the execution subject may input the user input information, the previous context information, the related information, the type, and the at least one item into a pre-trained fourth reply information generation model to obtain the reply information for the user input information. The fourth reply information generation model may represent a correspondence between, on the one hand, the user input information, the previous context information, the related information, the type, and at least one of the subject-predicate-object information, the mood words, the voice, and the sentence category, and, on the other hand, the reply information for the user input information.
For example, the fourth reply information generation model may be a table or a database, obtained by a technician through a large number of statistics, that stores the user input information, the previous context information, the related information, the type, at least one of the subject-predicate-object information, the mood words, the voice, and the sentence category, the reply information for the user input information, and the correspondence between them.
The fourth reply information generation model may be obtained by the execution subject or another electronic device through training as follows: first, a fourth training sample set is obtained, where a fourth training sample comprises user input information, previous context information, related information, the type, the at least one item, and reply information of the user input information corresponding to them. Then, for a fourth training sample in the fourth training sample set, the user input information, the previous context information, the related information, the type, and the at least one item included in the fourth training sample are used as the input of the initial model (for example, a neural network model), the reply information of the user input information included in the fourth training sample is used as the expected output of the initial model, and the training parameters of the initial model are continuously adjusted until a stop condition is met; training is then stopped, and the model obtained after training stops is used as the fourth reply information generation model. The stop condition may include at least one of the following: the number of training iterations reaches a preset number, the value of the loss function is smaller than a preset threshold, and the training time exceeds a preset duration.
With continuing reference to figs. 4A-4C, figs. 4A-4C are schematic diagrams of an application scenario of the method for generating information according to the present embodiment. In this application scenario, it is assumed that XX may be both the name of a fruit and a brand of mobile phone. As shown in fig. 4A, fig. 4A is an information set (represented in the figure by a knowledge graph) predetermined for a target user set (illustrated as including a first user) and a target product set (illustrated as including XX). In fig. 4A, entity information 410 represents the first user, entity information 412 represents XX (a product name), entity information 415 represents fruit, entity information 416 represents mobile phone, entity information 419 represents 10 yuan per jin, entity information 420 represents champagne, entity information 423 represents 64G, entity information 424 represents 256G, entity information 427 represents 9599, and entity information 428 represents 10999. Relationship information 411 represents that the relationship between entity information 410 and entity information 412 is consultation (the first user consulted information related to the product XX); relationship information 413 represents that the relationship between entity information 412 and entity information 415 is category (the category of XX is fruit); relationship information 414 represents that the relationship between entity information 412 and entity information 416 is category (the category of XX is mobile phone); relationship information 417 represents that the relationship between entity information 415 and entity information 419 is price (the price of the fruit XX is 10 yuan per jin); relationship information 418 represents that the relationship between entity information 416 and entity information 420 is color (the colors of the mobile phone XX include champagne); relationship information 421 represents that the relationship between entity information 420 and entity information 423 is memory (the memory of the champagne mobile phone XX may be 64G); relationship information 422 represents that the relationship between entity information 420 and entity information 424 is memory (the memory of the champagne mobile phone XX may be 256G); relationship information 425 represents that the relationship between entity information 423 and entity information 427 is price (the price of the 64G champagne mobile phone XX is 9599 yuan); and relationship information 426 represents that the relationship between entity information 424 and entity information 428 is price (the price of the 256G champagne mobile phone XX is 10999 yuan). As shown in fig. 4B, the first user inputs user input information 403 (i.e., "how much is XX"). Thereafter, the mobile phone acquires the user input information 403 presented on the target page (e.g., a consultation page) as shown in fig. 4B, and acquires the context information 401 and 402 presented on the target page before the user input information is presented as the previous context information. The mobile phone then extracts the entity information "XX" from the user input information 403. Next, the mobile phone extracts information related to the entity information 412 (i.e., XX) from the above information set predetermined for the target user set and the target product set as the related information.
As an example, the mobile phone may extract the entity information 412 and all child nodes of the entity information 412 from the information set shown in fig. 4A, as well as the edges included in fig. 4A between the extracted nodes. Optionally, the mobile phone may also extract the entity information 412, all child nodes of the entity information 412, and the parent node of the entity information 412 from the information set shown in fig. 4A. Finally, as shown in fig. 4C, the mobile phone generates reply information 404 for the user input information based on the user input information 403, the previous context information 401 and 402, and the related information (e.g., "the price of the 64G champagne mobile phone XX is 9599 yuan").
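For illustration only, the extraction of related information from a knowledge graph shaped like fig. 4A can be sketched in Python with the networkx library; the node labels and relation names below are simplified stand-ins for the entity information and relationship information described above, not a required implementation.

import networkx as nx

kg = nx.DiGraph()
kg.add_edge("first_user", "XX", relation="consults")
kg.add_edge("XX", "fruit", relation="category")
kg.add_edge("XX", "mobile phone", relation="category")
kg.add_edge("fruit", "10 yuan per jin", relation="price")
kg.add_edge("mobile phone", "champagne", relation="color")
kg.add_edge("champagne", "64G", relation="memory")
kg.add_edge("champagne", "256G", relation="memory")
kg.add_edge("64G", "9599 yuan", relation="price")
kg.add_edge("256G", "10999 yuan", relation="price")

entity = "XX"
# Entity information 412 plus all of its child nodes (descendants), and
# optionally its parent node(s).
related_nodes = {entity} | nx.descendants(kg, entity) | set(kg.predecessors(entity))
related_info = kg.subgraph(related_nodes)  # keeps the edges between kept nodes
for head, tail, data in related_info.edges(data=True):
    print(head, data["relation"], tail)

Because subgraph() induces the edges between the retained nodes, the edges of fig. 4A that connect the extracted nodes are preserved automatically.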
The method provided by the embodiment of the application recognizes the intent of the user input information based on the predetermined information set and the context information, and then generates reply information for the user input information. This enriches the ways in which information can be generated, reduces the cost of maintaining the chat robot, helps improve the efficiency of generating reply information for user input information, and helps improve the accuracy of recognizing the user's intent.
With further reference to fig. 5, a flow 500 of yet another embodiment of a method for generating information is shown. The flow 500 of the method for generating information includes the steps of:
step 501, obtaining user input information presented by a target page, and obtaining context information of the user input information presented by the target page before presenting the user input information as previous context information.
In this embodiment, step 501 is substantially the same as step 201 in the corresponding embodiment of fig. 2, and is not described here again.
Step 502, in response to detecting that the user performs a target operation on a target page, extracting information related to the user from a pre-constructed knowledge graph as user information, and acquiring state information of the user.
In the present embodiment, the execution subject of the method for generating information (e.g., the server or terminal device shown in fig. 1) may, upon detecting that the user has performed a target operation on the target page, extract information related to the user from a pre-constructed knowledge graph as user information and acquire the state information of the user.
The target operation may be any operation performed by the user on the target page. As an example, the target operation may be the operation by which the user enters the target page (e.g., clicking a "consult" or "customer service" button).
The information related to the user may be any information in the predetermined information set that relates to the user. For example, the information related to the user may include, but is not limited to, at least one of: the user's basic registration information, user profile, purchase records, and order records (e.g., product information and attribute information of purchased products). When the information set is characterized by a knowledge graph (in this case the knowledge graph includes entity information characterizing the user), the information related to the user may also be entity information connected by an edge (representing a relationship) to the entity information characterizing the user, or entity information whose path length to the entity information characterizing the user in the knowledge graph is smaller than a preset threshold (for example, 5).
The state information may be used to characterize the state of the user. For example, the state information may include, but is not limited to, at least one of: geographical location information of the user, the time zone of the user, the acquisition date on which the state information of the user is acquired, the acquisition time at which the state information of the user is acquired, and the like.
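Purely as an illustrative sketch under stated assumptions (the predetermined information set is held as a networkx graph with one node per entity; get_geo_location() is a hypothetical helper), the user information and state information described above might be gathered as follows.

from datetime import datetime
import networkx as nx

def collect_user_info_and_state(kg: nx.Graph, user_node: str, max_path_len: int = 5):
    # User information: entities whose path length to the entity information
    # characterizing the user is smaller than a preset threshold (e.g., 5).
    lengths = nx.single_source_shortest_path_length(kg, user_node,
                                                    cutoff=max_path_len - 1)
    user_info = {node for node, dist in lengths.items() if dist > 0}

    # State information: geographic location, time zone, acquisition date and
    # acquisition time of acquiring the state information.
    now = datetime.now().astimezone()
    state_info = {
        "geo_location": get_geo_location(user_node),  # hypothetical helper
        "time_zone": str(now.tzinfo),
        "acquisition_date": now.date().isoformat(),
        "acquisition_time": now.time().isoformat(timespec="seconds"),
    }
    return user_info, state_info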
Step 503, constructing a knowledge graph as a user context information graph based on the user information, the state information and the previous context information.
In the present embodiment, the execution body may construct a knowledge graph as the user context information graph based on the user information, the state information, and the previous context information.
Here, the execution subject may extract entity information and relationship information from the user information, the state information, and the previous context information, respectively, and then construct the knowledge graph using the extracted entity information as the entity information of the knowledge graph to be constructed and the extracted relationship information as the relationship information of the knowledge graph to be constructed.
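For illustration only, the construction of the user context information map from these three sources might look like the following sketch, where extract_triples() is a hypothetical entity/relationship extraction helper returning (head, relation, tail) tuples rather than part of this disclosure.

import networkx as nx

def build_user_context_graph(user_info, state_info, previous_context):
    context_graph = nx.DiGraph()
    for source in (user_info, state_info, previous_context):
        for head, relation, tail in extract_triples(source):  # hypothetical helper
            # Extracted entity information becomes nodes of the knowledge graph
            # to be constructed; extracted relationship information becomes
            # labelled edges between those nodes.
            context_graph.add_edge(head, tail, relation=relation)
    return context_graph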
Step 504, entity information is extracted from the user input information.
In this embodiment, step 504 is substantially the same as step 202 in the corresponding embodiment of fig. 2, and is not described here again.
Step 505, extracting information related to the entity information from the predetermined information set as related information.
In this embodiment, the execution subject may extract information related to the entity information extracted in step 504 as related information from information sets predetermined for the target user set and the target product set.
The target user set may be a set of all or some of the users who have used certain software (e.g., shopping software) or a certain website.
The target product set may be a set of all or some of the products displayed by certain software (e.g., shopping software) or a certain website, or a set of all or some of the products displayed in a store (a virtual store) within the software or website.
The information set may include product information of all or some of the products in the target product set and user information of all or some of the users in the target user set.
The information related to the entity information may be various information related to the entity information in an information set.
Step 506, in response to detecting that the target page presents the context information of the user input information after presenting the user input information, extracting the entity information and the relationship information from the context information presented after presenting the user input information, and adding the extracted entity information and relationship information to the user context information map.
In this embodiment, in a case where it is detected that the target page presents the context information of the user input information after presenting the user input information, the execution main body may extract the entity information and the relationship information from the context information presented after presenting the user input information, and add the extracted entity information and relationship information to the user context information map.
It can be understood that, when the target page is detected to present context information of the user input information after the user input information has been presented, the execution subject can update the user context information map in time by adding the entity information and relationship information extracted from that newly presented context information to the map. This ensures that the map contains the latest entity information and/or relationship information in the user's context, which helps improve the accuracy of identifying the user's intent. Context information arising during the user's consultation can also be saved and retrieved more efficiently. In addition, by constructing a user context information map for each user, information related to that user can be obtained in a more targeted way. Compared with maintaining the user's context information manually, constructing the knowledge graph increases the degree of automation of context maintenance, saves labor cost, and reduces maintenance cost.
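A minimal sketch of this update step, again assuming the hypothetical extract_triples() helper and a networkx-style user context information map, is given below for illustration only.

def update_user_context_graph(context_graph, new_context_information):
    # Fold the entity information and relationship information found in the
    # context information presented after the user input information into the
    # user context information map, so the map stays current.
    for head, relation, tail in extract_triples(new_context_information):  # hypothetical
        context_graph.add_edge(head, tail, relation=relation)
    return context_graph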
And 507, generating reply information of the user input information based on the user input information, the user context information map and the related information.
In this embodiment, the execution subject may generate reply information of the user input information based on the user input information, the user context information map, and the related information.
Here, the execution subject may first use a deep-learning-based intention identification model to identify the user input information, the information in the user context information map that is related to the user input information (e.g., entity information in the map that is related to, for example connected by an edge to, entity information in the user input information), and the related information, so as to obtain the intent of the user who input the user input information. The execution subject may then generate the reply information of the user input information based on the obtained intent and the above information.
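One possible, non-limiting realization of this identification step is sketched below, assuming the user context information map is a networkx directed graph, entity_mentions stands for the entity information extracted from the user input information, and intent_model is a hypothetical pre-trained deep-learning classifier that accepts a single text string.

import networkx as nx

def identify_intent(user_input, entity_mentions, context_graph: nx.DiGraph,
                    related_info, intent_model):
    # Information related to the user input in the user context information
    # map: edges touching any entity mentioned in the user input information.
    graph_facts = []
    for entity in entity_mentions:
        if entity in context_graph:
            for _, tail, data in context_graph.out_edges(entity, data=True):
                graph_facts.append(f"{entity} {data.get('relation', '')} {tail}")
    text = " [SEP] ".join([user_input, " ; ".join(graph_facts), related_info])
    return intent_model(text)  # e.g., "ask_price"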
As can be seen from fig. 5, compared with the embodiment corresponding to fig. 2, the flow 500 of the method for generating information in this embodiment highlights the step of, when new context information is detected, extracting entity information and relationship information from that new context information and adding them to the user context information map. The scheme described in this embodiment can therefore update the context information map in time, ensuring that it contains the latest entity information and/or relationship information in the user's context and thereby improving the accuracy of identifying the user's intent. Context information arising during the user's consultation can also be saved and retrieved more efficiently. In addition, by constructing a user context information map for each user, information related to that user can be obtained in a more targeted way. Compared with maintaining the user's context information manually, constructing the knowledge graph increases the degree of automation of context maintenance, saves labor cost, and reduces maintenance cost.
With further reference to fig. 6, as an implementation of the method shown in the above figures, the present application provides an embodiment of an apparatus for generating information, which corresponds to the method embodiment shown in fig. 2, and which may include the same or corresponding features as the method embodiment shown in fig. 2, in addition to the features described below. The device can be applied to various electronic equipment.
As shown in fig. 6, the apparatus 600 for generating information of the present embodiment includes: an acquisition unit 601, a first extraction unit 602, a second extraction unit 603, and a generation unit 604. Wherein the obtaining unit 601 is configured to obtain the user input information presented by the target page, and obtain context information of the user input information presented by the target page before presenting the user input information as previous context information; the first extraction unit 602 is configured to extract entity information from user input information; the second extraction unit 603 is configured to extract information related to the entity information as related information from information sets predetermined for the target user set and the target product set; the generating unit 604 is configured to generate reply information for the user input information based on the user input information, the previous context information, and the related information.
In this embodiment, the obtaining unit 601 of the apparatus 600 for generating information may obtain, through a wired connection manner or a wireless connection manner, the user input information presented by the target page from other electronic devices communicatively connected therewith or locally, and obtain, as the previous context information, the context information of the user input information presented by the target page before presenting the user input information.
The target page may be a page for presenting information input by the user. For example, the target page may be a page on which the user can consult, communicate, converse, etc. Through the target page, conversation can be carried out between two users, between the user and customer service personnel and between the user and the chat robot. Here, the chat robot may be software or hardware having a chat function. For example, the chat bot may be an application or application plug-in with chat functionality.
The user input information may be information input by a user.
In this embodiment, the first extraction unit 602 may extract entity information from the user input information acquired by the acquisition unit 601.
In this embodiment, the second extraction unit 603 may extract, as the related information, information related to the entity information extracted by the first extraction unit 602 from information sets predetermined for the target user set and the target product set.
The target user set may be a set of all or some of the users who have used certain software (e.g., shopping software) or a certain website.
The target product set may be a set of all or some of the products displayed by certain software (e.g., shopping software) or a certain website, or a set of all or some of the products displayed in a store (a virtual store) within the software or website.
The information set may include product information of all or some of the products in the target product set and user information of all or some of the users in the target user set.
The information related to the entity information may be various information related to the entity information in an information set.
In this embodiment, the generation unit 604 may generate reply information of the user input information acquired by the acquisition unit 601 based on the user input information acquired by the acquisition unit 601, the previous context information acquired by the acquisition unit 601, and the related information obtained by the second extraction unit 603. The reply information may be information for replying to the information input by the user.
In some optional implementations of this embodiment, the apparatus 600 further includes: the third extraction unit (not shown in the figure) is configured to extract information related to the user as user information from information sets predetermined for the target user set and the target product set in response to detection of the user performing a target operation with respect to the target page, and acquire status information of the user.
The target operation may be any operation performed by the user on the target page. As an example, the target operation may be the operation by which the user enters the target page (e.g., clicking a "consult" or "customer service" button).
The information related to the user may be any information in the predetermined information set that relates to the user. For example, the information related to the user may include, but is not limited to, at least one of: the user's basic registration information, user profile, purchase records, and order records (e.g., product information and attribute information of purchased products). When the information set is characterized by a knowledge graph (in this case the knowledge graph includes entity information characterizing the user), the information related to the user may also be entity information connected by an edge (representing a relationship between entities) to the entity information characterizing the user, entity information whose path length to the entity information characterizing the user in the knowledge graph is smaller than a preset threshold (for example, 5), entity information characterized by all child nodes of the entity information characterizing the user in the knowledge graph, or the like.
The state information may be used to characterize the state of the user. For example, the state information may include, but is not limited to, at least one of: geographical location information of the user, the time zone of the user, the acquisition date on which the state information of the user is acquired, the acquisition time at which the state information of the user is acquired, and the like.
In some optional implementations of this embodiment, the apparatus 600 further includes: the construction unit (not shown in the figure) is configured to construct a knowledge-graph as the user context information graph based on the user information, the state information and the previous context information.
In some optional implementations of this embodiment, the apparatus 600 further includes: a fourth extraction unit (not shown in the figure) is configured to, in response to detecting that the target page presents the context information of the user input information after presenting the user input information, extract the entity information and the relationship information from the context information presented after presenting the user input information, and add the extracted entity information and relationship information to the user context information graph.
It can be understood that, when the target page is detected to present context information of the user input information after the user input information has been presented, the execution subject can update the user context information map in time by adding the entity information and relationship information extracted from that newly presented context information to the map. This ensures that the map contains the latest entity information and/or relationship information in the user's context, which helps improve the accuracy of identifying the user's intent. Context information arising during the user's consultation can also be saved and retrieved more efficiently. In addition, by constructing a user context information map for each user, information related to that user can be obtained in a more targeted way. Compared with maintaining the user's context information manually, constructing the knowledge graph increases the degree of automation of context maintenance, saves labor cost, and reduces maintenance cost.
In some optional implementations of this embodiment, the generating unit 604 may include: the first generating module (not shown in the figure) is configured to generate reply information of the user input information based on the user input information, the user context information map and the related information.
In some optional implementations of this embodiment, the first extraction unit 602 may include: a parsing module (not shown in the figure) configured to parse the user input information to extract entity information from it; a first determining module (not shown in the figure) configured to determine the type of the entity information from a predetermined type set; and a second determining module (not shown in the figure) configured to determine at least one of the following for the user input information: subject-predicate-object information, language, voice, and sentence type. The voice may include, but is not limited to, active voice, passive voice, and the like. The sentence types may include, but are not limited to, declarative sentences, interrogative sentences, rhetorical questions, imperative sentences, and the like.
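For illustration only, the parsing module and the two determining modules could be approximated with the spaCy library as follows; the dependency-label and part-of-speech heuristics, and the sentence-type rules, are assumptions of this sketch rather than requirements of the present embodiment.

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English pipeline is installed

def parse_user_input(text: str):
    doc = nlp(text)
    # Entity information and its type, as produced by named entity recognition.
    entities = [(ent.text, ent.label_) for ent in doc.ents]
    # Subject-predicate-object information via dependency labels.
    svo = {
        "subjects": [t.text for t in doc if t.dep_ in ("nsubj", "nsubjpass")],
        "predicates": [t.text for t in doc if t.pos_ == "VERB"],
        "objects": [t.text for t in doc if t.dep_ in ("dobj", "pobj", "attr")],
    }
    language = doc.lang_  # language of the loaded pipeline, e.g. "en"
    # Voice: passive if a passive subject or passive auxiliary is present.
    voice = "passive" if any(t.dep_ in ("nsubjpass", "auxpass") for t in doc) else "active"
    # Crude sentence-type heuristic: question mark means interrogative,
    # a leading base-form verb means imperative, otherwise declarative.
    if text.strip().endswith("?"):
        sentence_type = "interrogative"
    elif len(doc) and doc[0].tag_ == "VB":
        sentence_type = "imperative"
    else:
        sentence_type = "declarative"
    return entities, svo, language, voice, sentence_type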
In some optional implementations of this embodiment, the generating unit 604 may include: a second generating module (not shown in the figure) configured to generate reply information for the user input information based on the user input information, the previous context information, the related information, the type, and the at least one item (namely, at least one of the subject-predicate-object information, language, voice, and sentence type).
In some optional implementations of this embodiment, the generating unit 604 may also include: a fusion module (not shown in the figure) is configured to fuse the user input information, the previous context information and the related information to obtain fused information; an identification module (not shown in the figures) configured to identify an intent of a user inputting the user input information based on the user input information, the previous context information, and the related information; a third generating module (not shown in the figure) is configured to generate reply information of the user input information based on the intention and the fused information.
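A minimal sketch of how the fusion module, identification module, and third generating module could fit together is shown below for illustration; intent_model and reply_model are hypothetical pre-trained components, and the separator-based concatenation is an assumption of the sketch, not part of this disclosure.

def generate_reply(user_input, previous_context, related_info,
                   intent_model, reply_model):
    # Fusion: combine the user input information, the previous context
    # information, and the related information into fused information.
    fused = " [SEP] ".join([user_input, previous_context, related_info])
    # Identification: obtain the intent of the user who input the information.
    intent = intent_model(fused)  # e.g., "ask_price"
    # Generation: produce reply information based on the intent and the
    # fused information.
    return reply_model(intent=intent, context=fused)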
In some optional implementations of this embodiment, the predetermined information set is a pre-constructed knowledge graph, the pre-constructed knowledge graph includes entity information characterizing entities and relationship information characterizing relationships between the entities, the entity information included in the pre-constructed knowledge graph characterizes information related to target users in the target user set or target products in the target product set, and the relationship information included in the pre-constructed knowledge graph characterizes any one of the following items: the relationship between the target users in the target user set and the target users, the relationship between the target products in the target product set and the target products, and the relationship between the target users in the target user set and the target products in the target product set.
It can be understood that, by constructing a knowledge graph to store the relationships between products and users, between products, and between users, information related to the user input information can be queried quickly and simply, compared with existing methods for mining user intent. This provides the execution subject with more knowledge about the user during the chat, helping it understand the user's intent more accurately and provide more accurate service. Compared with maintaining the user's context information manually, constructing the knowledge graph increases the degree of automation of context maintenance, saves labor cost, and reduces maintenance cost.
In the apparatus provided by the above embodiment of the application, the obtaining unit 601 obtains the user input information presented by the target page and obtains, as previous context information, the context information of the user input information presented by the target page before the user input information; the first extraction unit 602 extracts entity information from the user input information; the second extraction unit 603 extracts, from information sets predetermined for the target user set and the target product set, information related to the entity information as related information; and the generating unit 604 generates reply information for the user input information based on the user input information, the previous context information, and the related information. Intent recognition on the user input information is thus performed based on the predetermined information set and the context information, and the reply information is generated accordingly. This enriches the ways in which information can be generated, reduces the cost of maintaining the chat robot, helps improve the efficiency of generating reply information for user input information, and helps improve the accuracy of recognizing the user's intent.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use in implementing a server according to embodiments of the present application. The server shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by a Central Processing Unit (CPU)701, performs the above-described functions defined in the method of the present application.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Python, Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a first extraction unit, a second extraction unit, and a generation unit. The names of the units do not constitute a limitation to the units themselves in some cases, and for example, the acquiring unit may also be described as a "unit that acquires user input information presented by a target page, and acquires context information of the user input information presented by the target page before presenting the user input information as previous context information".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring user input information presented by a target page, and acquiring context information of the user input information presented by the target page before the user input information is presented as previous context information; extracting entity information from user input information; extracting information related to the entity information from information sets predetermined for the target user set and the target product set as related information; generating reply information for the user input information based on the user input information, the previous context information, and the related information.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (18)

1. A method for generating information, comprising:
acquiring user input information presented by a target page, and acquiring context information of the user input information presented by the target page before the user input information is presented as previous context information;
extracting entity information from the user input information;
extracting information related to the entity information from an information set predetermined for a target user set and a target product set as related information, wherein the related information comprises at least one of the following items: the relationship between a target user in the target user set and a target user, the relationship between a target product in the target product set and a target product, and the relationship between a target user in the target user set and a target product in the target product set;
generating reply information of the user input information based on the user input information, a user context information map and the related information, wherein the user context information map is constructed based on the prior context information and then updated based on the context information presented after the user input information is presented, and the context information map is updated in a manner that entity information and relationship information in the context information presented after the user input information is presented are added to the user context information map;
wherein the user context information graph is constructed based on state information of a user, the state information of the user comprising at least one of: geographical location information of the user, a time zone of the user, an acquisition date of acquiring the state information of the user, and an acquisition time of acquiring the state information of the user.
2. The method of claim 1, wherein prior to the obtaining user input information for the target page presentation, the method further comprises:
in response to detecting that a user performs a target operation on the target page, extracting information related to the user from information sets predetermined for a target user set and a target product set as user information, and acquiring state information of the user.
3. The method of claim 2, wherein after the obtaining context information of the user input information, presented by the target page prior to presenting the user input information, as prior context information, the method further comprises:
constructing a knowledge graph as a user context information graph based on the user information, the state information, and the prior context information.
4. The method of claim 3, wherein the method further comprises:
in response to detecting that the target page presents context information of the user input information after presenting the user input information, extracting entity information and relationship information from the context information presented after presenting the user input information, and adding the extracted entity information and relationship information to the user context information graph.
5. The method of any of claims 1-4, wherein said extracting entity information from said user input information comprises:
parsing the user input information to extract entity information from the user input information;
determining the type of the entity information from a predetermined type set;
determining at least one of the following for the user input information: subject-predicate-object information, language, voice, and sentence type.
6. The method of claim 5, wherein the generating reply information for the user input information based on the user input information, the previous context information, and the related information comprises:
generating reply information for the user input information based on the user input information, the previous context information, the related information, the type, and the at least one item.
7. The method of any of claims 1-4, wherein the generating reply information for the user input information based on the user input information, the previous context information, and the related information comprises:
fusing the user input information, the previous context information and the related information to obtain fused information;
identifying an intent of a user inputting the user input information based on the user input information, the prior context information, and the relevant information;
and generating reply information of the user input information based on the intention and the fused information.
8. The method of any of claims 1-4, 6, wherein the predetermined set of information is a pre-built knowledge-graph, the pre-built knowledge-graph comprising entity information characterizing entities and relationship information characterizing relationships between entities, the pre-built knowledge-graph comprising entity information characterizing information related to a target user of the set of target users or a target product of the set of target products, the pre-built knowledge-graph comprising relationship information characterizing any of: the relationship between the target users in the target user set and the target users, the relationship between the target products in the target product set and the target products, and the relationship between the target users in the target user set and the target products in the target product set.
9. An apparatus for generating information, comprising:
an acquisition unit configured to acquire user input information presented by a target page and acquire context information of the user input information presented by the target page before presenting the user input information as previous context information;
a first extraction unit configured to extract entity information from the user input information;
a second extraction unit configured to extract, as related information, information related to the entity information from information sets predetermined for a target user set and a target product set, wherein the related information includes at least one of: the relationship between a target user in the target user set and a target user, the relationship between a target product in the target product set and a target product, and the relationship between a target user in the target user set and a target product in the target product set;
a generating unit configured to generate reply information of the user input information based on the user input information, a user context information map, and the related information, wherein the user context information map is constructed based on the previous context information and then updated based on context information presented after the user input information is presented, and the context information map is updated by adding entity information and relationship information in context information presented after the user input information is presented to the user context information map;
wherein the user context information graph is constructed based on state information of a user, the state information of the user comprising at least one of: geographical location information of the user, a time zone of the user, an acquisition date of acquiring the state information of the user, and an acquisition time of acquiring the state information of the user.
10. The apparatus of claim 9, wherein the apparatus further comprises:
a third extraction unit configured to, in response to detecting that a user performs a target operation with respect to the target page, extract information related to the user as user information from information sets predetermined for a target user set and a target product set, and acquire status information of the user.
11. The apparatus of claim 10, wherein the apparatus further comprises:
a construction unit configured to construct a knowledge graph as a user context information graph based on the user information, the state information, and the previous context information.
12. The apparatus of claim 11, wherein the apparatus further comprises:
a fourth extraction unit configured to, in response to detecting that the target page presents context information of the user input information after presenting the user input information, extract entity information and relationship information from the context information presented after presenting the user input information, and add the extracted entity information and relationship information to the user context information map.
13. The apparatus according to one of claims 9-12, wherein the first extraction unit comprises:
a parsing module configured to parse the user input information to extract entity information from the user input information;
a first determining module configured to determine a type of the entity information from a predetermined set of types;
a second determination module configured to determine at least one of the following for the user input information: subject-predicate-object information, language, voice, and sentence type.
14. The apparatus of claim 13, wherein the generating unit comprises:
a second generation module configured to generate reply information for the user input information based on the user input information, the previous context information, the related information, the type, and the at least one item.
15. The apparatus according to one of claims 9-12, wherein the generating unit comprises:
a fusion module configured to fuse the user input information, the previous context information, and the related information to obtain fused information;
an identification module configured to identify an intent of a user inputting the user input information based on the user input information, the prior context information, and the relevant information;
a third generating module configured to generate reply information to the user input information based on the intent and the fused information.
16. The apparatus of one of claims 9-12, 14, wherein the predetermined set of information is a pre-built knowledge-graph, the pre-built knowledge-graph comprising entity information characterizing entities and relationship information characterizing relationships between entities, the pre-built knowledge-graph comprising entity information characterizing information related to a target user of the set of target users or a target product of the set of target products, the pre-built knowledge-graph comprising relationship information characterizing any one of: the relationship between the target users in the target user set and the target users, the relationship between the target products in the target product set and the target products, and the relationship between the target users in the target user set and the target products in the target product set.
17. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
18. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-8.
CN201811382265.7A 2018-11-20 2018-11-20 Method and apparatus for generating information Active CN109522399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811382265.7A CN109522399B (en) 2018-11-20 2018-11-20 Method and apparatus for generating information

Publications (2)

Publication Number Publication Date
CN109522399A CN109522399A (en) 2019-03-26
CN109522399B true CN109522399B (en) 2022-08-12

Family

ID=65776834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811382265.7A Active CN109522399B (en) 2018-11-20 2018-11-20 Method and apparatus for generating information

Country Status (1)

Country Link
CN (1) CN109522399B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977216A (en) * 2019-04-01 2019-07-05 苏州思必驰信息科技有限公司 Dialogue recommended method and system based on scene
CN110347792B (en) * 2019-06-25 2022-12-20 腾讯科技(深圳)有限公司 Dialog generation method and device, storage medium and electronic equipment
CN112560508A (en) * 2020-12-22 2021-03-26 中国联合网络通信集团有限公司 Conversation processing method, device and equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105068661A (en) * 2015-09-07 2015-11-18 百度在线网络技术(北京)有限公司 Man-machine interaction method and system based on artificial intelligence
CN108227932A (en) * 2018-01-26 2018-06-29 上海智臻智能网络科技股份有限公司 Interaction is intended to determine method and device, computer equipment and storage medium
CN108763192A (en) * 2018-04-18 2018-11-06 达而观信息科技(上海)有限公司 Entity relation extraction method and device for text-processing
CN108804698A (en) * 2018-03-30 2018-11-13 深圳狗尾草智能科技有限公司 Man-machine interaction method, system, medium based on personage IP and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2702937C (en) * 2007-10-17 2014-10-07 Neil S. Roseman Nlp-based content recommender

Also Published As

Publication number Publication date
CN109522399A (en) 2019-03-26


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant