CN111984758A - Response information processing method, intelligent device and storage medium - Google Patents

Response information processing method, intelligent device and storage medium

Info

Publication number
CN111984758A
Authority
CN
China
Prior art keywords
information
emotion
characteristic information
input information
word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010608166.7A
Other languages
Chinese (zh)
Inventor
赵建宇
李让
胡长建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202010608166.7A
Publication of CN111984758A
Legal status: Pending

Classifications

    • G06F16/3329 Natural language query formulation or dialogue systems
    • G06F16/3335 Syntactic pre-processing, e.g. stopword elimination, stemming
    • G06F16/3343 Query execution using phonetics
    • G06F16/35 Clustering; Classification
    • G06N3/008 Artificial life, i.e. computing arrangements simulating life, based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 Learning methods

Abstract

The invention discloses a response information processing method, an intelligent device, and a storage medium. The intelligent device first receives input information; then identifies first feature information and second feature information of the input information by using an emotion classifier, where the first feature information characterizes the emotion contained in the input information and the second feature information is the word information in the input information associated with the first feature information; and finally constructs and outputs a response matching the input information according to the first feature information and the second feature information.

Description

Response information processing method, intelligent device and storage medium
Technical Field
The present invention relates to the field of voice processing technologies, and in particular, to a response information processing method, an intelligent device, and a storage medium.
Background
In intelligent customer service scenarios, replying to the user according to the user's recognized emotion can improve the machine's capacity for empathy and thereby improve the user experience. The common practice at present is to provide a fixed-pattern reply for each particular emotion. For example, when the user is upset because of a problem, a comforting reply is generated; when the user is happy because the problem has been solved, a "you're welcome" reply is generated. Although this approach can produce a corresponding reply, it cannot produce a more targeted reply that addresses the reason the user's emotion arose, so it lacks empathy and degrades the user experience.
Disclosure of Invention
The embodiments of the invention provide a response information processing method, an intelligent device, and a storage medium to solve the problem that existing intelligent voice response lacks the capability for empathy.
According to a first aspect of the present invention, there is provided a response information processing method, including: receiving input information; identifying first feature information and second feature information of the input information by using an emotion classifier, where the first feature information is used to characterize the emotion contained in the input information and the second feature information is the word information in the input information associated with the first feature information; and constructing and outputting a response matching the input information according to the first feature information and the second feature information.
According to one embodiment of the present invention, identifying the first feature information and the second feature information of the input information by using an emotion classifier includes: performing emotion classification with the input information as the input of the emotion classifier, and outputting the first feature information together with the word information of the input information that has the largest attention weight during the emotion classification; and determining that word information as the second feature information.
According to an embodiment of the invention, the method further includes: collecting a set of training corpora for emotion classification, where each corpus item carries a corresponding emotion label; and training the emotion classifier on the training corpora with a hierarchical LSTM algorithm with attention.
According to an embodiment of the present invention, constructing and outputting a response matching the input information according to the first feature information and the second feature information includes: acquiring the standard response corresponding to the first feature information; and linguistically combining the standard response with the second feature information to form and output the response matching the input information.
According to an embodiment of the present invention, linguistically combining the standard response with the second feature information to form the response matching the input information includes: determining a concatenation conjunction between the standard response and the second feature information based on a specific rule; and combining the standard response with the second feature information by using the concatenation conjunction to form the response matching the input information.
According to a second aspect of the present invention, there is also provided an intelligent device, including: a receiving module configured to receive input information; a recognition module configured to identify first feature information and second feature information of the input information by using an emotion classifier, where the first feature information characterizes the emotion contained in the input information and the second feature information is the word information in the input information associated with the first feature information; and a construction module configured to construct and output a response matching the input information according to the first feature information and the second feature information.
According to an embodiment of the present invention, the recognition module is specifically configured to perform emotion classification with the input information as the input of the emotion classifier, to output the first feature information together with the word information of the input information that has the largest attention weight during the emotion classification, and to determine that word information as the second feature information.
According to an embodiment of the present invention, the intelligent device further includes an emotion classifier training module configured to collect a set of training corpora for emotion classification, where each corpus item carries a corresponding emotion label, and to train the emotion classifier on the training corpora with a hierarchical LSTM algorithm with attention.
According to an embodiment of the present invention, the construction module is specifically configured to acquire the standard response corresponding to the first feature information, and to linguistically combine the standard response with the second feature information to form and output the response matching the input information.
According to an embodiment of the present invention, the construction module is further configured to determine a concatenation conjunction between the standard response and the second feature information based on a specific rule, and to combine the standard response with the second feature information by using the concatenation conjunction to form the response matching the input information.
According to a third aspect of the present invention, there is also provided a computer-readable storage medium comprising a set of computer-executable instructions which, when executed, are adapted to perform any of the above-mentioned response information processing methods.
According to the response information processing method, the intelligent device, and the storage medium, the intelligent device first receives input information; then identifies first feature information and second feature information of the input information by using an emotion classifier, where the first feature information characterizes the emotion contained in the input information and the second feature information is the word information in the input information associated with the first feature information; and finally constructs and outputs a response matching the input information according to the first feature information and the second feature information. In this way, the emotion contained in the input information and the word information that gave rise to that emotion are identified from the input information, and a response imbued with emotion is then generated from them so as to reply to the input information in a targeted manner, which improves the empathy of the intelligent device in processing response information and effectively improves the user experience.
It is to be understood that the teachings of the present invention need not achieve all of the above-described benefits, but rather that specific embodiments may achieve specific technical results, and that other embodiments of the present invention may achieve benefits not mentioned above.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
FIG. 1 is a schematic diagram of a first implementation flow of the response information processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a second implementation flow of the response information processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a third implementation flow of the response information processing method according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an intelligent device according to an embodiment of the present invention.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given only to enable those skilled in the art to better understand and to implement the present invention, and do not limit the scope of the present invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The technical solution of the present invention is further elaborated below with reference to the drawings and the specific embodiments.
FIG. 1 is a schematic diagram of a first implementation flow of the response information processing method according to an embodiment of the present invention.
Referring to FIG. 1, an embodiment of the present invention provides a response information processing method, which includes: operation 101, receiving input information; operation 102, identifying first feature information and second feature information of the input information by using an emotion classifier; and operation 103, constructing and outputting a response matching the input information according to the first feature information and the second feature information.
In operation 101, the smart device receives input information, where the input information may be voice information from a user or query instruction information automatically generated by the smart device in response to a user trigger.
Here, the intelligent device according to the embodiment of the present invention may be an intelligent voice device with voice interaction or voice recognition capability, whether already developed or developed in the future, a robot, or an intelligent customer service system.
In operation 102, the first feature information is used to characterize the emotion contained in the input information, and the second feature information is the word information in the input information associated with the first feature information, that is, the word information that gave rise to the corresponding emotion. Specifically, identifying the first feature information and the second feature information of the input information by using the emotion classifier includes: performing emotion classification with the input information as the input of the emotion classifier, outputting the first feature information together with the word information that has the largest attention weight during the emotion classification, and determining that word information as the second feature information.
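As an illustration only, the following Python sketch shows one way operation 102 could read the first feature information and the second feature information off the classifier's outputs, assuming the emotion classifier exposes one probability per emotion label and one attention weight per input word. The function names and the toy numbers in the usage example are assumptions for illustration and are not prescribed by this description.

    def argmax(values):
        # Index of the largest value in a list.
        return max(range(len(values)), key=lambda i: values[i])

    def identify_features(words, emotion_probs, emotion_labels, attention_weights):
        # First feature information: the emotion with the highest classifier probability.
        emotion = emotion_labels[argmax(emotion_probs)]
        # Second feature information: the input word carrying the largest attention weight.
        key_word = words[argmax(attention_weights)]
        return emotion, key_word

    # Toy usage (numbers invented for illustration):
    words = "my file has not been saved".split()
    emotion, key_word = identify_features(
        words,
        emotion_probs=[0.05, 0.90, 0.05],                 # e.g. [happy, sad, neutral]
        emotion_labels=["happy", "sad", "neutral"],
        attention_weights=[0.05, 0.15, 0.05, 0.20, 0.15, 0.40])
    print(emotion, key_word)                              # -> sad saved

In this sketch the second feature information is a single token; the examples below treat it as a short phrase, which a classifier could return by keeping the highest-weighted span instead of a single word.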
In one example, the intelligent device receives the input information "My computer suddenly shut down, but my file has not been saved. I am so sad." Through recognition by the emotion classifier, the intelligent device obtains the emotion contained in the input information, "sad", as the first feature information, and the word information with the largest attention weight during emotion classification, "my file has not been saved", as the second feature information.
In another example, the intelligent device receives the input information "My file has been restored in the way you told me. Thanks a lot." The intelligent device then recognizes, using the emotion classifier, the emotion of the input information, "thankful", as the first feature information, and the word information with the largest attention weight during emotion classification, "my file has been restored", as the second feature information.
In operation 103, the intelligent device constructs and outputs a response matching the input information according to the first feature information and the second feature information.
Specifically, the intelligent device first acquires the standard response corresponding to the first feature information, and then linguistically combines the standard response with the second feature information to form and output the response matching the input information.
For example, define the input information as q, the emotion contained in q as identified by the emotion classifier as e, and the word information that gave rise to the emotion as r; the standard response corresponding to emotion e is t, so t and r are combined linguistically to form and output the response a matching q.
Thus, according to the embodiment of the present invention, the emotion contained in the input information and the word information that gave rise to that emotion are identified from the input information, and a response imbued with emotion is then generated from them so as to reply to the input information in a targeted manner, which improves the empathy of the intelligent device in processing response information and effectively improves the user experience.
FIG. 2 is a schematic diagram of a second implementation flow of the response information processing method according to an embodiment of the present invention.
Referring to FIG. 2, the response information processing method according to the embodiment of the present invention includes: operation 201, collecting a set of training corpora for emotion classification; operation 202, training an emotion classifier on the training corpora with a hierarchical LSTM algorithm with attention; operation 203, receiving input information; operation 204, identifying first feature information and second feature information of the input information by using the emotion classifier; and operation 205, constructing and outputting a response matching the input information according to the first feature information and the second feature information.
In the embodiment of the invention, the emotion classifier is first trained and generated, by the intelligent device or by an external device of the intelligent device, based on operations 201-202, laying the foundation for the subsequent operation 204 of identifying the first feature information and the second feature information of the input information by using the emotion classifier.
Each corpus item carries a corresponding emotion label. Specifically, in operations 201-202, a batch of training corpora Q = {q1, q2, ..., qn} for emotion classification is first constructed, where each emotion corpus item qi has a corresponding emotion label ei and i takes positive integer values up to the corpus size n; of course, to ensure the recognition accuracy of the emotion classifier, the larger n is, the better, in principle. The classifier is then trained on the input corpus Q with a hierarchical LSTM algorithm with attention, yielding the emotion classifier G.
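The following PyTorch sketch illustrates operations 201-202 under stated assumptions: it trains a single-level BiLSTM with word-level attention on a toy batch, whereas the description above calls for a hierarchical (word- and sentence-level) LSTM; the hyperparameters, the random stand-in corpus, and all names are placeholders chosen only to show the shape of the training procedure.

    import torch
    import torch.nn as nn

    class AttentiveLSTMClassifier(nn.Module):
        def __init__(self, vocab_size, embed_dim, hidden_dim, num_emotions):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
            self.att = nn.Linear(2 * hidden_dim, 1)   # word-level attention scores
            self.out = nn.Linear(2 * hidden_dim, num_emotions)

        def forward(self, token_ids):
            h, _ = self.lstm(self.embed(token_ids))                      # (B, T, 2H)
            weights = torch.softmax(self.att(h).squeeze(-1), dim=1)      # (B, T) attention
            context = torch.bmm(weights.unsqueeze(1), h).squeeze(1)      # (B, 2H) weighted sum
            return self.out(context), weights                            # logits and attention

    # Toy training loop over a random stand-in for the labeled corpus Q = {(qi, ei)}.
    model = AttentiveLSTMClassifier(vocab_size=5000, embed_dim=64, hidden_dim=128, num_emotions=3)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    token_ids = torch.randint(1, 5000, (8, 12))   # 8 utterances of 12 token ids each
    labels = torch.randint(0, 3, (8,))            # emotion labels ei
    for _ in range(3):
        logits, attention = model(token_ids)
        loss = criterion(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

At inference time the returned attention vector is what operation 204 inspects to pick the word information with the largest weight.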
In operation 203, the smart device receives input information, where the input information may be voice information from a user or query instruction information automatically generated by the smart device in response to a user trigger.
Here, the intelligent device according to the embodiment of the present invention may be an intelligent voice device with voice interaction or voice recognition capability, whether already developed or developed in the future, a robot, or an intelligent customer service system.
In operation 204, the first feature information is used to characterize the emotion contained in the input information, and the second feature information is the word information in the input information associated with the first feature information, that is, the word information that gave rise to the corresponding emotion.
Specifically, the intelligent device performs emotion classification with the input information as the input of the emotion classifier, outputs the first feature information together with the word information that has the largest attention weight during the emotion classification, and determines that word information as the second feature information.
In one example, the intelligent device receives the input information "My computer suddenly shut down, but my file has not been saved. I am so sad." Through recognition by the emotion classifier, the intelligent device obtains the emotion contained in the input information, "sad", as the first feature information, and the word information with the largest attention weight during emotion classification, "my file has not been saved", as the second feature information.
In another example, the intelligent device receives the input information "My file has been restored in the way you told me. Thanks a lot." The intelligent device then recognizes, using the emotion classifier, the emotion of the input information, "thankful", as the first feature information, and the word information with the largest attention weight during emotion classification, "my file has been restored", as the second feature information.
In operation 205, the intelligent device constructs and outputs a response matching the input information according to the first feature information and the second feature information.
Specifically, the intelligent device first acquires the standard response corresponding to the first feature information, and then linguistically combines the standard response with the second feature information to form and output the response matching the input information.
For example, define the input information as q, the emotion contained in q as identified by the emotion classifier as e, and the word information with the largest attention weight during emotion classification in q, that is, the word information that gave rise to the emotion, as r; the standard response corresponding to emotion e is t, so t and r are combined according to rules to form and output the response a matching q.
In this way, the emotion classifier is trained on the constructed batch of emotion-classification corpora with a hierarchical LSTM algorithm with attention; on the basis of the trained emotion classifier, the emotion contained in the input information and the word information that gave rise to that emotion are then obtained from the input information, and a response imbued with emotion is generated from them so as to reply to the input information in a targeted manner, which improves the empathy of the intelligent device in processing response information and effectively improves the user experience.
FIG. 3 is a schematic diagram of a third implementation flow of the response information processing method according to an embodiment of the present invention.
Referring to FIG. 3, the response information processing method according to the embodiment of the present invention includes: operation 301, receiving input information; operation 302, identifying first feature information and second feature information of the input information by using an emotion classifier; operation 303, acquiring the standard response corresponding to the first feature information; and operation 304, linguistically combining the standard response with the second feature information to form and output a response matching the input information.
In operation 301, the smart device receives input information, where the input information may be voice information from a user or query instruction information automatically generated by the smart device in response to a user trigger.
Here, the intelligent device according to the embodiment of the present invention may be an intelligent voice device with voice interaction or voice recognition capability, whether already developed or developed in the future, a robot, or an intelligent customer service system.
In operation 302, the first feature information is used to characterize the emotion contained in the input information, and the second feature information is the word information in the input information associated with the first feature information, that is, the word information that gave rise to the corresponding emotion.
Specifically, identifying the first feature information and the second feature information of the input information by using the emotion classifier includes: performing emotion classification with the input information as the input of the emotion classifier, outputting the first feature information together with the word information that has the largest attention weight during the emotion classification, and determining that word information as the second feature information.
In operation 303, since corresponding standard responses are pre-stored in the intelligent device for different emotions, the standard response corresponding to the first feature information, which characterizes the emotion, can be obtained directly from the response library.
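A minimal sketch of such a lookup follows, with the dictionary contents mirroring the examples given later in this description; the emotion label "thankful", the fallback phrase, and the function name are assumptions for illustration.

    STANDARD_RESPONSES = {
        "sad": "Don't worry",
        "thankful": "It's my pleasure",
    }

    def get_standard_response(emotion):
        # Fall back to a neutral acknowledgement for emotions without an entry.
        return STANDARD_RESPONSES.get(emotion, "I see")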
In operation 304, the intelligent device first determines a concatenation conjunction between the standard response and the second feature information based on a specific rule, and then combines the standard response with the second feature information by using the concatenation conjunction to form the response matching the input information.
For example, define the input information as q, the emotion contained in q as identified by the emotion classifier as e, and the word information that gave rise to the emotion as r; the standard response corresponding to emotion e is t. A concatenation conjunction h between t and r is determined based on a specific rule, and t and r are combined linguistically using h to form and output the response a matching q.
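The rule for choosing h is left open by this description; the sketch below assumes, purely for illustration, a small table that maps each emotion to a conjunction and then splices t, h, and r in order.

    CONJUNCTION_RULES = {
        "sad": "about",        # e.g. "Don't worry" + "about" + "the unsaved file"
        "thankful": "to",      # e.g. "It's my pleasure" + "to" + "help you ..."
    }

    def build_response(emotion, standard_response, word_info):
        # Splice the standard response, the rule-selected conjunction, and the word info.
        conjunction = CONJUNCTION_RULES.get(emotion, "")
        parts = [standard_response, conjunction, word_info]
        return " ".join(p for p in parts if p)

    print(build_response("sad", "Don't worry", "the unsaved file"))
    # -> Don't worry about the unsaved file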
In one example, the intelligent device receives the input information "My computer suddenly shut down, but my file has not been saved. I am so sad." Through recognition by the emotion classifier, the intelligent device obtains the emotion contained in the input information, "sad", as the first feature information, and the word information with the largest attention weight during emotion classification, "my file has not been saved", as the second feature information. The standard response corresponding to "sad" is "Don't worry", and the concatenation conjunction between "Don't worry" and "my file has not been saved" is "about", so the two are combined to form the response "Don't worry about the unsaved file".
In another example, the intelligent device receives the input information "My file has been restored in the way you told me. Thanks a lot." The intelligent device then recognizes, using the emotion classifier, the emotion of the input information, "thankful", as the first feature information, and the word information with the largest attention weight during emotion classification, "my file has been restored", as the second feature information. The standard response corresponding to "thankful" is "It's my pleasure", and the concatenation conjunction between "It's my pleasure" and "my file has been restored" is "to", so the two are combined to form the response "It's my pleasure to help you with your file restoration".
In this way, a response imbued with emotion is generated from the emotion and from the word information that gave rise to it, and the input information is replied to in a targeted manner, which improves the empathy of the intelligent device in processing response information and effectively improves the user experience.
Similarly, based on the above response information processing method, an embodiment of the present invention further provides a computer-readable storage medium storing a program which, when executed by a processor, causes the processor to perform at least the following operations: operation 101, receiving input information; operation 102, identifying first feature information and second feature information of the input information by using an emotion classifier; and operation 103, constructing and outputting a response matching the input information according to the first feature information and the second feature information.
Further, based on the above response information processing method, an embodiment of the present invention also provides an intelligent device. As shown in FIG. 4, the intelligent device 40 includes: a receiving module 401 configured to receive input information; a recognition module 402 configured to identify first feature information and second feature information of the input information by using an emotion classifier, where the first feature information characterizes the emotion contained in the input information and the second feature information is the word information in the input information associated with the first feature information; and a construction module 403 configured to construct and output a response matching the input information according to the first feature information and the second feature information.
According to an embodiment of the present invention, the recognition module 402 is specifically configured to perform emotion classification with the input information as the input of the emotion classifier, to output the first feature information together with the word information of the input information that has the largest attention weight during the emotion classification, and to determine that word information as the second feature information.
According to an embodiment of the present invention, as shown in FIG. 4, the intelligent device 40 further includes an emotion classifier training module 404 configured to collect a set of training corpora for emotion classification, where each corpus item carries a corresponding emotion label, and to train the emotion classifier on the training corpora with a hierarchical LSTM algorithm with attention.
According to an embodiment of the present invention, the construction module 403 is specifically configured to acquire the standard response corresponding to the first feature information, and to linguistically combine the standard response with the second feature information to form and output the response matching the input information.
According to an embodiment of the present invention, the construction module 403 is further configured to determine a concatenation conjunction between the standard response and the second feature information based on a specific rule, and to combine the standard response with the second feature information by using the concatenation conjunction to form the response matching the input information.
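Purely as an illustration of how the modules of the intelligent device 40 might be composed, the following sketch wires a classifier callable (recognition module 402) and a response builder (construction module 403) behind a receiving entry point (receiving module 401); the class and method names are assumptions, not part of the claimed device.

    class IntelligentDevice:
        def __init__(self, classifier, response_builder):
            self.classifier = classifier                # recognition module 402
            self.response_builder = response_builder    # construction module 403

        def receive(self, input_information):           # receiving module 401
            emotion, word_info = self.classifier(input_information)
            return self.response_builder(emotion, word_info)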
Here, it should be noted that the above description of the intelligent device embodiment is similar to the description of the method embodiments shown in FIGS. 1 to 3 and has similar beneficial effects, and is therefore not repeated. For technical details not disclosed in the intelligent device embodiment of the present invention, please refer to the description of the method embodiments shown in FIGS. 1 to 3; for brevity, they are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and there may be other divisions in actual implementation, for example: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the couplings, direct couplings, or communication connections between the components shown or discussed may be implemented through interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical, or of other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments; and the aforementioned storage medium includes various media that can store program code, such as a removable memory device, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as an independent product. Based on such understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that can readily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A method of response information processing, the method comprising:
receiving input information;
identifying first feature information and second feature information of the input information by using an emotion classifier, wherein the first feature information is used to characterize an emotion contained in the input information, and the second feature information is word information in the input information that is associated with the first feature information;
and constructing and outputting a response matching the input information according to the first feature information and the second feature information.
2. The method of claim 1, wherein identifying the first feature information and the second feature information of the input information using an emotion classifier comprises:
performing emotion classification with the input information as the input of the emotion classifier, and outputting the first feature information and the word information with the largest attention weight during the emotion classification;
and determining the word information as the second feature information.
3. The method of claim 1, further comprising:
collecting a series of training corpora for emotion classification, wherein each corpus item carries a corresponding emotion label;
and training an emotion classifier on the training corpora with a hierarchical LSTM algorithm with attention.
4. The method according to any one of claims 1 to 3, wherein constructing and outputting a response matching the input information according to the first feature information and the second feature information comprises:
acquiring a standard response corresponding to the first feature information;
and linguistically combining the standard response with the second feature information to form and output the response matching the input information.
5. The method of claim 4, wherein linguistically combining the standard response with the second feature information to form the response matching the input information comprises:
determining a concatenation conjunction between the standard response and the second feature information based on a specific rule;
and combining the standard response with the second feature information by using the concatenation conjunction to form the response matching the input information.
6. An intelligent device, comprising:
a receiving module configured to receive input information;
a recognition module configured to identify first feature information and second feature information of the input information by using an emotion classifier, wherein the first feature information is used to characterize an emotion contained in the input information, and the second feature information is word information in the input information that is associated with the first feature information;
and a construction module configured to construct and output a response matching the input information according to the first feature information and the second feature information.
7. The intelligent device of claim 6, wherein
the recognition module is specifically configured to perform emotion classification with the input information as the input of the emotion classifier, to output the first feature information and the word information with the largest attention weight during the emotion classification, and to determine the word information as the second feature information.
8. The intelligent device of claim 6, further comprising:
an emotion classifier training module configured to collect a series of training corpora for emotion classification, wherein each corpus item carries a corresponding emotion label, and to train the emotion classifier on the training corpora with a hierarchical LSTM algorithm with attention.
9. The intelligent device of any one of claims 6 to 8, wherein
the construction module is specifically configured to acquire a standard response corresponding to the first feature information, and to linguistically combine the standard response with the second feature information to form and output the response matching the input information.
10. A computer-readable storage medium comprising a set of computer-executable instructions which, when executed, perform the response information processing method of any one of claims 1 to 5.
CN202010608166.7A 2020-06-29 2020-06-29 Response information processing method, intelligent device and storage medium Pending CN111984758A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010608166.7A CN111984758A (en) 2020-06-29 2020-06-29 Response information processing method, intelligent device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010608166.7A CN111984758A (en) 2020-06-29 2020-06-29 Response information processing method, intelligent device and storage medium

Publications (1)

Publication Number Publication Date
CN111984758A true CN111984758A (en) 2020-11-24

Family

ID=73437628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010608166.7A Pending CN111984758A (en) 2020-06-29 2020-06-29 Response information processing method, intelligent device and storage medium

Country Status (1)

Country Link
CN (1) CN111984758A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105893344A (en) * 2016-03-28 2016-08-24 北京京东尚科信息技术有限公司 User semantic sentiment analysis-based response method and device
CN110032742A (en) * 2017-11-28 2019-07-19 丰田自动车株式会社 Respond sentence generating device, method and storage medium and voice interactive system
CN110110169A (en) * 2018-01-26 2019-08-09 上海智臻智能网络科技股份有限公司 Man-machine interaction method and human-computer interaction device
CN111078837A (en) * 2019-12-11 2020-04-28 腾讯科技(深圳)有限公司 Intelligent question and answer information processing method, electronic equipment and computer readable storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination