WO2020155619A1 - Emotional machine chat method and apparatus, computer device and storage medium - Google Patents

Emotional machine chat method and apparatus, computer device and storage medium

Info

Publication number
WO2020155619A1
Authority
WO
WIPO (PCT)
Prior art keywords
response
model
chat
sentence
chat sentence
Prior art date
Application number
PCT/CN2019/103516
Other languages
English (en)
Chinese (zh)
Inventor
吴壮伟
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2020155619A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to an emotional machine chat method, device, computer equipment, and storage medium.
  • In the field of artificial intelligence, chatbots have gradually emerged.
  • A chatbot is a program used to simulate human conversation or chat. Some chatbots serve practical purposes, such as customer service and consultation, while others are social robots used simply to chat with people.
  • Some chatbots are equipped with a natural language processing system, but most merely extract keywords from the input sentence and then retrieve an answer from a database based on those keywords.
  • The answers of these chatbots are usually rigid and unemotional, and the chat pattern is always the same, so people have little interest in chatting with them and the utilization rate of chatbots is low.
  • This application provides an emotional machine chat method, device, computer equipment, and storage medium to solve the problem that chatbots answer the same question in the same way and without emotion.
  • An emotional machine chat method includes the following steps: acquiring a chat sentence input by the user; inputting the chat sentence into a preset response generation model, and obtaining an initial response output by the response generation model in response to the chat sentence; inputting the initial response into a preset emotion generation model, and obtaining at least two candidate responses carrying emotion output by the emotion generation model in response to the initial response; inputting the candidate responses and the chat sentence into a trained deep reinforcement learning network model to obtain the deep reinforcement learning value of each candidate response; and returning the candidate response with the largest deep reinforcement learning value as the response sentence of the chat sentence.
  • An emotional machine chat device includes:
  • an obtaining module, configured to obtain the chat sentence input by the user;
  • a generating module, configured to input the chat sentence into a preset response generation model, and obtain an initial response output by the response generation model in response to the chat sentence;
  • a processing module, configured to input the initial response into a preset emotion generation model, and obtain at least two candidate responses carrying emotion output by the emotion generation model in response to the initial response;
  • a calculation module, configured to input the candidate responses and the chat sentence into a trained deep reinforcement learning network model to obtain the deep reinforcement learning value of each candidate response;
  • an execution module, configured to return the candidate response with the largest deep reinforcement learning value as the response sentence of the chat sentence.
  • A computer device includes a memory and a processor, wherein computer readable instructions are stored in the memory, and when the computer readable instructions are executed by the processor, the processor implements the following steps: acquiring a chat sentence input by the user; inputting the chat sentence into a preset response generation model, and obtaining an initial response output by the response generation model in response to the chat sentence; inputting the initial response into a preset emotion generation model, and obtaining at least two candidate responses carrying emotion output by the emotion generation model in response to the initial response; inputting the candidate responses and the chat sentence into a trained deep reinforcement learning network model to obtain the deep reinforcement learning value of each candidate response; and returning the candidate response with the largest deep reinforcement learning value as the response sentence of the chat sentence.
  • A computer-readable storage medium has computer-readable instructions stored thereon, and when the computer-readable instructions are executed by a processor, the same steps are implemented: acquiring a chat sentence input by the user; inputting the chat sentence into a preset response generation model to obtain an initial response; inputting the initial response into a preset emotion generation model to obtain at least two candidate responses carrying emotion; inputting the candidate responses and the chat sentence into a trained deep reinforcement learning network model to obtain the deep reinforcement learning value of each candidate response; and returning the candidate response with the largest deep reinforcement learning value as the response sentence of the chat sentence.
  • The beneficial effects of the embodiments of the present application are as follows: the chat sentence input by the user is acquired; the chat sentence is input into a preset response generation model to obtain an initial response output by the response generation model in response to the chat sentence; the initial response is input into a preset emotion generation model to obtain at least two candidate responses carrying emotion output by the emotion generation model in response to the initial response; the candidate responses and the chat sentence are input into a trained deep reinforcement learning network model to obtain the deep reinforcement learning value of each candidate response; and the candidate response with the largest deep reinforcement learning value is returned as the response sentence of the chat sentence. In this way, an emotional response is returned for the chat sentence entered by the user, making machine chat more natural and humane.
  • FIG. 1 is a schematic flowchart of the basic flow of an emotional machine chat method according to an embodiment of this application;
  • FIG. 2 is a schematic flowchart of generating an initial response according to an embodiment of this application;
  • FIG. 3 is a schematic flowchart of generating an initial response through a question-and-answer knowledge base according to an embodiment of this application;
  • FIG. 4 is a schematic flowchart of the training process of an emotion generation model according to an embodiment of this application;
  • FIG. 5 is a schematic flowchart of the training process of a deep reinforcement learning network according to an embodiment of this application;
  • FIG. 6 is a basic structural block diagram of an emotional machine chat device according to an embodiment of this application;
  • FIG. 7 is a block diagram of the basic structure of a computer device according to an embodiment of this application.
  • The terms "terminal" and "terminal equipment" used herein include both wireless-signal receiver equipment, that is, equipment that has only a wireless signal receiver without transmitting capability, and equipment with receiving and transmitting hardware, that is, equipment capable of two-way communication over a two-way communication link.
  • Such equipment may include: cellular or other communication equipment with a single-line display, a multi-line display, or no multi-line display; PCS (Personal Communications Service) equipment, which may combine voice and data processing, fax, and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio-frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar, and/or a GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other device that has and/or includes a radio-frequency receiver.
  • The "terminal" and "terminal equipment" used herein may be portable, transportable, installed in vehicles (aviation, sea, and/or land), or suitable and/or configured to operate locally and/or to run in a distributed form at any location on the earth and/or in space.
  • The "terminal" and "terminal equipment" used herein may also be communication terminals, Internet terminals, or music/video playback terminals, for example, PDAs, MIDs (Mobile Internet Devices), and/or mobile phones with music/video playback functions, or devices such as smart TVs and set-top boxes.
  • FIG. 1 is a schematic diagram of the basic flow of an emotional machine chat method in this embodiment.
  • an emotional machine chat method includes the following steps:
  • S101: Obtain the chat sentence entered by the user.
  • the language information input by the user is acquired through the interactive page on the terminal.
  • the received information can be text information or voice information.
  • When voice information is received, it is converted into text information through a voice recognition device.
  • S102: Input the chat sentence into a preset response generation model, and obtain an initial response output by the response generation model in response to the chat sentence. The response generation model can be a trained Seq2Seq model.
  • The specific training process is to prepare the training corpus, that is, input sequences and their corresponding output sequences; input each input sequence into the Seq2Seq model, calculate the probability of the corresponding output sequence, and adjust the parameters of the Seq2Seq model so that, over the entire sample set (all input sequences), the probability of the Seq2Seq model outputting the corresponding output sequences is maximized.
  • The process of using the Seq2Seq model to generate the initial response is as follows: first, the chat sentence is vectorized, for example, using one-hot vocabulary encoding to obtain word vectors, which are input into the Encoder layer, where the Encoder layer is a multi-layer neural network using the bidirectional LSTM (Long Short-Term Memory) layer as the basic neuron unit;
  • the output state vector of the Encoder is input into the Decoder layer, where the Decoder layer is also a multi-layer neural network using the bidirectional LSTM layer as the basic neuron unit;
  • the final_state state vector output by the Decoder layer is input into the Softmax layer, and the initial response content with the highest probability is obtained.
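  • As an illustration of the encoder-decoder flow described above, the following minimal sketch (Python/PyTorch) embeds the chat sentence, encodes it with a bidirectional LSTM, and greedily decodes the most probable tokens through a softmax layer. The vocabulary size, dimensions, unidirectional decoder, and greedy decoding are illustrative assumptions, not the exact architecture of the application.

    # Minimal Seq2Seq sketch: embedded chat sentence -> bidirectional-LSTM encoder
    # -> LSTM decoder -> softmax over the vocabulary (greedy decoding).
    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        def __init__(self, vocab_size, emb_dim=128, hidden=256, layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            # Encoder: multi-layer bidirectional LSTM, as in the description above.
            self.encoder = nn.LSTM(emb_dim, hidden, num_layers=layers,
                                   bidirectional=True, batch_first=True)
            # Decoder: kept unidirectional here so tokens can be generated left to right.
            self.decoder = nn.LSTM(emb_dim, hidden * 2, num_layers=layers, batch_first=True)
            self.out = nn.Linear(hidden * 2, vocab_size)  # followed by softmax

        def forward(self, src, bos_id, max_len=20):
            batch = src.size(0)
            _, (h, c) = self.encoder(self.embed(src))     # encoder final_state
            # Merge the two directions of each encoder layer into one decoder state.
            n = self.decoder.num_layers
            h = h.view(n, 2, batch, -1).permute(0, 2, 1, 3).reshape(n, batch, -1)
            c = c.view(n, 2, batch, -1).permute(0, 2, 1, 3).reshape(n, batch, -1)
            token = torch.full((batch, 1), bos_id, dtype=torch.long)
            response = []
            for _ in range(max_len):
                dec_out, (h, c) = self.decoder(self.embed(token), (h, c))
                probs = torch.softmax(self.out(dec_out[:, -1]), dim=-1)
                token = probs.argmax(dim=-1, keepdim=True)  # most probable next token
                response.append(int(token))
            return response

    model = Seq2Seq(vocab_size=5000)
    initial_response_ids = model(torch.randint(0, 5000, (1, 6)), bos_id=1)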
  • In some embodiments, machine chat is applied to a question-answering scenario, and the response generation model adopted is a question-and-answer knowledge base.
  • In other embodiments, machine chat is used both to accompany the user in small talk and to answer the user's questions.
  • In this case, the response generation model is selected by first determining whether the scene is a question-answering scene; for the specific process, please refer to FIG. 2.
  • S103: Input the initial response into a preset emotion generation model, and obtain at least two candidate responses carrying emotion output by the emotion generation model in response to the initial response.
  • The preset emotion generation model contains at least two emotion generation sub-models, which transform the emotion of the initial response, for example, changing an initial response with neutral emotion into a response with positive emotion, or changing an initial response with neutral emotion into a response with negative emotion.
  • Each emotion generation sub-model is based on a pre-trained Seq2Seq model: an emotion generation sub-model is a Seq2Seq model that outputs a candidate response carrying a particular emotion.
  • Because the Seq2Seq models in the preset emotion generation model are trained on different corpora, the emotion factors they generate are different, and the emotional candidate responses they output are also different.
  • The initial response is input into each Seq2Seq model in the preset emotion generation model, and candidate responses carrying different emotions are output. It is worth noting that the Seq2Seq models used for emotion generation here are different from the aforementioned Seq2Seq model used for generating the initial response; the specific training process of the Seq2Seq models used for emotion generation is shown in FIG. 4.
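  • The fan-out described above can be sketched as follows: the loop simply feeds the same initial response to every emotion Seq2Seq sub-model and collects the emotion-carrying candidates. The sub-model names and the lambda stand-ins are illustrative placeholders, not components of the application.

    from typing import Callable, Dict, List

    def generate_candidates(initial_response: str,
                            emotion_submodels: Dict[str, Callable[[str], str]]) -> List[str]:
        """Run the neutral initial response through each emotion Seq2Seq sub-model."""
        return [submodel(initial_response) for submodel in emotion_submodels.values()]

    # Placeholder sub-models; a real system would load one trained Seq2Seq parameter
    # file per emotion type and call its decoding routine here.
    submodels = {
        "positive": lambda s: s + "!",
        "negative": lambda s: s + " ...",
    }
    candidates = generate_candidates("The weather today is sunny, 25 degrees", submodels)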
  • S104: Input the candidate responses and the chat sentence into a trained deep reinforcement learning network model to obtain the deep reinforcement learning value of each candidate response.
  • The deep reinforcement learning network combines the perception ability of a deep learning network with the decision-making ability of a reinforcement learning network, and determines which candidate response to adopt by calculating the reinforcement learning value of each candidate response. The deep reinforcement learning network has the following loss function: L(w) = E[(Q − Q̂)²], where Q is the true deep reinforcement learning value and Q̂ is the deep reinforcement learning value predicted by the deep reinforcement learning network.
  • The training process of the deep reinforcement learning network is to first prepare training samples.
  • Each training sample contains an input chat sentence, the candidate responses corresponding to that chat sentence, and the deep reinforcement learning value of each candidate response; the deep reinforcement learning value is labelled according to preset rules. For example, when a candidate response to a chat sentence causes the user to directly end the conversation, the deep reinforcement learning value of that candidate response is low; when a candidate response to a chat sentence produces a positive change in emotion in the user's next chat sentence, the deep reinforcement learning value of that candidate response is high.
  • S105: The candidate response with the largest deep reinforcement learning value is returned as the response sentence of the chat sentence; it is considered to be the most appropriate response to the chat sentence input by the current user.
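  • A minimal sketch of this selection step, under the assumption that the chat sentence and each candidate are already encoded as fixed-length feature vectors: a value network scores every (chat sentence, candidate) pair and the index of the largest predicted value is returned. The network shape and feature encoding are illustrative, not the application's exact design.

    import torch
    import torch.nn as nn

    class QValueNet(nn.Module):
        """Predicts a deep reinforcement learning value for a (chat sentence, candidate) pair."""
        def __init__(self, feat_dim=64):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(feat_dim * 2, 128), nn.ReLU(), nn.Linear(128, 1))

        def forward(self, chat_vec, cand_vec):
            return self.mlp(torch.cat([chat_vec, cand_vec], dim=-1)).squeeze(-1)

    def select_response(chat_vec, cand_vecs, net):
        """Return the index of the candidate response with the largest predicted value."""
        with torch.no_grad():
            values = net(chat_vec.expand(cand_vecs.size(0), -1), cand_vecs)
        return int(values.argmax())

    net = QValueNet()
    best_index = select_response(torch.randn(64), torch.randn(3, 64), net)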
  • the response sentence is returned to the client terminal, and the text information is displayed on the terminal screen.
  • The text information can also be converted to audio first, and the speech is then output through the audio output device of the terminal.
  • In some embodiments, the preset response generation model includes M response generation sub-models, where M is a positive integer greater than 1, and inputting the chat sentence into the preset response generation model to obtain the initial response includes the following steps.
  • When machine chat is applied to a variety of scenarios, for example, to both question-answering scenes and non-question-answering scenes, the scene is first identified, and the corresponding response generation sub-model is then determined according to the scene, which makes the generated response more targeted.
  • The scene recognition model can determine whether the scene is a question-answering scene or a non-question-answering scene based on keywords, by judging whether the input chat sentence contains keywords that express a question, such as "?", "what", "how much", "where", "how", and other interrogative particles. A regular matching algorithm can also be used to judge whether the chat sentence is a question.
  • A regular expression is a logical formula for string manipulation: predefined specific characters, and combinations of these specific characters, form a "rule string", and this "rule string" is used to express a filtering logic for strings.
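  • A small sketch of such a regular-matching judgement is shown below; the pattern encodes question features (question marks and common interrogative words) and is only an illustrative assumption, not the expression used by the application.

    import re

    # Question marks plus interrogative words such as "what", "how much", "where", "how".
    QUESTION_PATTERN = re.compile(r"[?？]|什么|多少|哪里|怎么|如何")

    def is_question_scene(chat_sentence: str) -> bool:
        """True -> question-answering scene; False -> non-question-answering scene."""
        return QUESTION_PATTERN.search(chat_sentence) is not None

    print(is_question_scene("今天天气怎么样"))  # True: contains the interrogative "怎么"
    print(is_question_scene("今天心情不错"))    # False: treated as a non-question scene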
  • In some embodiments, after identifying whether the scene is a question-answering scene or a non-question-answering scene, the scenes can be further subdivided; for example, the non-question-answering scene can be subdivided into small talk, appreciation, and complaints, and the question-answering scene can be subdivided into pre-sales consultation, after-sales service, and so on.
  • The subdivided scenes can be judged by preset keyword lists: a keyword list is preset for each type of subdivided scene, and when a keyword extracted from the input chat sentence is the same as a word in the keyword list corresponding to a certain subdivided scene, the input chat sentence is considered to correspond to that subdivided scene.
  • In some embodiments, a pre-trained LSTM-CNN neural network model is used for scene recognition. Specifically, the input content is first segmented into Chinese words using a basic word segmentation database, stop words and punctuation marks are removed in turn, and word embedding vectors are obtained through a word vector model and then passed into the LSTM-CNN neural network model. That is, the word embedding vectors enter multi-layer LSTM neural units to obtain the state vector and output of each stage; convolution and pooling operations (CNN) are then performed on the state vectors of each stage to obtain an integrated vector; finally, the integrated vector is input into a softmax function to obtain the probability of each scene, and the scene with the highest probability is selected as the scene corresponding to the input chat sentence.
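  • The LSTM-CNN pipeline just described can be sketched as follows (Python/PyTorch): the word embeddings pass through multi-layer LSTM units, the per-step state vectors are convolved and pooled into an integrated vector, and a softmax gives the probability of each scene. Layer sizes, vocabulary size, and the number of scenes are illustrative assumptions.

    import torch
    import torch.nn as nn

    class LSTMCNNSceneClassifier(nn.Module):
        def __init__(self, vocab_size, n_scenes, emb_dim=100, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2, batch_first=True)
            self.conv = nn.Conv1d(hidden, 64, kernel_size=3, padding=1)
            self.pool = nn.AdaptiveMaxPool1d(1)
            self.fc = nn.Linear(64, n_scenes)

        def forward(self, token_ids):
            states, _ = self.lstm(self.embed(token_ids))       # state vector of each stage
            feats = self.conv(states.transpose(1, 2))          # convolution over the states
            integrated = self.pool(feats).squeeze(-1)          # pooled, integrated vector
            return torch.softmax(self.fc(integrated), dim=-1)  # probability of each scene

    clf = LSTMCNNSceneClassifier(vocab_size=5000, n_scenes=2)
    scene_probs = clf(torch.randint(0, 5000, (1, 12)))         # ids of the segmented words
    scene = int(scene_probs.argmax(dim=-1))                    # highest-probability scene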
  • According to the scene, the response generation model determines the response generation sub-model corresponding to the chat sentence; the response generation model presets M response generation sub-models, and each response generation sub-model has a mapping relationship with a scene.
  • After the scene of the input chat sentence is determined, the response generation sub-model corresponding to the chat sentence input by the user is determined according to the mapping relationship between scenes and response generation sub-models.
  • In some embodiments, the mapping relationship between the response generation sub-models and the scenes is as follows: when the scene is a question-answering type, the question-and-answer knowledge base is used as the response generation sub-model; when the scene is a non-question-answering type, the trained Seq2Seq model is used.
  • The chat sentence is input into the response generation sub-model corresponding to the scene, and the response generation sub-model outputs the initial response in response to the chat sentence.
  • When the scene is a non-question-answering scene, the initial response is generated by the Seq2Seq model.
  • When the scene is a question-answering scene, the initial response is generated through the question-and-answer knowledge base; this process is shown in FIG. 3 and described below.
  • For word segmentation of the chat sentence, the two-way maximum matching method is adopted in the embodiment of this application.
  • the two-way maximum matching method is a dictionary-based word segmentation method.
  • the dictionary-based word segmentation method is to match the Chinese character string to be analyzed with the entry in a machine dictionary according to a certain strategy. If a certain character string is found in the dictionary, the matching is successful.
  • the dictionary-based word segmentation method is divided into forward matching and reverse matching according to different scanning directions, and divided into maximum matching and minimum matching according to the difference in length.
  • The two-way maximum matching method compares the word segmentation results obtained by the forward maximum matching method and the reverse maximum matching method to determine the correct segmentation. According to research, for about 90.0% of Chinese sentences, the results of the forward maximum matching method and the reverse maximum matching method coincide completely and are correct.
  • The word segmentation result can also be matched against a preset stop word list to remove stop words and obtain the keywords of the chat sentence.
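  • A hedged sketch of this segmentation step: forward and reverse maximum matching are both run against a dictionary, the result with fewer words is kept (a simplified tie-breaking rule), and stop words are then removed to leave the keywords. The tiny dictionary and stop-word list are illustrative only.

    def forward_max_match(text, dictionary, max_len=4):
        words, i = [], 0
        while i < len(text):
            for size in range(min(max_len, len(text) - i), 0, -1):
                if size == 1 or text[i:i + size] in dictionary:
                    words.append(text[i:i + size]); i += size; break
        return words

    def reverse_max_match(text, dictionary, max_len=4):
        words, j = [], len(text)
        while j > 0:
            for size in range(min(max_len, j), 0, -1):
                if size == 1 or text[j - size:j] in dictionary:
                    words.insert(0, text[j - size:j]); j -= size; break
        return words

    def bidirectional_max_match(text, dictionary):
        fwd, rev = forward_max_match(text, dictionary), reverse_max_match(text, dictionary)
        return fwd if len(fwd) <= len(rev) else rev   # prefer the coarser segmentation

    dictionary = {"北京", "天气", "怎么样", "今天"}
    stop_words = {"的", "了", "吗"}
    tokens = bidirectional_max_match("今天北京天气怎么样", dictionary)
    keywords = [w for w in tokens if w not in stop_words]  # keywords of the chat sentence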
  • The Q&A knowledge base is then searched according to the keywords, and search results matching the keywords are obtained.
  • In some embodiments, a third-party search engine can be used to search the Q&A knowledge base.
  • When the Q&A knowledge base is searched by keywords and multiple retrieval results are obtained, in the embodiment of this application the top-ranked result is used as the initial response to the chat sentence.
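  • An illustrative sketch of this retrieval step: each knowledge-base entry is scored by how many of the extracted keywords it matches, and the top-ranked answer becomes the initial response. The dictionary-shaped knowledge base and scoring rule are assumptions; a production system could instead delegate ranking to a third-party search engine.

    def search_knowledge_base(keywords, kb):
        """Return the answer of the top-ranked entry (most keyword matches), or None."""
        scored = [(sum(k in question for k in keywords), answer)
                  for question, answer in kb.items()]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return scored[0][1] if scored and scored[0][0] > 0 else None

    kb = {
        "北京今天天气怎么样": "北京今天晴，气温25度。",
        "退货流程是什么": "请在订单页面点击申请退货。",
    }
    initial_response = search_knowledge_base(["北京", "天气"], kb)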
  • The emotion generation model is based on N pre-trained Seq2Seq models; each trained Seq2Seq model adds a different emotion to the initial response. The training of any one of these Seq2Seq models includes the following steps:
  • S131: Obtain a training corpus, where the training corpus includes a number of input sequence and output sequence pairs, and the output sequence is the expression of the input sequence with a specified emotion type;
  • The training corpus consists of a number of sequence pairs, each containing an input sequence and an output sequence, where the output sequence expresses the input sequence with the specified emotion type. For example, if the input sequence is the neutral expression "Today's weather is sunny, the temperature is 25 degrees, and the air quality index is 20", the expected output sequence is the positive expression "The weather is great today, the temperature is a comfortable 25 degrees, and the air quality is good".
  • S132: Input the input sequence into the Seq2Seq model, and adjust the parameters of the Seq2Seq model so that the Seq2Seq model has the greatest probability of outputting the output sequence in response to the input sequence.
  • When training ends, the parameter file obtained at this time defines the Seq2Seq model that generates the specified emotion type.
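  • A hedged sketch of steps S131-S132 for one emotion sub-model: maximizing the probability of the emotion-carrying output sequence is implemented as minimizing token-level cross-entropy under teacher forcing. The tiny model, random token ids, and the number of updates are illustrative assumptions, not the application's training setup.

    import torch
    import torch.nn as nn

    class TinySeq2Seq(nn.Module):
        def __init__(self, vocab_size, dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.encoder = nn.LSTM(dim, dim, batch_first=True)
            self.decoder = nn.LSTM(dim, dim, batch_first=True)
            self.out = nn.Linear(dim, vocab_size)

        def forward(self, src, tgt_in):
            _, state = self.encoder(self.embed(src))
            dec_out, _ = self.decoder(self.embed(tgt_in), state)
            return self.out(dec_out)                  # logits; softmax is inside the loss

    vocab_size = 1000
    model = TinySeq2Seq(vocab_size)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One (input sequence, output sequence) pair from the corpus, already mapped to ids;
    # tgt_in is the output sequence shifted right (starting with a BOS id), tgt_out is
    # the output sequence itself.
    src = torch.randint(0, vocab_size, (1, 8))
    tgt_in = torch.randint(0, vocab_size, (1, 9))
    tgt_out = torch.randint(0, vocab_size, (1, 9))

    for _ in range(3):                                # a few illustrative updates
        logits = model(src, tgt_in)                   # (batch, tgt_len, vocab)
        loss = criterion(logits.reshape(-1, vocab_size), tgt_out.reshape(-1))
        optimizer.zero_grad(); loss.backward(); optimizer.step()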
  • the training of the deep reinforcement learning network model is performed through the following steps:
  • S141: Obtain training samples, where each training sample includes the input chat sentence, the candidate responses corresponding to the chat sentence, and the deep reinforcement learning value of each candidate response.
  • Each training sample contains the input chat sentence, the candidate responses corresponding to the chat sentence, and the deep reinforcement learning value of each candidate response; the deep reinforcement learning value is labelled according to preset rules. For example, if a candidate response causes the user to directly end the dialogue, the deep reinforcement learning value of that candidate response is low;
  • if a candidate response to the chat sentence causes a positive change in emotion in the user's next round of chat sentences, the deep reinforcement learning value of that candidate response is high.
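  • The labelling rules quoted above can be sketched as a small function; the concrete numeric values and the emotion scores it consumes are illustrative assumptions.

    def label_deep_rl_value(user_ended_dialogue, emotion_before, emotion_after):
        """Assign a training label to one (chat sentence, candidate response) sample."""
        if user_ended_dialogue:
            return 0.0   # the user ended the conversation: low value
        if emotion_after > emotion_before:
            return 1.0   # positive emotional change in the next sentence: high value
        return 0.5       # otherwise: middling value

    sample_value = label_deep_rl_value(False, emotion_before=0.1, emotion_after=0.6)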
  • S142: Input the training samples into the deep reinforcement learning network model, and obtain the deep reinforcement learning value predicted by the deep reinforcement learning network model.
  • Deep reinforcement learning can be compared with supervised learning. Deep reinforcement learning tasks are usually described by a Markov decision process: the robot is in an environment, and each state is the robot's perception of the environment; when the robot performs an action, the environment transfers to another state according to a probability, and at the same time the environment gives the robot a reward according to a reward function.
  • S143: Calculate the value of the loss function L(w) according to the predicted deep reinforcement learning value: substitute the deep reinforcement learning value predicted by the deep reinforcement learning network model and the actual (labelled) deep reinforcement learning value of the sample into the above loss function L(w), and calculate its value.
  • S144: Adjust the network parameters of the deep reinforcement learning network model, and end the training when the value of the loss function L(w) reaches its minimum. The goal of training is the convergence of the loss function L(w); that is, as the network parameters of the deep reinforcement learning network model are continuously adjusted, when the value of the loss function no longer decreases but instead increases, training ends.
  • The parameter file obtained at this point defines the deep reinforcement learning network model.
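  • A minimal sketch of steps S142-S144: the network predicts a value for each encoded (chat sentence, candidate response) pair, L(w) is computed as the mean squared error between the predicted and labelled values, and training stops once the loss no longer decreases. The feature encoding, network shape, and stopping check are illustrative assumptions.

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))  # value network
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()                             # implements L(w) = E[(Q - Q_hat)^2]

    features = torch.randn(32, 128)   # encoded (chat sentence, candidate response) pairs
    labels = torch.rand(32)           # deep RL values labelled by the preset rules (Q)

    previous_loss = float("inf")
    for epoch in range(200):
        predicted = net(features).squeeze(-1)          # Q_hat predicted by the network
        loss = loss_fn(predicted, labels)              # value of L(w)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
        if loss.item() >= previous_loss:               # loss stopped decreasing: converged
            break
        previous_loss = loss.item()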
  • Fig. 6 is a basic structural block diagram of an emotional machine chat device according to this embodiment.
  • an emotional machine chat device includes: an acquisition module 210, a generation module 220, a processing module 230, a calculation module 240, and an execution module 250.
  • The obtaining module 210 is used to obtain the chat sentence input by the user;
  • the generating module 220 is used to input the chat sentence into a preset response generation model, and obtain the initial response output by the response generation model in response to the chat sentence;
  • the processing module 230 is configured to input the initial response into a preset emotion generation model, and obtain at least two emotion-carrying candidate responses output by the emotion generation model in response to the initial response;
  • the calculation module 240 is used to input the candidate responses and the chat sentence into the trained deep reinforcement learning network model to obtain the deep reinforcement learning value of each candidate response;
  • the execution module 250 is used to return the candidate response with the largest deep reinforcement learning value as the response sentence of the chat sentence.
  • The embodiment of this application obtains the chat sentence input by the user; inputs the chat sentence into a preset response generation model, and obtains the initial response output by the response generation model in response to the chat sentence; inputs the initial response into a preset emotion generation model, and obtains at least two candidate responses carrying emotion output by the emotion generation model in response to the initial response; inputs the candidate responses and the chat sentence into the trained deep reinforcement learning network model to obtain the deep reinforcement learning value of each candidate response; and returns the candidate response with the largest deep reinforcement learning value as the response sentence of the chat sentence. In this way, an emotional response is returned for the chat sentence entered by the user, making machine chat more natural and more humane.
  • In some embodiments, the generation module includes: a first recognition sub-module, a first confirmation sub-module, and a first generation sub-module. The first recognition sub-module is used to input the chat sentence into a preset scene recognition model and obtain the scene output by the scene recognition model in response to the chat sentence; the first confirmation sub-module is configured to determine, according to the scene, the response generation sub-model corresponding to the chat sentence; and the first generation sub-module is used to input the chat sentence into the response generation sub-model and acquire the initial response output by the response generation sub-model in response to the chat sentence.
  • In some embodiments, the first recognition sub-module includes: a first matching sub-module, a second confirmation sub-module, and a third confirmation sub-module. The first matching sub-module is configured to match the chat sentence with a preset regular expression, where the preset regular expression contains the characteristics of a question sentence; the second confirmation sub-module is used to determine that the chat sentence corresponds to a question-answering scenario when the chat sentence matches the preset regular expression; and the third confirmation sub-module is used to determine that the chat sentence corresponds to a non-question-answering scenario when the chat sentence does not match the preset regular expression.
  • In some embodiments, the first generation sub-module includes: a first word segmentation sub-module, a first search sub-module, and a first execution sub-module. The first word segmentation sub-module performs word segmentation on the chat sentence to obtain the keywords of the chat sentence; the first search sub-module is used to search the question-and-answer knowledge base according to the keywords to obtain search results that match the keywords; and the first execution sub-module is used to return the search result as the initial response of the chat sentence.
  • In some embodiments, the emotion generation model in the emotional machine chat device is based on N pre-trained Seq2Seq models.
  • The emotional machine chat device further includes: a first acquisition sub-module and a first calculation sub-module. The first acquisition sub-module is used to acquire the training corpus, where the training corpus includes a number of input sequence and output sequence pairs, and the output sequence is the expression of the input sequence with a specified emotion type; the first calculation sub-module is configured to input the input sequence into the Seq2Seq model and adjust the parameters of the Seq2Seq model to maximize the probability of the Seq2Seq model outputting the output sequence in response to the input sequence.
  • The deep reinforcement learning network model has the loss function L(w) = E[(Q − Q̂)²], where Q is the true deep reinforcement learning value and Q̂ is the deep reinforcement learning value predicted by the deep reinforcement learning network.
  • In some embodiments, the emotional machine chat device further includes: a second acquisition sub-module, a second calculation sub-module, a third calculation sub-module, and a first adjustment sub-module. The second acquisition sub-module is used to obtain training samples, where each training sample includes an input chat sentence, the candidate responses corresponding to the chat sentence, and the deep reinforcement learning value of each candidate response; the second calculation sub-module is used to input the training samples into the deep reinforcement learning network model to obtain the deep reinforcement learning value predicted by the deep reinforcement learning network model; the third calculation sub-module is used to calculate the value of the loss function L(w) according to the predicted deep reinforcement learning value; and the first adjustment sub-module is used to adjust the network parameters of the deep reinforcement learning network model and end the training when the value of the loss function L(w) is at its minimum.
  • Fig. 7 is a block diagram of the basic structure of the computer device in this embodiment.
  • the computer device includes a processor, a nonvolatile storage medium, a memory, and a network interface connected through a system bus.
  • the non-volatile storage medium of the computer device stores an operating system, a database, and computer-readable instructions.
  • the database may store a control information sequence.
  • the processor of the computer device is used to provide calculation and control capabilities, and supports the operation of the entire computer device.
  • Computer readable instructions may be stored in the memory of the computer device, and when the computer readable instructions are executed by the processor, the processor is caused to execute an emotional machine chat method.
  • the network interface of the computer equipment is used to connect and communicate with the terminal.
  • the processor is used to execute the specific content of the acquisition module 210, the generation module 220, the processing module 230, the calculation module 240, and the execution module 250 in FIG. 6, and the memory stores the program codes and various data required to execute the above modules.
  • the network interface is used for data transmission between user terminals or servers.
  • The memory in this embodiment stores the program code and data required to execute all the sub-modules of the emotional machine chat method, and the server can call this program code and data to execute the functions of all the sub-modules.
  • The computer device obtains the chat sentence input by the user; inputs the chat sentence into a preset response generation model and obtains the initial response output by the response generation model in response to the chat sentence; inputs the initial response into the preset emotion generation model and obtains at least two candidate responses carrying emotion output by the emotion generation model in response to the initial response; inputs the candidate responses and the chat sentence into a trained deep reinforcement learning network model to obtain the deep reinforcement learning value of each candidate response; and returns the candidate response with the largest deep reinforcement learning value as the response sentence of the chat sentence. In this way, an emotional response is returned for the chat sentence entered by the user, making machine chat more natural and humane.
  • the present application also provides a storage medium storing computer-readable instructions.
  • When the computer-readable instructions are executed by one or more processors, the one or more processors execute the emotional machine chat method of any of the foregoing embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present application relate to an emotional machine chat method and apparatus, a computer device, and a storage medium, the method comprising: obtaining a chat sentence input by a user; inputting the chat sentence into a preset response generation model to obtain an initial response output by the response generation model in response to the chat sentence; inputting the initial response into a preset emotion generation model to obtain at least two candidate responses carrying emotion output by the emotion generation model in response to the initial response; inputting the candidate responses and the chat sentence into a trained deep reinforcement learning network model to obtain a deep reinforcement learning value for each candidate response; and returning the candidate response with the largest deep reinforcement learning value as the response sentence of the chat sentence. A response carrying emotion is returned for the chat sentence entered by the user, making machine chat more natural and humanized.
PCT/CN2019/103516 2019-01-28 2019-08-30 Emotional machine chat method and apparatus, computer device and storage medium WO2020155619A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910081989.6A CN109977201B (zh) 2019-01-28 2019-01-28 带情感的机器聊天方法、装置、计算机设备及存储介质
CN201910081989.6 2019-01-28

Publications (1)

Publication Number Publication Date
WO2020155619A1 true WO2020155619A1 (fr) 2020-08-06

Family

ID=67076749

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103516 WO2020155619A1 (fr) 2019-01-28 2019-08-30 Emotional machine chat method and apparatus, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN109977201B (fr)
WO (1) WO2020155619A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163078A (zh) * 2020-09-29 2021-01-01 彩讯科技股份有限公司 智能应答方法、装置、服务器及存储介质
CN112560447A (zh) * 2020-12-22 2021-03-26 联想(北京)有限公司 回复信息获取方法、装置及计算机设备
CN113360614A (zh) * 2021-05-31 2021-09-07 多益网络有限公司 生成式聊天机器人回复情感控制方法、装置、终端及介质
CN114187997A (zh) * 2021-11-16 2022-03-15 同济大学 一种面向抑郁人群的心理咨询聊天机器人实现方法

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977201B (zh) * 2019-01-28 2023-09-22 平安科技(深圳)有限公司 带情感的机器聊天方法、装置、计算机设备及存储介质
CN110717022A (zh) * 2019-09-18 2020-01-21 平安科技(深圳)有限公司 一种机器人对话生成方法、装置、可读存储介质及机器人
CN112750430A (zh) * 2019-10-29 2021-05-04 微软技术许可有限责任公司 在自动聊天中提供响应
CN111241250B (zh) * 2020-01-22 2023-10-24 中国人民大学 一种情感对话生成系统和方法
CN111400466A (zh) * 2020-03-05 2020-07-10 中国工商银行股份有限公司 一种基于强化学习的智能对话方法及装置
CN111553171B (zh) * 2020-04-09 2024-02-06 北京小米松果电子有限公司 语料处理方法、装置及存储介质
CN111985216A (zh) * 2020-08-25 2020-11-24 武汉长江通信产业集团股份有限公司 基于强化学习和卷积神经网络的情感倾向性分析方法
CN113094490B (zh) * 2021-05-13 2022-11-22 度小满科技(北京)有限公司 一种会话交互方法、装置、电子设备及存储介质
CN113868386A (zh) * 2021-09-18 2021-12-31 天津大学 一种可控情感对话生成的方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140324429A1 (en) * 2013-04-25 2014-10-30 Elektrobit Automotive Gmbh Computer-implemented method for automatic training of a dialogue system, and dialogue system for generating semantic annotations
CN108874972A (zh) * 2018-06-08 2018-11-23 青岛里奥机器人技术有限公司 一种基于深度学习的多轮情感对话方法
CN108960402A (zh) * 2018-06-11 2018-12-07 上海乐言信息科技有限公司 一种面向聊天机器人的混合策略式情感安抚系统
CN109129501A (zh) * 2018-08-28 2019-01-04 西安交通大学 一种陪伴式智能家居中控机器人
CN109977201A (zh) * 2019-01-28 2019-07-05 平安科技(深圳)有限公司 带情感的机器聊天方法、装置、计算机设备及存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809103B (zh) * 2015-04-29 2018-03-30 北京京东尚科信息技术有限公司 一种人机对话的语义分析方法及系统
CN106910513A (zh) * 2015-12-22 2017-06-30 微软技术许可有限责任公司 情绪智能聊天引擎
JP6660770B2 (ja) * 2016-03-02 2020-03-11 株式会社アイ・ビジネスセンター 対話システムおよびプログラム
US11580350B2 (en) * 2016-12-21 2023-02-14 Microsoft Technology Licensing, Llc Systems and methods for an emotionally intelligent chat bot
CN107480291B (zh) * 2017-08-28 2019-12-10 大国创新智能科技(东莞)有限公司 基于幽默生成的情感交互方法和机器人系统
CN107679234B (zh) * 2017-10-24 2020-02-11 上海携程国际旅行社有限公司 客服信息提供方法、装置、电子设备、存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140324429A1 (en) * 2013-04-25 2014-10-30 Elektrobit Automotive Gmbh Computer-implemented method for automatic training of a dialogue system, and dialogue system for generating semantic annotations
CN108874972A (zh) * 2018-06-08 2018-11-23 青岛里奥机器人技术有限公司 一种基于深度学习的多轮情感对话方法
CN108960402A (zh) * 2018-06-11 2018-12-07 上海乐言信息科技有限公司 一种面向聊天机器人的混合策略式情感安抚系统
CN109129501A (zh) * 2018-08-28 2019-01-04 西安交通大学 一种陪伴式智能家居中控机器人
CN109977201A (zh) * 2019-01-28 2019-07-05 平安科技(深圳)有限公司 带情感的机器聊天方法、装置、计算机设备及存储介质

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163078A (zh) * 2020-09-29 2021-01-01 彩讯科技股份有限公司 智能应答方法、装置、服务器及存储介质
CN112163078B (zh) * 2020-09-29 2024-06-07 彩讯科技股份有限公司 智能应答方法、装置、服务器及存储介质
CN112560447A (zh) * 2020-12-22 2021-03-26 联想(北京)有限公司 回复信息获取方法、装置及计算机设备
CN113360614A (zh) * 2021-05-31 2021-09-07 多益网络有限公司 生成式聊天机器人回复情感控制方法、装置、终端及介质
CN114187997A (zh) * 2021-11-16 2022-03-15 同济大学 一种面向抑郁人群的心理咨询聊天机器人实现方法

Also Published As

Publication number Publication date
CN109977201A (zh) 2019-07-05
CN109977201B (zh) 2023-09-22

Similar Documents

Publication Publication Date Title
WO2020155619A1 (fr) Emotional machine chat method and apparatus, computer device and storage medium
WO2020177282A1 (fr) Machine dialogue method and apparatus, computer device and storage medium
CN111914551B (zh) 自然语言处理方法、装置、电子设备及存储介质
US11068474B2 (en) Sequence to sequence conversational query understanding
CN111966800B (zh) 情感对话生成方法、装置及情感对话模型训练方法、装置
CN112528637B (zh) 文本处理模型训练方法、装置、计算机设备和存储介质
CN118349673A (zh) 文本处理模型的训练方法、文本处理方法及装置
CN113505205A (zh) 一种人机对话的系统和方法
CN114596844B (zh) 声学模型的训练方法、语音识别方法及相关设备
CN111428010A (zh) 人机智能问答的方法和装置
CN110083693A (zh) 机器人对话回复方法及装置
US11314951B2 (en) Electronic device for performing translation by sharing context of utterance and operation method therefor
CN111191450A (zh) 语料清洗方法、语料录入设备及计算机可读存储介质
CN114840671A (zh) 对话生成方法、模型的训练方法、装置、设备及介质
CN113421551B (zh) 语音识别方法、装置、计算机可读介质及电子设备
CN117272937B (zh) 文本编码模型训练方法、装置、设备及存储介质
CN116913278B (zh) 语音处理方法、装置、设备和存储介质
CN117971420A (zh) 任务处理、交通任务处理以及任务处理模型训练方法
CN117435696A (zh) 文本数据的检索方法、装置、电子设备及存储介质
CN117455009A (zh) 联邦学习方法、联邦预测方法、装置、设备及存储介质
CN112214592A (zh) 一种回复对话评分模型训练方法、对话回复方法及其装置
CN111797220A (zh) 对话生成方法、装置、计算机设备和存储介质
CN116384405A (zh) 文本处理方法,文本分类方法及情感识别方法
WO2020151318A1 (fr) Corpus construction method and apparatus based on a collector model, and computer device
CN117972160B (zh) 一种多模态信息处理方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19912414

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19912414

Country of ref document: EP

Kind code of ref document: A1