CN115905485A - Common-situation conversation method and system based on common-sense self-adaptive selection - Google Patents


Info

Publication number
CN115905485A
CN115905485A (application number CN202211422630.9A)
Authority
CN
China
Prior art keywords
common sense
common
feature
emotion
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211422630.9A
Other languages
Chinese (zh)
Inventor
Shen Xuli (沈旭立)
Cai Hua (蔡华)
Xue Xiangyang (薛向阳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN202211422630.9A priority Critical patent/CN115905485A/en
Publication of CN115905485A publication Critical patent/CN115905485A/en
Pending legal-status Critical Current


Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Machine Translation (AREA)

Abstract

The invention provides a common-situation dialogue method and system based on common-sense adaptive selection. Using the representation capability of neural networks, the method encodes the context and emotion recognition information of the dialogue to obtain the context information of the historical dialogue and the prior information of a common-sense map. The encoded feature information is then screened in a working space by a common-sense feature-coding selection algorithm, which adaptively unifies the emotion-cognition information so that the screened common-sense feature codes are consistent with the context emotion recognition information of the historical dialogue, and a common-situation dialogue text that better matches the user's feelings is output. By using the emotion information in the dialogue to assist in understanding the dialogue intention, the invention improves context-understanding ability and lets users feel that their emotional state is understood, thereby improving communication efficiency and user experience. The invention can also be applied to various unmanned systems and human-computer interaction scenarios, and has a wide application range and high practical value.

Description

Common-situation conversation method and system based on common-sense self-adaptive selection
Technical Field
The invention belongs to the field of computer application, and particularly relates to a common situation conversation method and system based on common sense adaptive selection.
Background
Co-emotion is the human ability to understand the emotions of others: through various emotional stimuli, people can experience others' feelings and perceive their psychological states. If a dialogue system has this co-emotion capability, it can recognize the speaker's emotion and respond in a more targeted way. In the field of natural language processing, current dialogue systems focus on the soundness of word and sentence structure but neglect the role of co-emotion ability in sustainable chat systems. Co-emotion is a basic human cognitive ability, and a speaker's emotional state can be understood better when combined with common-sense knowledge about speakers. Some recent dialogue systems therefore use common-sense maps to perceive human emotion during human-computer interaction. However, common-sense knowledge is complex, and unselected common sense interferes with the dialogue system's generation of reply text.
Existing research has not addressed adaptive screening of common-sense maps to understand human emotion and assist a dialogue system in generating co-emotional responses. The difficulty lies in unifying the emotional understanding carried by the common-sense map with that of the dialogue context, so that the system produces a context-consistent common-situation reply.
Disclosure of Invention
In order to solve the above problems, the invention provides a common-situation dialogue method and system that achieve context consistency, adopting the following technical scheme:
The invention provides a common-situation dialogue method based on common-sense self-adaptive selection, characterized by comprising the following steps:
Step S1: input historical dialogue text data and a predefined common-sense relation set into a pre-trained common-sense map generation model, and obtain a predicted context-related common-sense inference result set;
Step S2: based on a parameterized common-sense map encoder, input the common-sense inference result set and an emotion classification loss function, and obtain the feature coding set of the corresponding common-sense map and the common-sense-map emotion recognition information;
Step S3: based on a parameterized context encoder, input the historical dialogue text data and the emotion classification loss function, and obtain the feature coding vector of the context and the context emotion recognition information;
Step S4: to unify the common-sense-map emotion recognition information and the context emotion recognition information, input the feature coding set of the common-sense map, the feature coding vector of the context, and the emotion classification loss function into a simulated working space, and obtain the feature coding of the adaptively selected common-sense map using an adaptive common-sense feature-coding selection algorithm;
Step S5: input the feature coding of the adaptively selected common-sense map, combined with the feature coding vector of the context, into a parameterized neural network decoder, thereby obtaining a common-situation dialogue reply text that is emotionally unified with the historical dialogue text data.
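Steps S1 to S5 can be read as a single data-flow pipeline. The sketch below is a toy rendering under stated assumptions: every model (the generation model of S1, the encoders of S2 and S3, and the decoder of S5) is replaced by a trivial stand-in function, the encodings are plain numbers rather than learned vectors, and the adaptive selection of step S4 is reduced to a nearest-to-context rule. It illustrates only how data flows between the five steps, not the patent's trained networks.

```python
def infer_commonsense(history, relations=("N", "W", "O")):   # step S1
    # Stand-in generation model: one pseudo-inference per predefined relation.
    return [f"{r}|{history}" for r in relations]

def encode_knowledge(inferences):                            # step S2 -> Z_r
    # Stand-in common-sense map encoder: encode each inference as a number.
    return [float(len(s)) for s in inferences]

def encode_context(history):                                 # step S3 -> z_ctx
    # Stand-in context encoder.
    return float(len(history))

def select_knowledge(Z_r, z_ctx):                            # step S4
    # Toy stand-in for the working space: keep the encoding closest to z_ctx.
    return min(Z_r, key=lambda z: abs(z - z_ctx))

def decode_reply(z_sel, z_ctx):                              # step S5
    # Stand-in neural network decoder.
    return f"reply conditioned on knowledge={z_sel:g} and context={z_ctx:g}"

def common_situation_dialogue(history):
    Z_r = encode_knowledge(infer_commonsense(history))
    z_ctx = encode_context(history)
    return decode_reply(select_knowledge(Z_r, z_ctx), z_ctx)
```

Each stand-in would be swapped for the corresponding trained model in a real system; the control flow is the part this sketch takes from the patent.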
The common-situation conversation method based on common-sense adaptive selection provided by the invention can also have the technical characteristics that the elements of the predefined common-sense relationship set at least comprise a conversation demand relationship, a conversation intention relationship and a conversation influence relationship.
The common-situation conversation method based on common-sense adaptive selection provided by the invention can also have the technical characteristics that the step S4 comprises the following substeps:
Step S4-1: use the adaptive common-sense feature-coding selection algorithm to simulate a working-space mechanism;
Step S4-2: input the feature coding set Z_r of the common-sense map, the parameterized neural network g_φ, the context feature coding vector z_ctx, and the emotion classification loss function L_emo into the simulated working space;
Step S4-3: in competition stage m, while the feature coding set Z_r of the common-sense map contains more than one coding, use the emotion classification loss function L_emo to compute the set f formed from the feature coding vectors encoded by the neural network g_φ and the context feature coding vector z_ctx:
f = {L_emo(g_φ(z_r^i), z_ctx) : z_r^i ∈ Z_r},
and record the index I of the maximum loss within the feature coding set Z_r of the common-sense map:
I = argmax_i f_i;
Step S4-4: take the gradients of the elements of the set f to form the matrix G_m [equation image not reproduced];
Step S4-5: in competition stage m, compute the momentum δ_m that aligns the features of the common-sense map with the context features by solving for the Lagrange multiplier λ. The multiplier λ is obtained by solving a quadratic programming problem over G_m [equation images not reproduced]; the momentum is then calculated as
δ_m = -G_m^T λ;
Step S4-6: in broadcast stage m, input the feature coding set Z_r of the common-sense map into the neural network g_α to obtain the decoded knowledge representation h_k = g_α(Z_r), and output the knowledge coding corrected by the momentum δ_m [equation image not reproduced];
Step S4-7: after the processing of steps S4-1 to S4-6, remove the I-th common-sense-map coding from the feature coding set until only one feature coding of the common-sense map remains, thereby obtaining the feature coding of the common-sense map selected according to the context emotion recognition information; wherein the competition stage m and the broadcast stage m are the m cycles of a WHILE loop, and in each cycle m, removing the index I with the maximum loss from the feature coding set Z_r of the common-sense map eliminates the knowledge representation unrelated to the context; this is the adaptive selection process.
The invention also provides a common-situation dialogue system based on common-sense self-adaptive selection, characterized by comprising: a media data acquisition module for acquiring historical dialogue text data; a calculation module for encoding and decoding the historical dialogue text data and a pre-stored common-sense map and then using the adaptive common-sense feature-coding selection algorithm to obtain the feature coding of the common-sense map consistent with the emotion of the encoded historical dialogue text data, so as to generate a common-situation dialogue reply text; and a result display module for displaying the historical dialogue text data acquired by the media data acquisition module and the common-situation dialogue reply text output by the calculation module.
The common-situation dialogue system based on common-sense adaptive selection provided by the invention can also have the technical characteristics that the calculation module comprises a perception module, a cognition module, a working-space module, and a generation module. The perception module is a text pre-training model used to encode the input historical dialogue text data and output the corresponding historical dialogue code; the cognition module holds a pre-stored common-sense map and a common-sense inference generation model, and encodes the input historical dialogue text data to output the corresponding common-sense codes; the working-space module adaptively selects among the common-sense codes using the adaptive common-sense feature-coding selection algorithm, selecting and outputting the common-sense code whose emotion is consistent with the historical dialogue code as the feature coding of the adaptively selected common-sense map; the generation module is a text dialogue generation model that generates an emotionally unified common-situation dialogue reply text from the feature coding of the adaptively selected common-sense map and the input historical dialogue code.
Action and Effect of the invention
According to the common-situation dialogue method and system based on common-sense adaptive selection, the representation capability of neural networks is used to encode the context and emotion recognition information of the dialogue, obtaining the context information of the historical dialogue and the prior information of the common-sense map. The encoded feature information is then screened in a working space by the common-sense feature-coding selection algorithm, which adaptively unifies the emotion-cognition information, ensuring that the screened common-sense feature codes are consistent with the context emotion recognition information of the historical dialogue and outputting a common-situation dialogue text that better matches the user's feelings. Compared with existing dialogue systems, which focus only on grammatical accuracy and fluent sentence generation while neglecting emotion understanding, the method and system use the emotion information in the dialogue to assist in understanding the dialogue intention, improving context-understanding ability and letting users feel that their emotional state is understood, thereby improving communication efficiency and user experience. In addition, the invention can be applied to various unmanned systems and human-computer interaction scenarios, with a wide application range and high practical value.
Drawings
FIG. 1 is a flow chart of a common-situation conversation method based on common-sense adaptive selection according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a common-situation conversation system based on common-sense adaptive selection according to an embodiment of the present invention.
Detailed Description
In order to provide a technical scheme for realizing co-emotional dialogue with consistent context, the invention regards dialogue as the decision result of an adult cognitive process: a dialogue system first understands a person's emotion and then replies in a targeted manner. Motivated by this, the invention draws on a model of conscious human cognition and decision-making, namely global workspace theory, a cognitive model of human decision-making proposed by the American psychologist Bernard Baars that is well suited to explaining the human conversational process. The theory assumes that human decision-making (dialogue) in the conscious state is associated with the workspace of a "broadcast system", which integrates the input perceptual information and outputs a reply by unifying the contexts of the individual perceptual inputs.
Combining this theory, the invention models a common-situation dialogue framework that conforms to the global workspace mechanism, taking the current conversational context and human common sense as the input sources of perceptual information. After the perceptual information passes through the working space, the common sense that fits the context is selected adaptively so that emotion recognition becomes consistent, and the corresponding reply is output. The scheme can also report, after the dialogue, the adaptive cognitive process behind the system's reply, helping system developers optimize the dialogue system.
In order to make the technical means, the creation features, the achievement purposes and the effects of the invention easy to understand, the common situation conversation method and the common situation conversation system based on the common sense adaptive selection of the invention are specifically described below with reference to the embodiments and the drawings.
< example >
Fig. 1 is a schematic flow chart of a common-situation conversation method based on common-sense adaptive selection according to an embodiment of the present invention.
As shown in fig. 1, the common sense adaptive selection-based common-situation conversation method of the present embodiment includes the following steps:
step S1, generating a model based on a pre-trained common sense atlas
Figure BDA0003942614260000071
Inputting historical dialogue text data U for forward propagation, and acquiring a predicted context-related common sense inference result set E according to a predefined common sense relationship set r belonging to { N, W, O r 。/>
In the predefined common-sense relation set r ∈ {N, W, O, …}, N denotes the dialogue demand relation, W the dialogue intention relation, O the dialogue influence relation, and so on.
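As a concrete illustration, the relation set can be held in a small lookup structure. The sketch below is hypothetical: the tags N, W, and O and their meanings come from the patent text, while the helper `build_inference_inputs` and the idea of pairing each relation with the dialogue history (as step S1 would feed a common-sense generation model) are assumptions added for illustration.

```python
# Hypothetical encoding of the predefined relation set r in {N, W, O, ...};
# only the tags N/W/O and their meanings are taken from the patent text.
COMMON_SENSE_RELATIONS = {
    "N": "dialogue demand",     # what the speaker needs
    "W": "dialogue intention",  # what the speaker intends
    "O": "dialogue influence",  # how the dialogue affects the speaker
}

def build_inference_inputs(history, relations=COMMON_SENSE_RELATIONS):
    """Pair the dialogue history with every predefined relation, producing
    one inference query per relation for the generation model of step S1."""
    return [(history, tag, meaning) for tag, meaning in relations.items()]
```

A history string thus yields one inference query per relation; the generation model would map each query to a context-related common-sense inference.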
Step S2: based on the parameterized common-sense map encoder Enc_k, input the common-sense inference result set E_r and the emotion classification loss function L_emo, and output the feature coding set Z_r of the common-sense map and the emotion recognition information Emo_k of the common-sense map.
Step S3: based on the parameterized context encoder Enc_ctx, input the historical dialogue text data U and the emotion classification loss function L_emo, and obtain the feature coding vector z_ctx of the context of the historical dialogue text data and the context emotion recognition information Emo_ctx.
Step S4: to unify the emotion recognition information Emo_k of the common-sense map and the context emotion recognition information Emo_ctx, input the feature coding set Z_r of the common-sense map and the context feature coding vector z_ctx into the simulated working space, and obtain the feature coding of the adaptively selected common-sense map using the adaptive common-sense feature-coding selection algorithm.
The specific process of the step is as follows:
s4-1, simulating a working space mechanism by utilizing a self-adaptive selection common sense feature coding algorithm;
step S4-2, the feature coding set Z of the common sense map is collected r Parameterized neural network g φ Context-specific coded vector z ctx And sentiment classification loss function
Figure BDA0003942614260000081
Inputting to a simulation workspace;
step S4-3, in the competition stage m, when the feature coding set Z of the common sense map r When the number of codes is more than 1, the emotion classification loss function is utilized
Figure BDA0003942614260000082
Calculating through a neural network g φ Coded feature code vector and context feature code vector z ctx Set of compositions f:
Figure BDA0003942614260000083
and recorded as a feature code set Z of a common sense graph r Index I of the maximum loss of (1):
Figure BDA0003942614260000084
s4-4, solving the gradient of the elements in the set f to form a matrix Gm:
Figure BDA0003942614260000085
s4-5, calculating momentum delta of the features of the feature alignment context of the common sense map in the competition stage m by solving Lagrange multiplier lambda m
Figure BDA0003942614260000086
Figure BDA0003942614260000087
After solving the quadratic programming problem by using the above formula to obtain the Lagrange multiplier lambda, the momentum delta is calculated m
δ m =-G m T λ;
S4-6, in the broadcasting stage m, inputting the feature coding set Z of the common sense map r Through a neural network g α Obtaining a knowledge representation h after decoding k =g α (Z r ) And outputs a passing momentum delta m Modified knowledge coding
Figure BDA0003942614260000088
Figure BDA0003942614260000089
And S4-7, after the processing from the step S4-1 to the step S4-6, removing the I-th common sense map code from the feature code set of the common sense map until only one common sense map code is left in the feature code set, so as to obtain the feature code of the common sense map selected according to the context emotion information.
In the above process, the contention phase m and the broadcast phase m are represented as m cycles in the WHILE cycle. And, in each cycle m, rejecting feature code set Z of the common sense graph r The sequence number I with the medium maximum loss represents the removal of the knowledge representation which is not related to the context, the process is an adaptive selection process and can also be represented as a feature coding set Z in a common sense map r In, high to low alignment:
Figure BDA0003942614260000091
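As a concrete sketch of steps S4-1 to S4-7, the loop below mimics the competition/broadcast cycle with toy components. `emo_loss` is a stand-in squared-distance loss (the patent's L_emo is a learned emotion-classification loss), the encoder g_φ and decoder g_α are omitted, and the quadratic-programming solve for λ is replaced by uniform non-negative weights. Only the control flow follows the patent: score each encoding against the context, record the index I of the maximum loss, form the gradient matrix G_m, take the momentum δ_m = -G_m^T λ, and eliminate index I until one encoding remains.

```python
import numpy as np

def emo_loss(z, z_ctx):
    # Stand-in for the emotion classification loss L_emo: squared distance
    # between a common-sense feature encoding and the context encoding.
    return float(np.sum((z - z_ctx) ** 2))

def adaptive_select(Z_r, z_ctx):
    """WHILE loop over cycles m: the competition stage scores and eliminates;
    the broadcast stage would correct the knowledge representation h_k."""
    Z = [np.asarray(z, dtype=float) for z in Z_r]
    while len(Z) > 1:
        f = [emo_loss(z, z_ctx) for z in Z]          # the set f
        I = int(np.argmax(f))                        # index of maximum loss
        # Rows of G_m: gradient of each loss term w.r.t. its encoding
        # (for the squared loss above this is 2 * (z - z_ctx)).
        G_m = np.stack([2.0 * (z - z_ctx) for z in Z])
        lam = np.full(len(Z), 1.0 / len(Z))          # surrogate for the QP solve
        delta_m = -G_m.T @ lam                       # momentum delta_m = -G_m^T lambda
        # Broadcast stage: a decoder g_alpha would map Z to a knowledge
        # representation h_k and output h_k corrected by delta_m (omitted here).
        _ = delta_m
        del Z[I]                                     # adaptive elimination
    return Z[0]
```

With three candidate encodings, the two farthest from the context are eliminated in successive cycles and the closest one survives.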
step S5, coding the characteristics of the common sense map subjected to the self-adaptive selection
Figure BDA0003942614260000092
Feature encoding vector z in conjunction with historical dialogue ctx Input parameterized neural network decoder Dec c,k And acquiring a common-emotion conversation reply text unified with the emotion of the historical conversation text data:
Figure BDA0003942614260000093
fig. 2 is a schematic structural diagram of a common-situation conversation system based on common-sense adaptive selection according to an embodiment of the present invention.
As shown in fig. 2, the common sense dialogue system 100 of the present embodiment based on common sense adaptive selection includes a media data acquisition module 10, a calculation module 11, and a result display module 12.
The media data acquiring module 10 is configured to acquire historical dialogue text data, which may be acquired from a speech-to-text program or device, or may be locally stored text data.
The computing module 11 is configured to encode and decode the historical dialogue text data acquired by the media data acquisition module 10 and a pre-stored common sense map, and acquire a feature code of the common sense map that is consistent with the encoded emotion of the historical dialogue text data by using a self-adaptive selection common sense feature coding algorithm, so as to generate a common sense dialogue reply text.
The calculation module 11 has a sensing unit 111, a cognitive unit 112, a workspace unit 113, and a generation unit 114.
The sensing unit 111 is a text pre-training model, and is configured to encode input historical dialogue text data and output a corresponding historical dialogue code.
The cognitive unit 112 has a pre-stored common sense graph and a common sense inference generation model for encoding the input historical dialogue text data and outputting a corresponding common sense code.
The workspace unit 113 adaptively selects among the common-sense codes using the adaptive common-sense feature-coding selection algorithm, and selects and outputs the common-sense code whose emotion is consistent with the historical dialogue code as the feature coding of the adaptively selected common-sense map.
The generating unit 114 is a text dialogue generating model for generating a common-emotion dialogue reply text with uniform emotion according to the feature code of the adaptively selected common-sense graph and the inputted historical dialogue code.
The result display module 12 is configured to display the historical dialog text data acquired by the media data acquisition module 10 and the shared-emotion dialog reply text output by the computation module 11. The result display module 12 may be a computer or a mobile device.
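The four units of the calculation module can be wired together as in the minimal sketch below. All class names and the toy length-based "encoders" are assumptions for illustration only: the real units are trained models, and the workspace rule here is a trivial nearest-match stand-in for the adaptive selection algorithm.

```python
class PerceptionUnit:
    """Stand-in for the text pre-training model: encodes the history."""
    def encode(self, history):
        return float(len(history))

class CognitionUnit:
    """Stand-in for the common-sense map plus inference generation model."""
    def encode(self, history):
        # One toy common-sense code per predefined relation (N, W, O).
        return {tag: float(len(history) + i) for i, tag in enumerate(["N", "W", "O"])}

class WorkspaceUnit:
    """Toy adaptive selection: keep the code closest to the context code."""
    def select(self, codes, ctx_code):
        return min(codes.items(), key=lambda kv: abs(kv[1] - ctx_code))

class GenerationUnit:
    """Stand-in for the text dialogue generation model."""
    def decode(self, ctx_code, knowledge):
        tag, _ = knowledge
        return f"empathetic reply (context={ctx_code:g}, relation={tag})"

class ComputationModule:
    def __init__(self):
        self.perception = PerceptionUnit()
        self.cognition = CognitionUnit()
        self.workspace = WorkspaceUnit()
        self.generation = GenerationUnit()

    def reply(self, history):
        ctx = self.perception.encode(history)        # historical dialogue code
        codes = self.cognition.encode(history)       # common-sense codes
        chosen = self.workspace.select(codes, ctx)   # adaptive selection
        return self.generation.decode(ctx, chosen)   # reply text
```

The media data acquisition and result display modules would sit on either side of `ComputationModule.reply`, feeding it history text and showing its output.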
The self-adaptive selection-based co-situation conversation method and system can be applied to various unmanned systems or human-computer interaction scenes, such as communication interaction between a robot and a patient in a nursing scene; the driver and the automobile carry out voice question answering in the automatic driving process; or ordering voice interaction between the customer and the food delivery robot in the unmanned restaurant, and the like, and has the advantages of wide application range and high use value.
Examples effects and effects
According to the common-situation dialogue method and system based on adaptive selection provided by this embodiment, the representation capability of neural networks is used to encode the context and emotion recognition information of the dialogue, obtaining the context information of the historical dialogue and the prior information of the common-sense map. The encoded feature information is then screened in a working space by the common-sense feature-coding selection algorithm, which adaptively unifies the emotion-cognition information, ensuring that the screened common-sense feature codes are consistent with the context emotion recognition information of the historical dialogue and outputting a common-situation dialogue text that better matches the user's feelings. Compared with existing dialogue systems, which focus only on grammatical accuracy and fluent sentence generation while neglecting emotion understanding, this embodiment uses the emotion information in the dialogue to assist in understanding the dialogue intention, improving context understanding and letting users feel that their emotional state is understood, thereby improving communication efficiency and user experience.
Meanwhile, the co-situation dialog system provided by the embodiment can report the self-adaptive selection process in the working space, so that the developers are assisted to iterate and tune the dialog system and the common sense atlas.
The above-described embodiments are merely illustrative of specific embodiments of the present invention, and the present invention is not limited to the scope of the description of the above-described embodiments.

Claims (5)

1. A common situation conversation method based on common sense adaptive selection is characterized by comprising the following steps:
step S1, inputting historical dialogue text data and a predefined common sense relation set into a pre-trained common sense map generation model, and acquiring a predicted context-related common sense inference result set;
s2, inputting the common sense inference result set and the emotion classification loss function based on a parameterized common sense atlas encoder, and acquiring a feature coding set and common sense atlas emotion identification information of a corresponding common sense atlas;
s3, inputting the historical dialogue text data and the emotion classification loss function based on a parameterized context encoder to acquire a feature coding vector of a context and context emotion identification information;
s4, inputting the feature coding set of the common sense map, the feature coding vector of the context and the emotion classification loss function into a simulation working space for unifying the common sense map emotion recognition information and the context emotion recognition information, and acquiring the feature coding of the adaptively selected common sense map by utilizing an adaptive selection common sense feature coding algorithm;
and S5, inputting the feature codes of the self-adaptively selected common sense map to a parameterized neural network decoder in combination with the feature code vectors of the context, thereby acquiring a common-emotion dialogue reply text which is emotionally unified with the historical dialogue text data.
2. The common sense dialog method based on common sense adaptive selection of claim 1, characterized in that:
wherein the elements of the set of predefined common sense relationships include at least a conversation demand relationship, a conversation intention relationship, and a conversation impact relationship.
3. The common sense conversation method based on common sense adaptive selection according to claim 1, wherein:
wherein the step S4 comprises the following substeps:
s4-1, simulating a working space mechanism by utilizing the self-adaptive selection common sense feature coding algorithm;
step S4-2, the feature coding set Z of the common sense map is collected r Parameterized neural network g φ Context feature encoding vector Z ctx And sentiment classification loss function
Figure FDA0003942614250000027
Input to the simulation workspace;
step S4-3, in the competition stage m, when the feature coding set Z of the common sense map r When the number of codes is more than 1, the emotion classification loss function is utilized
Figure FDA0003942614250000021
Calculating through a neural network g φ Coded feature code vector and context feature code vector Z ctx Set of compositions f:
Figure FDA0003942614250000022
and recorded as a feature code set Z of a common sense map r Index I of the maximum loss of (1):
Figure FDA0003942614250000023
s4-4, solving the gradient of the elements in the set f to form a matrix G m
Figure FDA0003942614250000024
S4-5, calculating momentum delta of the features of the feature alignment context of the common sense map in the competition stage m by solving Lagrange multiplier lambda m
Figure FDA0003942614250000025
Figure FDA0003942614250000026
Solving the quadratic programming problem using the above equation is calledAfter the Greeny multiplier λ, the momentum δ is calculated m
δ m =-G m T λ;
S4-6, in the broadcasting stage m, inputting the feature coding set Z of the common sense map r Through a neural network g α Obtaining a knowledge representation h after decoding k =g α (Z r ) And outputs a passing momentum delta m Modified knowledge coding
Figure FDA0003942614250000031
Figure FDA0003942614250000032
S4-7, after the processing of the steps S4-1 to S4-6, removing the I common sense map code from the feature code set of the common sense map until only one feature code of the common sense map remains in the feature code set, so as to obtain the feature code of the common sense map selected according to the context emotion recognition information;
wherein the competition stage m and the broadcast stage m are expressed as m cycles in the WHILE cycle, and in each cycle m, the feature coding set Z of the common sense map is removed r The sequence number I of the medium maximum loss represents the removal of the knowledge representation which is not related to the context, and the process is the self-adaptive selection process.
4. A common sense conversation system based on adaptive selection of common sense, comprising:
a media data acquisition module, a calculation module and a result display module,
the media data acquisition module is used for acquiring historical dialogue text data,
the computing module is used for coding and decoding the historical dialogue text data and a pre-stored common sense map, acquiring feature codes of the common sense map consistent with the coded emotion of the historical dialogue text data by utilizing a self-adaptive selection common sense feature coding algorithm so as to generate a common sense dialogue reply text,
the result display module is used for displaying the historical dialogue text data acquired by the media data acquisition module and the shared situation dialogue reply text output by the calculation module.
5. A common sense conversation system based on common sense adaptive selection according to claim 4, wherein:
the computing module comprises a perception module, a cognition module, a working space module and a generation module,
the perception module is a text pre-training model and is used for encoding the input historical dialogue text data and outputting corresponding historical dialogue codes;
the cognition module is provided with a pre-stored common sense map and a common sense inference generation model, and is used for encoding the input historical dialogue text data and outputting corresponding common sense codes;
the working space module adaptively selects among the common sense codes based on the adaptive common sense feature code selection algorithm, selects the common sense code whose emotion is consistent with that of the historical dialogue codes as the feature code of the adaptively selected common sense map, and outputs the feature code;
the generation module is a text dialogue generation model and is used for generating a common-situation dialogue reply text with consistent emotion according to the feature code of the adaptively selected common sense map and the input historical dialogue codes.
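The four-module pipeline of claim 5 can be sketched as a simple composition. All class and parameter names below are hypothetical stand-ins invented for illustration; the actual modules are learned models (a text pre-training encoder, a common-sense inference model, the adaptive selection algorithm, and a dialogue generator), here represented as plain callables.

```python
from typing import Callable, Sequence

class CommonSituationDialogueSystem:
    """A sketch of the computing module in claim 5: perception, cognition,
    working space, and generation modules composed into one reply pipeline."""

    def __init__(self,
                 perception: Callable,   # text pre-training model (encoder)
                 cognition: Callable,    # common sense map + inference model
                 workspace: Callable,    # adaptive selection over common sense codes
                 generation: Callable):  # text dialogue generation model
        self.perception = perception
        self.cognition = cognition
        self.workspace = workspace
        self.generation = generation

    def reply(self, history_text: str) -> str:
        h = self.perception(history_text)          # historical dialogue codes
        candidates = self.cognition(history_text)  # candidate common sense codes
        z = self.workspace(candidates, h)          # code consistent with the emotion of h
        return self.generation(z, h)               # common-situation reply text
```

With stub callables in place of the learned models, the pipeline threads the encodings through in the order the claim specifies: perception and cognition run on the raw history, the working space filters the cognition output against the perception output, and generation consumes both.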
CN202211422630.9A 2022-11-14 2022-11-14 Common-situation conversation method and system based on common-sense self-adaptive selection Pending CN115905485A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211422630.9A CN115905485A (en) 2022-11-14 2022-11-14 Common-situation conversation method and system based on common-sense self-adaptive selection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211422630.9A CN115905485A (en) 2022-11-14 2022-11-14 Common-situation conversation method and system based on common-sense self-adaptive selection

Publications (1)

Publication Number Publication Date
CN115905485A true CN115905485A (en) 2023-04-04

Family

ID=86473888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211422630.9A Pending CN115905485A (en) 2022-11-14 2022-11-14 Common-situation conversation method and system based on common-sense self-adaptive selection

Country Status (1)

Country Link
CN (1) CN115905485A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116680369A (en) * 2023-04-13 2023-09-01 华中师范大学 Co-emotion dialogue generation method and system
CN116680369B (en) * 2023-04-13 2023-12-15 华中师范大学 Co-emotion dialogue generation method and system
CN116521872A (en) * 2023-04-27 2023-08-01 华中师范大学 Combined recognition method and system for cognition and emotion and electronic equipment
CN116521872B (en) * 2023-04-27 2023-12-26 华中师范大学 Combined recognition method and system for cognition and emotion and electronic equipment

Similar Documents

Publication Publication Date Title
CN109785824B (en) Training method and device of voice translation model
CN109859736B (en) Speech synthesis method and system
CN107464559A (en) Joint forecast model construction method and system based on Chinese rhythm structure and stress
Robinson et al. Sequence-to-sequence modelling of f0 for speech emotion conversion
CN115905485A (en) Common-situation conversation method and system based on common-sense self-adaptive selection
Merdivan et al. Dialogue systems for intelligent human computer interactions
CN113987179B (en) Dialogue emotion recognition network model based on knowledge enhancement and backtracking loss, construction method, electronic equipment and storage medium
CN111966800B (en) Emotion dialogue generation method and device and emotion dialogue model training method and device
CN111667812A (en) Voice synthesis method, device, equipment and storage medium
Chi et al. Speaker role contextual modeling for language understanding and dialogue policy learning
CN111986687B (en) Bilingual emotion dialogue generation system based on interactive decoding
CN111128118A (en) Speech synthesis method, related device and readable storage medium
CN111382257A (en) Method and system for generating dialog context
CN114911932A (en) Heterogeneous graph structure multi-conversation person emotion analysis method based on theme semantic enhancement
CN112163080A (en) Generation type dialogue system based on multi-round emotion analysis
CN109800295A (en) The emotion session generation method being distributed based on sentiment dictionary and Word probability
Suzić et al. Style transplantation in neural network based speech synthesis
Wu et al. Rapid Style Adaptation Using Residual Error Embedding for Expressive Speech Synthesis.
CN117349427A (en) Artificial intelligence multi-mode content generation system for public opinion event coping
Lee et al. Many-to-many unsupervised speech conversion from nonparallel corpora
CN114360485A (en) Voice processing method, system, device and medium
CN114386426A (en) Gold medal speaking skill recommendation method and device based on multivariate semantic fusion
CN117592564A (en) Question-answer interaction method, device, equipment and medium
CN116108856B (en) Emotion recognition method and system based on long and short loop cognition and latent emotion display interaction
CN116701580A (en) Conversation emotion intensity consistency control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination