CN112861509A - Role analysis method and system based on multi-head attention mechanism - Google Patents

Role analysis method and system based on multi-head attention mechanism Download PDF

Info

Publication number
CN112861509A
Authority
CN
China
Prior art keywords
vector matrix
sentence
layer
text
vector
Prior art date
Legal status
Granted
Application number
CN202110180395.8A
Other languages
Chinese (zh)
Other versions
CN112861509B (en)
Inventor
代少兵
Current Assignee
Qingniuzhisheng Technology Co ltd
Original Assignee
Qingniuzhisheng Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Qingniuzhisheng Technology Co ltd filed Critical Qingniuzhisheng Technology Co ltd
Priority to CN202110180395.8A priority Critical patent/CN112861509B/en
Publication of CN112861509A publication Critical patent/CN112861509A/en
Application granted granted Critical
Publication of CN112861509B publication Critical patent/CN112861509B/en

Classifications

    • G06F 40/205 Natural language analysis: Parsing
    • G06F 40/216 Parsing using statistical methods
    • G06Q 30/01 Commerce: Customer relationship services
    • G10L 15/22 Speech recognition: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 15/26 Speech recognition: Speech to text systems
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to a role analysis method and system based on a multi-head attention mechanism. The method comprises: converting a first dialogue recording into a first text; generating a first vector matrix corresponding to the first text; inputting the first vector matrix into a pre-trained probability distribution analysis model to obtain the probability distribution [A, B] of each sentence vector contained in the first vector matrix; and comparing the magnitudes of A and B in the probability distribution. If A is larger than B, the sentence corresponding to the sentence vector is marked as content spoken by the service provider; if B is larger than A, the sentence is marked as content spoken by the served party. By exploiting the strong ability of the multi-head attention mechanism to learn long-distance relationships, the method can effectively improve the accuracy of role analysis.

Description

Role analysis method and system based on multi-head attention mechanism
Technical Field
The invention relates to the technical field of dialogue analysis, in particular to a role analysis method and system based on a multi-head attention mechanism.
Background
With the gradual development of the customer information service industry, the voice-interactive customer service model has become increasingly widespread. Analyzing the content of voice calls is a key link in further improving service quality. To accurately determine whether the service provider's conduct is standard and what the served party's appeal is, the dialogue content expressed by the service provider must be distinguished from the dialogue content expressed by the served party. ASR (automatic speech recognition) can also provide role analysis capability, but its effect is often poor.
Disclosure of Invention
To overcome the above drawbacks of the prior art, the present invention provides a role analysis method based on a multi-head attention mechanism and a role analysis system based on the multi-head attention mechanism.
The technical scheme adopted by the invention for solving the technical problems is as follows:
in one aspect, a role analysis method based on a multi-head attention mechanism is provided, wherein the role analysis method comprises the following steps:
converting a first dialogue recording into a first text, wherein the first dialogue recording is a recording, to be divided, of the content spoken by the service provider and the content spoken by the served party;
generating a first vector matrix corresponding to the first text, wherein the total number of sentence vectors contained in the first vector matrix is the same as the total number of sentences contained in the first text, and the sentence vectors contained in the first vector matrix correspond one-to-one to the sentences contained in the first text;
inputting the first vector matrix into a pre-trained probability distribution analysis model to obtain the probability distribution of each sentence vector contained in the first vector matrix, wherein the probability distribution is [A, B], A represents the probability that the sentence corresponding to the sentence vector is content spoken by the service provider, and B represents the probability that the sentence is content spoken by the served party;
comparing the magnitudes of A and B in the probability distribution; and
if A is larger than B, marking the sentence corresponding to the sentence vector as content spoken by the service provider; if B is larger than A, marking the sentence as content spoken by the served party.
In another aspect, a role analysis system based on the multi-head attention mechanism is provided, which is based on the above role analysis method and comprises:
a conversion unit, configured to convert a first dialogue recording into a first text, wherein the first dialogue recording is a recording, to be divided, of the content spoken by the service provider and the content spoken by the served party;
a generating unit, configured to generate a first vector matrix corresponding to the first text, wherein the total number of sentence vectors contained in the first vector matrix is the same as the total number of sentences contained in the first text, and the sentence vectors contained in the first vector matrix correspond one-to-one to the sentences contained in the first text;
a probability distribution analysis unit, configured to input the first vector matrix into a pre-trained probability distribution analysis model to obtain the probability distribution of each sentence vector contained in the first vector matrix, wherein the probability distribution is [A, B], A represents the probability that the sentence corresponding to the sentence vector is content spoken by the service provider, and B represents the probability that the sentence is content spoken by the served party;
a judging unit, configured to compare the magnitudes of A and B in the probability distribution; and
a marking unit, configured to mark the sentence corresponding to the sentence vector as content spoken by the service provider when A is larger than B, and to mark the sentence as content spoken by the served party when B is larger than A.
The invention has the beneficial effects that: the first dialogue recording is converted into a first text, a first vector matrix corresponding to the first text is generated, the first vector matrix is input into a pre-trained probability distribution analysis model to obtain the probability distribution of each sentence vector contained in the first vector matrix, and the magnitudes of A and B in the probability distribution are compared; if A is larger than B, the sentence corresponding to the sentence vector is marked as content spoken by the service provider; if B is larger than A, the sentence is marked as content spoken by the served party. By exploiting the strong ability of the multi-head attention mechanism to learn long-distance relationships, the method can effectively improve the accuracy of role analysis.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the present invention is further described below with reference to the accompanying drawings and embodiments. The drawings in the following description show only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without inventive effort:
FIG. 1 is a flow chart of a method provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a multi-head attention layer in a method according to an embodiment of the present invention;
FIG. 3 is a block diagram of a system provided in a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the following will clearly and completely describe the technical solutions in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without inventive step, are within the scope of the present invention.
Example one
The embodiment of the invention provides a role analysis method based on a multi-head attention mechanism, which comprises the following steps, as shown in FIG. 1 and FIG. 2:
s1: and converting the first dialogue record into a first text, wherein the first dialogue record is a record of the content spoken by the service providing party and the content spoken by the service receiving party to be divided.
S2: and generating a first vector matrix corresponding to the first text, wherein the total number of sentence vectors contained in the first vector matrix is the same as the total number of sentences contained in the first text, and the sentence vectors contained in the first vector matrix correspond to the sentences contained in the first text one by one.
S3: and inputting the first vector matrix into a pre-trained probability distribution analysis model to obtain the probability distribution of sentence vectors contained in the first vector matrix, wherein the probability distribution is [ A, B ], A represents the probability that sentences corresponding to the sentence vectors are the content spoken by the service provider, and B represents the probability that the sentences corresponding to the sentence vectors are the content spoken by the service provider.
S4: the magnitude relation of A, B in the probability distribution is determined.
S5: if A is larger than B, marking the sentence corresponding to the sentence vector as the content spoken by the service provider; if B is larger than A, marking the sentence corresponding to the sentence vector as the content spoken by the service party.
Further, before the first dialogue recording is converted into the first text, the method further comprises:
selecting N1 second dialogue recordings;
converting the N1 second dialogue recordings into text to obtain N2 second texts corresponding to the N1 second dialogue recordings;
marking, in the N2 second texts, the sentences spoken by the service provider and the sentences spoken by the served party;
generating, through a BERT model, N3 groups of second vector matrices corresponding to the N2 second texts, wherein each second vector matrix corresponds to N4 sentences in the second text and contains N5 sentence vectors corresponding to the N4 sentences;
performing a mean operation on each sentence vector of the second vector matrices to obtain N6 third vector matrices corresponding to the N3 groups of second vector matrices;
wherein the third vector matrices and the marking results corresponding to the second texts are respectively used as the input data and the output data for training the probability distribution analysis model.
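As a hedged illustration of this training-data preparation, the following sketch pairs each third vector matrix (one per dialogue, with sentences mean-pooled to fixed-size vectors) with its per-sentence role labels; the helper `encode_dialogue` and the 0/1 label encoding are assumptions introduced only for this example.

```python
import numpy as np

# Sketch of assembling (input, output) training pairs from N1 labelled
# dialogues. `encode_dialogue` is a hypothetical helper that would return
# the mean-pooled sentence vectors (the "third vector matrix") for one
# dialogue, e.g. a (num_sentences, 768) array produced via BERT.
def build_training_set(dialogues, labels_per_dialogue, encode_dialogue):
    inputs, outputs = [], []
    for dialogue, labels in zip(dialogues, labels_per_dialogue):
        third_matrix = encode_dialogue(dialogue)          # (num_sentences, 768)
        role_ids = np.array([0 if r == "service_provider" else 1
                             for r in labels])            # 0/1 encoding assumed
        inputs.append(third_matrix)
        outputs.append(role_ids)
    return inputs, outputs
```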
the probability distribution analysis model comprises the following steps:
the input layer is used for inputting the first vector matrix and the third vector matrix;
the first linear transformation layer and the second linear transformation layer of the multi-head attention layer are respectively used for carrying out linear transformation on a first vector matrix output by the input layer to obtain a fourth vector matrix with higher dimensionality and carrying out linear transformation on a fifth vector matrix obtained by splicing a plurality of fourth vector matrices to obtain a sixth vector matrix, and the dimensionality of the fourth vector matrix is N1*N2The number of splitting heads is N1Each head hidden layer has a size of N2The dimension of the sixth vector matrix is N2(ii) a The multi-head attention layer is used for inputting a sixth vector matrix to the normalization layer;
a normalization layer for normalizing the sixth vector matrix output by the multi-head attention layer;
a first fully connected layer with 256 inputs and 256 outputs;
a Dropout layer;
the second fully connected layer, input 256, output 2.
Furthermore, the loss function of the probability distribution analysis model is cross entropy, and training is performed by a gradient descent method.
Further, a first vector matrix is generated using a BERT model.
Further, in the normalization layer, Layer Normalization is used for normalization;
in the Dropout layer, the dropout rate is 50%;
in the first fully connected layer, the activation function is ReLU;
in the second fully connected layer, the activation function is softmax.
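To make the layer stack concrete, the following is a minimal PyTorch sketch of the probability distribution analysis model under the assumptions of this embodiment (768-dimensional BERT sentence vectors as input, 8 attention heads of hidden size 256 as specified later in this embodiment, 50% dropout, ReLU and softmax activations); the class and variable names are illustrative, and the sketch is not asserted to be the exact implementation of the invention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of the layer stack described above (input layer ->
# multi-head attention -> LayerNorm -> FC(256,256)+ReLU -> Dropout(0.5)
# -> FC(256,2)+softmax). Dimensions follow the numbers given in this
# embodiment; all class and variable names are illustrative assumptions.
class RoleClassifier(nn.Module):
    def __init__(self, in_dim=768, num_heads=8, head_dim=256, dropout=0.5):
        super().__init__()
        self.num_heads, self.head_dim = num_heads, head_dim
        proj_dim = num_heads * head_dim                  # 8 * 256 = 2048
        # First linear transformation: project to the higher-dimensional
        # per-head Q, K, V representations (the "fourth vector matrix").
        self.q_proj = nn.Linear(in_dim, proj_dim)
        self.k_proj = nn.Linear(in_dim, proj_dim)
        self.v_proj = nn.Linear(in_dim, proj_dim)
        # Second linear transformation: concatenated heads -> 256.
        self.out_proj = nn.Linear(proj_dim, head_dim)
        self.norm = nn.LayerNorm(head_dim)
        self.fc1 = nn.Linear(head_dim, 256)
        self.dropout = nn.Dropout(dropout)
        self.fc2 = nn.Linear(256, 2)

    def forward(self, x):                                # x: (num_sentences, 768)
        n = x.size(0)
        def split(t):                                    # -> (heads, n, head_dim)
            return t.view(n, self.num_heads, self.head_dim).transpose(0, 1)
        q, k, v = split(self.q_proj(x)), split(self.k_proj(x)), split(self.v_proj(x))
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5
        heads = F.softmax(scores, dim=-1) @ v            # (heads, n, head_dim)
        concat = heads.transpose(0, 1).reshape(n, -1)    # (n, 2048)
        h = self.norm(self.out_proj(concat))             # (n, 256)
        h = self.dropout(F.relu(self.fc1(h)))
        return F.softmax(self.fc2(h), dim=-1)            # (n, 2) = [A, B]
```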
In this embodiment, N1 to N6 are all positive integers; N1 may be 100 and N4 may be 10. The vector matrix is obtained through a BERT model (base version): if each sentence has 12 words, the resulting matrix size is 10 x 12 x 768; the 12 word vectors of each sentence are then averaged to obtain vector representations of the 10 sentences, giving a matrix of size 10 x 768 for subsequent training.
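A sketch of this vectorization step is given below, using the Hugging Face transformers library as one possible way to obtain base-BERT (768-dimensional) token vectors and mean-pool them per sentence; the checkpoint name bert-base-chinese is an assumption, and any base BERT checkpoint could be substituted.

```python
import torch
from transformers import BertTokenizer, BertModel

# Sketch: encode 10 sentences with a base BERT model and mean-pool the
# token vectors of each sentence to get a (10, 768) sentence matrix.
# The checkpoint name is an assumption; any base (768-d) BERT works.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")

def sentence_matrix(sentences):
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        token_vecs = bert(**enc).last_hidden_state            # (10, seq_len, 768)
    mask = enc["attention_mask"].unsqueeze(-1).float()         # ignore padding tokens
    summed = (token_vecs * mask).sum(dim=1)
    return summed / mask.sum(dim=1)                            # (10, 768)
```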
In this embodiment, preferably, the probability distribution analysis model is mainly based on an LSTM, and the multi-head attention layer is based on the multi-head attention mechanism in the Transformer architecture.
In this embodiment, N1 is 8 and N2 is 256. For K, Q and V in the multi-head attention layer, K is split into 8 heads, Q is split into 8 heads and V is split into 8 heads; attention is computed for each (K, Q, V) triple to obtain an attention output c, and the 8 outputs c are concatenated to obtain C (10 x 2048); C is then passed through the second linear transformation layer and transformed into a 10 x 256 matrix, which is input to the normalization layer.
In this embodiment, the output size of the second fully connected layer is 2, which corresponds to the number of candidate roles in the role analysis.
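Tying the above pieces together, a minimal training-loop sketch under the stated assumptions (cross-entropy objective, plain gradient descent, and the RoleClassifier and build_training_set sketches given earlier) might look as follows; the optimizer choice, learning rate and epoch count are illustrative and are not specified by the invention.

```python
import torch
import torch.nn as nn

# Sketch of training with cross entropy and gradient descent, reusing the
# RoleClassifier and build_training_set sketches above. Because the model
# already outputs softmax probabilities [A, B], the log of the output is
# fed to NLLLoss, which together amounts to the cross-entropy loss.
def train(model, inputs, outputs, epochs=10, lr=0.01):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = nn.NLLLoss()
    for _ in range(epochs):
        for matrix, role_ids in zip(inputs, outputs):
            x = torch.as_tensor(matrix, dtype=torch.float32)   # (n, 768)
            y = torch.as_tensor(role_ids, dtype=torch.long)    # (n,)
            probs = model(x)                                   # (n, 2)
            loss = criterion(torch.log(probs + 1e-9), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```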
In the method provided by this embodiment, the first dialogue recording is converted into a first text, a first vector matrix corresponding to the first text is generated, the first vector matrix is input into a pre-trained probability distribution analysis model to obtain the probability distribution of each sentence vector contained in the first vector matrix, and the magnitudes of A and B in the probability distribution are compared; if A is larger than B, the sentence corresponding to the sentence vector is marked as content spoken by the service provider; if B is larger than A, the sentence is marked as content spoken by the served party. By exploiting the strong ability of the multi-head attention mechanism to learn long-distance relationships, the method can effectively improve the accuracy of role analysis.
Example two
An embodiment of the present invention provides a role analysis system based on a multi-head attention mechanism; as shown in FIG. 3, the role analysis system comprises:
a conversion unit 10, configured to convert a first dialogue recording into a first text, wherein the first dialogue recording is a recording, to be divided, of the content spoken by the service provider and the content spoken by the served party;
a generating unit 11, configured to generate a first vector matrix corresponding to the first text, wherein the total number of sentence vectors contained in the first vector matrix is the same as the total number of sentences contained in the first text, and the sentence vectors contained in the first vector matrix correspond one-to-one to the sentences contained in the first text;
a probability distribution analysis unit 12, configured to input the first vector matrix into a pre-trained probability distribution analysis model to obtain the probability distribution of each sentence vector contained in the first vector matrix, wherein the probability distribution is [A, B], A represents the probability that the sentence corresponding to the sentence vector is content spoken by the service provider, and B represents the probability that the sentence is content spoken by the served party;
a judging unit 13, configured to compare the magnitudes of A and B in the probability distribution; and
a marking unit 14, configured to mark the sentence corresponding to the sentence vector as content spoken by the service provider when A is larger than B, and to mark the sentence as content spoken by the served party when B is larger than A.
Further, the role analysis system further comprises:
a selection unit 15, configured to select N1 second dialogue recordings;
the conversion unit is further configured to convert the N1 second dialogue recordings into text to obtain N2 second texts corresponding to the N1 second dialogue recordings;
the marking unit is further configured for a user to mark, in the N2 second texts, the sentences spoken by the service provider and the sentences spoken by the served party;
the generating unit is further configured to generate, through a BERT model, N3 groups of second vector matrices corresponding to the N2 second texts, wherein each second vector matrix corresponds to N4 sentences in the second text and contains N5 sentence vectors corresponding to the N4 sentences; and is further configured to perform a mean operation on each sentence vector of the second vector matrices to obtain N6 third vector matrices corresponding to the N3 groups of second vector matrices;
wherein the third vector matrices and the marking results corresponding to the second texts are respectively used as the input data and the output data for training the probability distribution analysis model.
the probability distribution analysis unit comprises the following components in sequence:
the input layer is used for inputting the first vector matrix and the third vector matrix;
the first linear transformation layer and the second linear transformation layer of the multi-head attention layer are respectively used for carrying out linear transformation on a first vector matrix output by the input layer to obtain a fourth vector matrix with higher dimensionality and carrying out linear transformation on a fifth vector matrix obtained by splicing a plurality of fourth vector matrices to obtain a sixth vector matrix, and the dimensionality of the fourth vector matrix is N1*N2The number of splitting heads is N1Each head hidden layer has a size of N2The dimension of the sixth vector matrix is N2(ii) a The multi-head attention layer is used for inputting a sixth vector matrix to the normalization layer;
a normalization layer for normalizing the sixth vector matrix output by the multi-head attention layer;
a first fully connected layer with 256 inputs and 256 outputs;
a Dropout layer;
the second fully connected layer, input 256, output 2.
Further, the loss function of the probability distribution analysis unit 12 is cross entropy, and training is performed by a gradient descent method.
Further, a first vector matrix is generated using a BERT model.
Further, in the normalization layer, Layer Normalization is used for normalization;
in the Dropout layer, the dropout rate is 50%;
in the first fully connected layer, the activation function is ReLU;
in the second fully connected layer, the activation function is softmax.
The system provided by this embodiment exploits the strong ability of the multi-head attention mechanism to learn long-distance relationships, and can effectively improve the accuracy of role analysis.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.

Claims (10)

1. A role analysis method based on a multi-head attention mechanism is characterized by comprising the following steps:
converting a first dialogue recording into a first text, wherein the first dialogue recording is a recording, to be divided, of the content spoken by the service provider and the content spoken by the served party;
generating a first vector matrix corresponding to the first text, wherein the total number of sentence vectors contained in the first vector matrix is the same as the total number of sentences contained in the first text, and the sentence vectors contained in the first vector matrix correspond one-to-one to the sentences contained in the first text;
inputting the first vector matrix into a pre-trained probability distribution analysis model to obtain the probability distribution of each sentence vector contained in the first vector matrix, wherein the probability distribution is [A, B], A represents the probability that the sentence corresponding to the sentence vector is content spoken by the service provider, and B represents the probability that the sentence is content spoken by the served party;
comparing the magnitudes of A and B in the probability distribution; and
if A is larger than B, marking the sentence corresponding to the sentence vector as content spoken by the service provider; if B is larger than A, marking the sentence as content spoken by the served party.
2. The role analysis method based on a multi-head attention mechanism according to claim 1, wherein before the step of converting the first dialogue recording into the first text, the role analysis method further comprises:
selecting N1 second dialogue recordings;
converting the N1 second dialogue recordings into text to obtain N2 second texts corresponding to the N1 second dialogue recordings;
marking, in the N2 second texts, the sentences spoken by the service provider and the sentences spoken by the served party;
generating, through a BERT model, N3 groups of second vector matrices corresponding to the N2 second texts, wherein each second vector matrix corresponds to N4 sentences in the second text and contains N5 sentence vectors corresponding to the N4 sentences;
performing a mean operation on each sentence vector of the second vector matrices to obtain N6 third vector matrices corresponding to the N3 groups of second vector matrices;
wherein the third vector matrices and the marking results corresponding to the second texts are respectively used as the input data and the output data for training the probability distribution analysis model;
the probability distribution analysis model comprises the following steps:
the input layer is used for inputting the first vector matrix and the third vector matrix;
the first linear transformation layer and the second linear transformation layer of the multi-head attention layer are respectively used for carrying out linear transformation on a first vector matrix output by the input layer to obtain a fourth vector matrix with higher dimensionality and carrying out linear transformation on a fifth vector matrix obtained by splicing a plurality of fourth vector matrices to obtain a sixth vector matrix, and the dimensionality of the fourth vector matrix is N1*N2The number of splitting heads is N1Each head hidden layer has a size of N2The dimension of the sixth vector matrix is N2(ii) a The multi-head attention layer is used for inputting a sixth vector matrix to the normalization layer;
a normalization layer for normalizing the sixth vector matrix output by the multi-head attention layer;
a first fully connected layer with 256 inputs and 256 outputs;
a Dropout layer;
the second fully connected layer, input 256, output 2.
3. The role analysis method based on a multi-head attention mechanism according to claim 2, wherein the loss function of the probability distribution analysis model is cross entropy, and training is performed by a gradient descent method.
4. The multi-head attention mechanism-based role analysis method according to claim 1, wherein a BERT model is used to generate the first vector matrix.
5. The role analysis method based on a multi-head attention mechanism according to claim 2, wherein:
in the normalization layer, Layer Normalization is used for normalization;
in the Dropout layer, the dropout rate is 50%;
in the first fully connected layer, the activation function is ReLU;
in the second fully connected layer, the activation function is softmax.
6. A role analysis system based on a multi-head attention mechanism, which is based on the role analysis method based on a multi-head attention mechanism according to any one of claims 1 to 5, and which comprises:
a conversion unit, configured to convert a first dialogue recording into a first text, wherein the first dialogue recording is a recording, to be divided, of the content spoken by the service provider and the content spoken by the served party;
a generating unit, configured to generate a first vector matrix corresponding to the first text, wherein the total number of sentence vectors contained in the first vector matrix is the same as the total number of sentences contained in the first text, and the sentence vectors contained in the first vector matrix correspond one-to-one to the sentences contained in the first text;
a probability distribution analysis unit, configured to input the first vector matrix into a pre-trained probability distribution analysis model to obtain the probability distribution of each sentence vector contained in the first vector matrix, wherein the probability distribution is [A, B], A represents the probability that the sentence corresponding to the sentence vector is content spoken by the service provider, and B represents the probability that the sentence is content spoken by the served party;
a judging unit, configured to compare the magnitudes of A and B in the probability distribution; and
a marking unit, configured to mark the sentence corresponding to the sentence vector as content spoken by the service provider when A is larger than B, and to mark the sentence as content spoken by the served party when B is larger than A.
7. The role analysis system based on a multi-head attention mechanism according to claim 6, further comprising:
a selection unit, configured to select N1 second dialogue recordings;
the conversion unit is further configured to convert the N1 second dialogue recordings into text to obtain N2 second texts corresponding to the N1 second dialogue recordings;
the marking unit is further configured for a user to mark, in the N2 second texts, the sentences spoken by the service provider and the sentences spoken by the served party;
the generating unit is further configured to generate, through a BERT model, N3 groups of second vector matrices corresponding to the N2 second texts, wherein each second vector matrix corresponds to N4 sentences in the second text and contains N5 sentence vectors corresponding to the N4 sentences; and is further configured to perform a mean operation on each sentence vector of the second vector matrices to obtain N6 third vector matrices corresponding to the N3 groups of second vector matrices;
wherein the third vector matrices and the marking results corresponding to the second texts are respectively used as the input data and the output data for training the probability distribution analysis model;
the probability distribution analysis unit comprises the following components in sequence:
the input layer is used for inputting the first vector matrix and the third vector matrix;
the first linear transformation layer and the second linear transformation layer of the multi-head attention layer are respectively used for carrying out linear transformation on a first vector matrix output by the input layer to obtain a fourth vector matrix with higher dimensionality and carrying out linear transformation on a fifth vector matrix obtained by splicing a plurality of fourth vector matrices to obtain a sixth vector matrix, and the dimensionality of the fourth vector matrix is N1*N2The number of splitting heads is N1Each head hidden layer has a size of N2The dimension of the sixth vector matrix is N2(ii) a The multi-head attention layer is used for inputting a sixth vector matrix to the normalization layer;
a normalization layer for normalizing the sixth vector matrix output by the multi-head attention layer;
a first fully connected layer with 256 inputs and 256 outputs;
a Dropout layer;
the second fully connected layer, input 256, output 2.
8. The role analysis system based on a multi-head attention mechanism according to claim 7, wherein the loss function of the probability distribution analysis unit is cross entropy, and training is performed by a gradient descent method.
9. The multi-head attention mechanism-based role analysis system according to claim 6, wherein a BERT model is used to generate the first vector matrix.
10. The role analysis system based on a multi-head attention mechanism according to claim 7, wherein:
in the normalization layer, Layer Normalization is used for normalization;
in the Dropout layer, the dropout rate is 50%;
in the first fully connected layer, the activation function is ReLU;
in the second fully connected layer, the activation function is softmax.
CN202110180395.8A 2021-02-08 2021-02-08 Role analysis method and system based on multi-head attention mechanism Active CN112861509B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110180395.8A CN112861509B (en) 2021-02-08 2021-02-08 Role analysis method and system based on multi-head attention mechanism

Publications (2)

Publication Number Publication Date
CN112861509A true CN112861509A (en) 2021-05-28
CN112861509B CN112861509B (en) 2023-05-12

Family

ID=75989486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110180395.8A Active CN112861509B (en) 2021-02-08 2021-02-08 Role analysis method and system based on multi-head attention mechanism

Country Status (1)

Country Link
CN (1) CN112861509B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683661A (en) * 2015-11-05 2017-05-17 阿里巴巴集团控股有限公司 Role separation method and device based on voice
US20170161256A1 (en) * 2015-12-04 2017-06-08 Mitsubishi Electric Research Laboratories, Inc. Method and System for Role Dependent Context Sensitive Spoken and Textual Language Understanding with Neural Networks
CN107766565A (en) * 2017-11-06 2018-03-06 广州杰赛科技股份有限公司 Conversational character differentiating method and system
CN108074576A (en) * 2017-12-14 2018-05-25 讯飞智元信息科技有限公司 Inquest the speaker role's separation method and system under scene
US20200342860A1 (en) * 2019-04-29 2020-10-29 Microsoft Technology Licensing, Llc System and method for speaker role determination and scrubbing identifying information
CN112131879A (en) * 2019-06-25 2020-12-25 普天信息技术有限公司 Relationship extraction system, method and device
CN112270169A (en) * 2020-10-14 2021-01-26 北京百度网讯科技有限公司 Dialogue role prediction method and device, electronic equipment and storage medium
CN112270167A (en) * 2020-10-14 2021-01-26 北京百度网讯科技有限公司 Role labeling method and device, electronic equipment and storage medium
CN112182231A (en) * 2020-12-01 2021-01-05 佰聆数据股份有限公司 Text processing method, system and storage medium based on sentence vector pre-training model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NICOLE NOVIELLI: "The Role of Affect Analysis in Dialogue Act Identification", IEEE Transactions on Affective Computing *
XIA DENGSHAN (夏登山): "Role assignment/recognition in discourse and the analysis of (im)politeness", Journal of Beijing Jiaotong University (Social Sciences Edition) *
ZHU CHENGUANG (朱晨光): "Machine Reading Comprehension", 30 April 2020, Beijing: China Machine Press *

Also Published As

Publication number Publication date
CN112861509B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
CN111477216B (en) Training method and system for voice and meaning understanding model of conversation robot
CN110853649A (en) Label extraction method, system, device and medium based on intelligent voice technology
CA3011397A1 (en) Natural expression processing method, processing and response method, device and system
CN111883115A (en) Voice flow quality inspection method and device
US9401145B1 (en) Speech analytics system and system and method for determining structured speech
CN110767213A (en) Rhythm prediction method and device
CN111177350A (en) Method, device and system for forming dialect of intelligent voice robot
CN115292461B (en) Man-machine interaction learning method and system based on voice recognition
CN113158671B (en) Open domain information extraction method combined with named entity identification
CN114818649A (en) Service consultation processing method and device based on intelligent voice interaction technology
CN111489743A (en) Operation management analysis system based on intelligent voice technology
CN112116907A (en) Speech recognition model establishing method, speech recognition device, speech recognition equipment and medium
CN115617955A (en) Hierarchical prediction model training method, punctuation symbol recovery method and device
CN115827854A (en) Voice abstract generation model training method, voice abstract generation method and device
CN112053681B (en) Telephone customer service quality scoring method and system for ASR and NLU combined training
CN112861509B (en) Role analysis method and system based on multi-head attention mechanism
CN115512691A (en) Method for judging echo based on semantic level in man-machine continuous conversation
CN115240712A (en) Multi-mode-based emotion classification method, device, equipment and storage medium
CN115270818A (en) Intention identification method and device, storage medium and computer equipment
CN114579751A (en) Emotion analysis method and device, electronic equipment and storage medium
CN114239565A (en) Deep learning-based emotion reason identification method and system
CN112002306B (en) Speech class recognition method and device, electronic equipment and readable storage medium
CN114708848A (en) Method and device for acquiring size of audio and video file
CN112542154A (en) Text conversion method and device, computer readable storage medium and electronic equipment
CN111883133A (en) Customer service voice recognition method, customer service voice recognition device, customer service voice recognition server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant