WO2021139108A1 - 情绪智能识别方法、装置、电子设备及存储介质 (Emotional intelligence recognition method and apparatus, electronic device, and storage medium) - Google Patents

情绪智能识别方法、装置、电子设备及存储介质 (Emotional intelligence recognition method and apparatus, electronic device, and storage medium) Download PDF

Info

Publication number
WO2021139108A1
Authority
WO
WIPO (PCT)
Prior art keywords
data set
text
emotional state
emotional
probability distribution
Prior art date
Application number
PCT/CN2020/098963
Other languages
English (en)
French (fr)
Inventor
蒋江涛
马骏
王少军
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021139108A1 publication Critical patent/WO2021139108A1/zh

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, specially adapted for particular use
    • G10L25/51 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, specially adapted for particular use, for comparison or discrimination
    • G10L25/63 — Speech or voice analysis techniques specially adapted for comparison or discrimination, for estimating an emotional state
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/24 — Classification techniques
    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 — Classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 — Speech recognition
    • G10L15/26 — Speech to text systems

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to an emotional intelligence recognition method, device, electronic equipment, and computer-readable storage medium.
  • This application provides an emotional intelligence recognition method, device, electronic equipment, and computer-readable storage medium, the main purpose of which is to provide a solution for recognizing user emotions based on user voice data.
  • an emotional intelligence recognition method includes: acquiring a user's voice data set and converting the voice data set into a text data set; deleting, replacing and enhancing characters in the text data set through preset cleaning rules to obtain a standard text data set; performing text information feature extraction on the standard text data set to obtain a text sequence vector set; and inputting the text sequence vector set into a pre-built emotion recognition model to calculate the probability distribution set of the emotional states corresponding to the text sequence vector set, calculating the maximized emotional state in that probability distribution set with a maximum score algorithm, and recognizing the user's emotion according to the maximized emotional state.
  • the present application also provides an electronic device including a memory and a processor; the memory stores an emotional intelligence recognition program that can run on the processor, and when the program is executed by the processor the following steps are implemented: acquiring the user's voice data set and converting the voice data set into a text data set; deleting, replacing and enhancing characters in the text data set through preset cleaning rules to obtain a standard text data set; performing text information feature extraction on the standard text data set to obtain a text sequence vector set; and inputting the text sequence vector set into a pre-built emotion recognition model to calculate the probability distribution set of the emotional states corresponding to the text sequence vector set, calculating the maximized emotional state in that probability distribution set with a maximum score algorithm, and recognizing the user's emotion according to the maximized emotional state.
  • the present application also provides a computer-readable storage medium on which an emotional intelligence recognition program is stored; the emotional intelligence recognition program can be executed by one or more processors to implement the same steps as described above.
  • this application also provides an emotional intelligence recognition device, including:
  • the voice data conversion module is used to obtain the user's voice data set, and convert the voice data set into a text data set;
  • the text data cleaning module is used to delete, replace and enhance the characters in the text data set through preset cleaning rules to obtain a standard text data set;
  • the feature extraction module is used to perform text information feature extraction on the standard text data set to obtain a text sequence vector set
  • the emotion recognition module is used to input the text sequence vector set into a pre-built emotion recognition model to calculate the probability distribution set of the emotional states corresponding to the text sequence vector set, to calculate the maximized emotional state in that probability distribution set with the maximum score algorithm, and to identify the user's emotion according to the maximized emotional state (a toy sketch of how these modules chain together follows this list).
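  • The following is a toy, hedged sketch of how the four modules above could be chained; every function name in it is a hypothetical placeholder introduced for illustration and is not an API or identifier from this application.

```python
# A high-level, hedged sketch of the four-step pipeline formed by the modules above.
# The stub functions are illustrative placeholders only; each would be replaced by
# the corresponding real module (ASR, cleaning, BERT features, LSTM+CRF).

def convert_speech_to_text(voice_data_set):           # voice data conversion module (ASR)
    return ["你们的服务太慢了了", "账号是123456"]

def clean_text(text_data_set):                         # text data cleaning module
    return [t.replace("了了", "了").replace("123456", "<NUM>") for t in text_data_set]

def extract_features(standard_text):                   # feature extraction module (BERT-style)
    return [[float(len(t))] for t in standard_text]    # toy stand-in "vectors"

def emotion_probabilities(text_vectors):               # emotion recognition module (LSTM + CRF)
    return {"negative": 0.7, "neutral": 0.2, "positive": 0.1}

voice = object()                                       # stands in for a recorded call
dist = emotion_probabilities(extract_features(clean_text(convert_speech_to_text(voice))))
print(max(dist, key=dist.get))                         # maximized emotional state -> "negative"
```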
  • the emotional intelligence recognition method, device, electronic equipment and computer-readable storage medium proposed in this application acquire the user's voice data set and clean it, thereby mitigating the problem that noise, excessive speech speed, dialects and the like in the voice data cause the converted text to contain typos, missing characters and repeated characters or words; further, this application uses a pre-built emotion recognition model to recognize the user's emotion at the time, which further reduces the difficulty of speech recognition.
  • FIG. 1 is a schematic flowchart of an emotional intelligence recognition method provided by an embodiment of this application
  • FIG. 2 is a schematic diagram of the internal structure of an electronic device provided by an embodiment of the application.
  • FIG. 3 is a schematic diagram of modules of an emotional intelligence recognition device provided by an embodiment of the application.
  • This application provides an emotional intelligence recognition method.
  • FIG. 1 it is a schematic flowchart of an emotional intelligence recognition method provided by an embodiment of this application.
  • the method can be executed by a device, and the device can be implemented by software and/or hardware.
  • the emotional intelligence recognition method includes:
  • the user's voice data set is collected from calls between the enterprise's human customer-service agents and the user.
  • this application uses automatic speech recognition (ASR) technology to convert the voice data set into text data. The ASR model is composed of one encoder and two decoders, which are used to extract common inter-domain features between voice data and text data and, at the same time, to learn from unpaired voice data and text data.
  • speech and text are different data types: speech is a sequence of continuous vectors, while text is a sequence of discrete symbols. In addition, in automatic speech recognition the speech sequence is longer than the corresponding text sequence. Therefore, the input layer of the encoder uses an embedding layer g(·) for the text input, mapping between the discrete id of a character y and its continuous vector representation g(y). Further, the present application feeds the voice data set into a pyramidal bidirectional long short-term memory network f(·) to shorten the length of the voice data (a minimal sketch of such a network follows this passage).
  • in the encoder-decoder network of the ASR converter, the auto-encoding of text data not only enhances the intermediate representation of the text data but, when this application normalizes these representation forms, also enhances the intermediate representation of the voice data, so that the intermediate-domain representations of voice and text data become more similar to each other during training.
  • converting the voice data set into text data in this application includes: performing pre-emphasis and windowed framing on the voice data set to obtain a standard voice data set; calculating the inter-domain loss of the standard voice data set with a pre-built loss function; computing the optimal parameters of the inter-domain loss with a stochastic gradient algorithm; updating the standard voice data set according to the optimal parameters to obtain an optimal voice data set; and outputting the text data set corresponding to the optimal voice data set through a regression algorithm.
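  • The following is a minimal sketch, under assumed dimensions, of a pyramidal bidirectional LSTM encoder of the kind referred to as f(·) above, which halves the time resolution at every layer to shorten long speech feature sequences; the layer count, feature size and hidden size are illustrative assumptions, not parameters from this application.

```python
# A minimal sketch (assumed architecture) of a pyramidal bidirectional LSTM encoder
# that concatenates adjacent frames before each layer, halving the sequence length.
import torch
import torch.nn as nn

class PyramidalBLSTM(nn.Module):
    def __init__(self, input_dim=80, hidden_dim=256, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        dim = input_dim
        for _ in range(num_layers):
            self.layers.append(nn.LSTM(dim * 2, hidden_dim, batch_first=True,
                                       bidirectional=True))
            dim = hidden_dim * 2          # bidirectional output feeds the next layer

    def forward(self, x):                 # x: (batch, time, input_dim)
        for lstm in self.layers:
            b, t, d = x.shape
            if t % 2:                     # drop the last frame if time is odd
                x = x[:, :-1, :]
                t -= 1
            x = x.reshape(b, t // 2, d * 2)   # concatenate adjacent frames
            x, _ = lstm(x)
        return x                          # (batch, time / 2**num_layers, 2*hidden_dim)

feats = torch.randn(4, 200, 80)           # e.g. 4 utterances of 200 frames of 80-dim features
enc = PyramidalBLSTM()
print(enc(feats).shape)                   # torch.Size([4, 25, 512])
```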
  • the pre-emphasis described in this application boosts the high-frequency part of the signal so that the spectrum becomes flat across the whole band from low to high frequencies and the signal-to-noise ratio stays roughly uniform; this eases subsequent processing without losing the audio signal and, by raising the energy of the speech portion, also suppresses the influence of random noise and DC drift. In an embodiment of this application, a digital filter is selected to pre-emphasize the sound frequency, with the formula H(z) = 1 − μz⁻¹, where z is the sound frequency and μ is close to 1.
  • windowed framing splits the voice data into frames, relying on the fact that within a very small time range the characteristics of voice data are basically unchanged, i.e. relatively stable. In an embodiment of this application, the voice data set is divided into frames of 64 ms, corresponding to a frame length of 512 samples. Further, this application marks part of the frame data as silent or not (0 for not silent, 1 for silent), and adjacent frames overlap by about 0 to 0.5 frame lengths, which prevents signal loss.
  • like framing, windowing divides a segment of audio signal into several short-time segments, i.e. it makes the voice data "short-time". The effect of windowing is not limited to this: after framing, the signal is cut off abruptly where it was continuous (at the end of each frame), which is called the truncation effect of the audio frame. The audio signal therefore has to be windowed so that the signal of each frame decays smoothly to zero, which is equivalent to adding a gentle, non-abrupt slope at both ends of the frame. In short, windowing means multiplying the audio signal by a window function.
  • in an embodiment of this application, the selected window function is that of the Hamming window, w(n) = 0.54 − 0.46·cos(2πn/(N − 1)) for 0 ≤ n ≤ N − 1, where N is the window length and n indexes the audio signal (a short numerical sketch of pre-emphasis, framing and windowing follows this passage).
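  • The following is a small, hedged illustration of the pre-emphasis filter H(z) = 1 − μz⁻¹ and of Hamming-window framing with the parameters stated above (512-sample frames, up to 0.5-frame overlap); the 8 kHz sample rate implied by 64 ms / 512 samples and the value μ = 0.97 are assumptions for illustration only.

```python
# A minimal sketch of pre-emphasis and Hamming-window framing with assumed parameters.
import numpy as np

def preemphasize(signal, mu=0.97):
    # y[n] = x[n] - mu * x[n-1]  (boosts the high-frequency part of the spectrum)
    return np.append(signal[0], signal[1:] - mu * signal[:-1])

def frame_and_window(signal, frame_len=512, overlap=256):
    step = frame_len - overlap
    n_frames = 1 + max(0, (len(signal) - frame_len) // step)
    window = np.hamming(frame_len)        # w[n] = 0.54 - 0.46*cos(2*pi*n/(N-1))
    frames = np.stack([signal[i * step:i * step + frame_len] for i in range(n_frames)])
    return frames * window                # smooth each frame toward zero at its ends

speech = np.random.randn(8000)            # one second of fake 8 kHz audio
frames = frame_and_window(preemphasize(speech))
print(frames.shape)                       # (30, 512) with 50% overlap
```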
  • the regression algorithm described in this application can be written as softmax(x_j) = e^{x_j} / Σ_k e^{x_k}, where x_j denotes the output text, x_k denotes the text data output mode, k denotes the total amount of text data, and e denotes the base of the natural logarithm (an infinite non-repeating decimal); a small numerical illustration follows this passage.
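  • The following is a hedged illustration of a softmax-style regression output of the form e^{x_j} / Σ_k e^{x_k}; the variable names and scores are placeholders.

```python
# A small sketch of the softmax normalization used as the regression output above.
import numpy as np

def softmax(x):
    x = np.asarray(x, dtype=float)
    x = x - x.max()                  # subtract the max for numerical stability
    exp_x = np.exp(x)
    return exp_x / exp_x.sum()

scores = [2.0, 1.0, 0.1]             # e.g. scores for three candidate output texts
print(softmax(scores))               # probabilities that sum to 1
```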
  • the standard text data set is obtained after deleting, replacing and adding characters in the text data set through the preset cleaning rules.
  • because repeated characters or words appear in the text data set, this application deletes them with a data-deletion rule, i.e. consecutively repeated characters or words are removed. To avoid wrong deletions, this application collects a dictionary of commonly used reduplicated characters and words to ensure the correctness of the deletion operation.
  • further, analysis of real intelligent-customer-service data shows that most of the digits and letters occurring in a phone call have nothing to do with the expressed emotion, so a replacement operation is needed: a placeholder is used to indicate that a span is a string of digits or letters. This application uses a dictionary of digits and letters that do express emotion to ensure the correctness of the replacement.
  • after the deletion and replacement operations, the text data set may still contain typos and missing characters, and the samples may be unbalanced; a data augmentation method is adopted to address these problems, as follows (a small sketch of these rules follows this list):
  • a. for typos and missing characters: this application randomly deletes a certain proportion of the characters in a text string according to a preset strategy and randomly replaces a certain proportion of characters with homophones, so that the subsequent emotion recognition model tolerates typos and missing characters better and infers the emotion of the text by learning its context;
  • b. for unbalanced samples: this application balances the samples by controlling the augmentation parameters, i.e. the class with fewer samples is augmented with a larger proportion so that the training samples become balanced; this application also performs data augmentation with an open translation system, e.g. translating Chinese into English and then back into Chinese to obtain different expressions, thereby expanding the training corpus and balancing the samples.
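  • The following is a minimal sketch, under assumed rules and ratios, of the cleaning and augmentation operations described above (collapsing consecutive repeats, replacing digit/letter strings with a placeholder, and random deletion or homophone substitution); the regular expressions, ratios and the tiny homophone dictionary are illustrative assumptions, not the application's actual dictionaries.

```python
# A hedged sketch of the text cleaning and augmentation rules described above.
import random
import re

def clean(text):
    text = re.sub(r'(.)\1+', r'\1', text)           # collapse consecutively repeated characters
    text = re.sub(r'[0-9A-Za-z]+', '<NUM>', text)    # placeholder for digit/letter strings
    return text

def augment(text, del_ratio=0.1, swap_ratio=0.1, homophones=None):
    homophones = homophones or {'在': '再'}          # toy homophone dictionary (assumption)
    chars = []
    for ch in text:
        if random.random() < del_ratio:
            continue                                 # simulate a missing character
        if random.random() < swap_ratio:
            ch = homophones.get(ch, ch)              # simulate a typo via a homophone
        chars.append(ch)
    return ''.join(chars)

print(clean('好好好的12345'))       # -> '好的<NUM>'
print(augment('今天天气不错'))
```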
  • because, when text is used to analyze a user's emotion, a contextual representation of the text characterizes the current user's emotion better, the embodiment of the present application preferentially uses a pre-built pre-trained language model (Bidirectional Encoder Representations from Transformers, BERT) to extract text information features from the standard text data set and obtain a text sequence vector set, thereby representing the contextual information of the text.
  • the BERT model described in this application includes a bidirectional Transformer encoder, a "masked language model" and "next sentence prediction". The attention mechanism in the bidirectional Transformer encoder is used to model the standard text data set, and the word-level and sentence-level sequence vector representations in the standard text data set are captured through the "masked language model" and "next sentence prediction" to obtain the text sequence vector set.
  • the attention mechanism can be written as Attention(Q, K, V) = softmax(QKᵀ / √d_k)·V, where Q, K and V denote the word vector matrices and d_k denotes the dimension of the input vectors. Its core idea is to compute, for each word in the text, its relation to all the words in the sentence, so that these pairwise relations expose the relevance and importance of the different words in the text. This application then uses these relations to adjust the importance (weight) of each word and obtain a new representation of each word. The new representation captures not only the word itself but also its relations to the other words, so it is a more global expression than a plain word vector (a small numerical sketch follows this passage).
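  • The following is a small numerical sketch of scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)·V, with random matrices standing in for the word vector matrices.

```python
# A hedged sketch of scaled dot-product attention with illustrative shapes.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # pairwise word-word relevance
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                         # re-weighted word representations

seq_len, d_k = 5, 64                                           # e.g. a 5-word sentence
Q = K = V = np.random.randn(seq_len, d_k)                      # self-attention over one sequence
print(attention(Q, K, V).shape)                                # (5, 64)
```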
  • the "Masked language model” is used to train deep two-way language representation vectors. This method adopts a very straightforward method, that is, by covering some text in the text, the encoder can predict the text. This application Randomly cover 15% of the text as a training sample.
  • the "next sentence prediction” refers to pre-training a two-class model for learning the relationship between sentences in the text.
  • a character sequence containing n characters Char (char 1 ,char 2 ...,char n ), where char n is a word vector with a dimension of d, which is input into the pre-built BERT model
  • char n is a word vector with a dimension of d
  • CharF i is a vector containing the word sequence and the above information of the word sequence
  • the word sequence is read in the reverse direction using the BERT model, and the word sequence and the following information of the word sequence are represented as CharB i
  • CharF i and CharB i is connected to form a word representation Wd containing word sequence and context information, and the text sequence vector is extracted in the same way as:
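  • The following is a hedged sketch of extracting contextual token vectors with a pre-trained BERT model through the Hugging Face transformers library; the checkpoint name "bert-base-chinese" and the example sentence are assumptions, since the application does not name a specific pre-trained model.

```python
# A hedged sketch of obtaining a text sequence vector set from a pre-trained BERT model.
from transformers import BertModel, BertTokenizer
import torch

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")   # assumed checkpoint
model = BertModel.from_pretrained("bert-base-chinese")

sentence = "今天的服务让我非常不满意"                               # an illustrative customer utterance
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)        # (1, sequence_length, 768) contextual token vectors
```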
  • the pre-built emotion recognition model in the preferred embodiment of the present application includes a Conditional Random Field (CRF) model and a Long Short-Term Memory network (LSTM).
  • the present application computes the score matrix of the input text sequence vector set with the LSTM and, from the score matrix, obtains the distribution of the emotional states corresponding to the text sequence vector set; based on this distribution, the CRF computes the probability distribution set of the emotional states, the maximum score algorithm computes the maximized emotional state in that probability distribution set, and the user's emotion is identified according to the maximized emotional state.
  • the score of the text sequence vector set is calculated as S(Wd, y) = Σ_{j=0..n} A_{y_j, y_{j+1}} + Σ_{j=1..n} p_{j, y_j}, where S(Wd, y) denotes the output score of an emotional-state sequence, y denotes the emotional-state text sequence, n denotes the length of the text sequence, A denotes the transition score matrix, and p denotes the probability (emission) value. Here y_0 marks the start of the sequence and y_{n+1} marks its end, so the transition score matrix A has size k + 2 (a small numerical sketch follows this passage).
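  • The following is a small, hedged numerical example of the sequence score S(Wd, y) = Σ A_{y_j, y_{j+1}} + Σ p_{j, y_j}, with a random emission matrix standing in for the LSTM output and start/end tags making the transition matrix of size k + 2; all values are illustrative placeholders.

```python
# A hedged sketch of the LSTM-CRF sequence score described above.
import numpy as np

k = 3                                    # e.g. 3 emotional states: negative / neutral / positive
START, END = k, k + 1                    # the two extra tags that make A of size k+2
n = 4                                    # length of the text sequence

P = np.random.randn(n, k)                # emission scores from the LSTM, one row per position
A = np.random.randn(k + 2, k + 2)        # transition scores between consecutive tags

def sequence_score(P, A, y):
    tags = [START] + list(y) + [END]     # y_0 marks sequence start, y_{n+1} sequence end
    trans = sum(A[tags[j], tags[j + 1]] for j in range(len(tags) - 1))
    emit = sum(P[j, y_j] for j, y_j in enumerate(y))
    return trans + emit

y = [0, 1, 1, 2]                          # a candidate emotional-state sequence
print(sequence_score(P, A, y))
```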
  • the probability distribution set of the emotional states is calculated as p(y | Wd) = e^{S(Wd, y)} / Σ_{ỹ ∈ Y_Wd} e^{S(Wd, ỹ)}, where p(y | Wd) denotes the probability of an emotional state, Y_Wd denotes all possible emotional categories corresponding to the text sequence y, and e denotes the base of the natural logarithm (an infinite non-repeating decimal).
  • the maximum score algorithm is y* = argmax_{ỹ ∈ Y_Wd} S(Wd, ỹ), where y* denotes the maximized emotional state in the probability distribution set of the target text sequence set (a brute-force sketch follows this passage).
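  • The following is a hedged, brute-force illustration of the probability distribution p(y | Wd) and of the maximum-score step y*; a practical CRF implementation would use the forward algorithm and Viterbi decoding rather than enumerating Y_Wd, which is only feasible in a toy example like this one.

```python
# A brute-force sketch of p(y | Wd) = e^{S(Wd,y)} / sum_{y'} e^{S(Wd,y')} and
# y* = argmax_{y'} S(Wd, y'), with random scores as illustrative placeholders.
import itertools
import numpy as np

k, n = 3, 4                                   # 3 emotional states, a 4-token sequence
P = np.random.randn(n, k)                     # emission scores from the LSTM
A = np.random.randn(k + 2, k + 2)             # transitions, incl. start tag k and end tag k+1

def S(y):                                     # S(Wd, y) as in the previous sketch
    tags = [k] + list(y) + [k + 1]
    return (sum(A[tags[j], tags[j + 1]] for j in range(len(tags) - 1))
            + sum(P[j, yj] for j, yj in enumerate(y)))

Y_Wd = list(itertools.product(range(k), repeat=n))           # all candidate label sequences
scores = np.array([S(y) for y in Y_Wd])
probs = np.exp(scores - scores.max()); probs /= probs.sum()  # p(y | Wd)
y_star = Y_Wd[int(np.argmax(scores))]                        # maximized emotional state
print(y_star, probs.max())
```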
  • the application also provides an electronic device.
  • FIG. 2 it is a schematic diagram of the internal structure of an electronic device provided by an embodiment of this application.
  • the electronic device 1 may be a PC (Personal Computer, personal computer), or a terminal device such as a smart phone, a tablet computer, or a portable computer, or a server.
  • the electronic device 1 at least includes a memory 11, a processor 12, a communication bus 13, and a network interface 14.
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, and the like.
  • the memory 11 may be an internal storage unit of the electronic device 1 in some embodiments, such as a hard disk of the electronic device 1.
  • the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in hard disk equipped on the electronic device 1, a smart memory card (Smart Media Card, SMC), and a Secure Digital (SD) Card, Flash Card, etc.
  • the memory 11 may also include both an internal storage unit of the electronic device 1 and an external storage device.
  • the memory 11 can be used not only to store application software and various data installed in the electronic device 1, such as codes of the emotional intelligence recognition program 01, etc., but also to temporarily store data that has been output or will be output.
  • the processor 12 may in some embodiments be a central processing unit (CPU), controller, microcontroller, microprocessor or other data-processing chip, and is used to run the program code stored in the memory 11 or to process data, for example to execute the emotional intelligence recognition program 01.
  • the communication bus 13 is used to realize the connection and communication between these components.
  • the network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is generally used to establish a communication connection between the electronic device 1 and other electronic devices.
  • the electronic device 1 may also include a user interface.
  • the user interface may include a display (Display) and an input unit such as a keyboard (Keyboard).
  • the optional user interface may also include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, etc.
  • the display can also be appropriately called a display screen or a display unit, which is used to display the information processed in the electronic device 1 and to display a visualized user interface.
  • Figure 2 only shows the electronic device 1 with the components 11-14 and the emotional intelligence recognition program 01; those skilled in the art will understand that the structure shown in Figure 2 does not limit the electronic device 1, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
  • the emotional intelligence recognition program 01 is stored in the memory 11; when the processor 12 executes the emotional intelligence recognition program 01 stored in the memory 11, the following steps are implemented:
  • Step 1 Obtain the user's voice data set, and convert the voice data set into text data.
  • the user's voice data set is collected from calls between the enterprise's human customer-service agents and the user.
  • this application uses automatic speech recognition (ASR) technology to convert the voice data set into text data. The ASR model is composed of one encoder and two decoders, which are used to extract common inter-domain features between voice data and text data and, at the same time, to learn from unpaired voice data and text data.
  • speech and text are different data types: speech is a sequence of continuous vectors, while text is a sequence of discrete symbols. In addition, in automatic speech recognition the speech sequence is longer than the corresponding text sequence. Therefore, the input layer of the encoder uses an embedding layer g(·) for the text input, mapping between the discrete id of a character y and its continuous vector representation g(y). Further, the present application feeds the voice data set into a pyramidal bidirectional long short-term memory network f(·) to shorten the length of the voice data.
  • in the encoder-decoder network of the ASR converter, the auto-encoding of text data not only enhances the intermediate representation of the text data but, when this application normalizes these representation forms, also enhances the intermediate representation of the voice data, so that the intermediate-domain representations of voice and text data become more similar to each other during training.
  • converting the voice data set into text data in this application includes: performing pre-emphasis and windowed framing on the voice data set to obtain a standard voice data set; calculating the inter-domain loss of the standard voice data set with a pre-built loss function; computing the optimal parameters of the inter-domain loss with a stochastic gradient algorithm; updating the standard voice data set according to the optimal parameters to obtain an optimal voice data set; and outputting the text data set corresponding to the optimal voice data set through a regression algorithm.
  • the pre-emphasis described in this application boosts the high-frequency part of the signal so that the spectrum becomes flat across the whole band from low to high frequencies and the signal-to-noise ratio stays roughly uniform; this eases subsequent processing without losing the audio signal and, by raising the energy of the speech portion, also suppresses the influence of random noise and DC drift. In an embodiment of this application, a digital filter is selected to pre-emphasize the sound frequency, with the formula H(z) = 1 − μz⁻¹, where z is the sound frequency and μ is close to 1.
  • windowed framing splits the voice data into frames, relying on the fact that within a very small time range the characteristics of voice data are basically unchanged, i.e. relatively stable. In an embodiment of this application, the voice data set is divided into frames of 64 ms, corresponding to a frame length of 512 samples. Further, this application marks part of the frame data as silent or not (0 for not silent, 1 for silent), and adjacent frames overlap by about 0 to 0.5 frame lengths, which prevents signal loss.
  • like framing, windowing divides a segment of audio signal into several short-time segments, i.e. it makes the voice data "short-time". The effect of windowing is not limited to this: after framing, the signal is cut off abruptly where it was continuous (at the end of each frame), which is called the truncation effect of the audio frame. The audio signal therefore has to be windowed so that the signal of each frame decays smoothly to zero, which is equivalent to adding a gentle, non-abrupt slope at both ends of the frame. In short, windowing means multiplying the audio signal by a window function.
  • in an embodiment of this application, the selected window function is that of the Hamming window, w(n) = 0.54 − 0.46·cos(2πn/(N − 1)) for 0 ≤ n ≤ N − 1, where N is the window length and n indexes the audio signal.
  • the regression algorithm described in this application can be written as softmax(x_j) = e^{x_j} / Σ_k e^{x_k}, where x_j denotes the output text, x_k denotes the text data output mode, k denotes the total amount of text data, and e denotes the base of the natural logarithm (an infinite non-repeating decimal).
  • Step 2: delete, replace and add characters in the text data set through the preset cleaning rules to obtain a standard text data set.
  • because repeated characters or words appear in the text data set, this application deletes them with a data-deletion rule, i.e. consecutively repeated characters or words are removed. To avoid wrong deletions, this application collects a dictionary of commonly used reduplicated characters and words to ensure the correctness of the deletion operation.
  • further, analysis of real intelligent-customer-service data shows that most of the digits and letters occurring in a phone call have nothing to do with the expressed emotion, so a replacement operation is needed: a placeholder is used to indicate that a span is a string of digits or letters. This application uses a dictionary of digits and letters that do express emotion to ensure the correctness of the replacement.
  • after the deletion and replacement operations, the text data set may still contain typos and missing characters, and the samples may be unbalanced; a data augmentation method is adopted to address these problems, as follows:
  • a. for typos and missing characters: this application randomly deletes a certain proportion of the characters in a text string according to a preset strategy and randomly replaces a certain proportion of characters with homophones, so that the subsequent emotion recognition model tolerates typos and missing characters better and infers the emotion of the text by learning its context;
  • b. for unbalanced samples: this application balances the samples by controlling the augmentation parameters, i.e. the class with fewer samples is augmented with a larger proportion so that the training samples become balanced; this application also performs data augmentation with an open translation system, e.g. translating Chinese into English and then back into Chinese to obtain different expressions, thereby expanding the training corpus and balancing the samples.
  • Step 3 Perform text information feature extraction on the standard text data set to obtain a text sequence vector set.
  • because, when text is used to analyze a user's emotion, a contextual representation of the text characterizes the current user's emotion better, the embodiment of the present application preferentially uses a pre-built pre-trained language model (Bidirectional Encoder Representations from Transformers, BERT) to extract text information features from the standard text data set and obtain a text sequence vector set, thereby representing the contextual information of the text.
  • the BERT model described in this application includes a bidirectional Transformer encoder, a "masked language model" and "next sentence prediction". The attention mechanism in the bidirectional Transformer encoder is used to model the standard text data set, and the word-level and sentence-level sequence vector representations in the standard text data set are captured through the "masked language model" and "next sentence prediction" to obtain the text sequence vector set.
  • the attention mechanism can be written as Attention(Q, K, V) = softmax(QKᵀ / √d)·V, where Q, K and V denote the word vector matrices and d denotes the dimension of the input vectors. Its core idea is to compute, for each word in the text, its relation to all the words in the sentence, so that these pairwise relations expose the relevance and importance of the different words in the text. This application then uses these relations to adjust the importance (weight) of each word and obtain a new representation of each word. The new representation captures not only the word itself but also its relations to the other words, so it is a more global expression than a plain word vector.
  • the "Masked language model” is used to train deep two-way language representation vectors. This method adopts a very straightforward way, that is, by covering some text in the text, the encoder can predict the text. This application Randomly cover 15% of the text as a training sample.
  • the "next sentence prediction” refers to pre-training a two-class model for learning the relationship between sentences in the text.
  • a character sequence containing n characters Char (char 1 ,char 2 ...,char n ), where char n is a word vector with a dimension of d, which is input into the pre-built BERT model
  • char n is a word vector with a dimension of d
  • CharF i is a vector containing the word sequence and the above information of the word sequence
  • the word sequence is read in the reverse direction using the BERT model, and the word sequence and the following information of the word sequence are represented as CharB i
  • CharF i and CharB i is connected to form a word representation Wd containing word sequence and context information, and the text sequence vector is extracted in the same way as:
  • Step 4: input the text sequence vector set into the pre-built emotion recognition model, output the probability distribution set of the emotional states corresponding to the text sequence vector set, calculate the maximized emotional state in that probability distribution set with the maximum score algorithm, and identify the user's emotion according to the maximized emotional state.
  • the pre-built emotion recognition model in the preferred embodiment of the present application includes a Conditional Random Field (CRF) model and a Long Short-Term Memory network (LSTM).
  • the present application computes the score matrix of the input text sequence vector set with the LSTM and, from the score matrix, obtains the distribution of the emotional states corresponding to the text sequence vector set; based on this distribution, the CRF computes the probability distribution set of the emotional states, the maximum score algorithm computes the maximized emotional state in that probability distribution set, and the user's emotion is identified according to the maximized emotional state.
  • the score of the text sequence vector set is calculated as S(Wd, y) = Σ_{j=0..n} A_{y_j, y_{j+1}} + Σ_{j=1..n} p_{j, y_j}, where S(Wd, y) denotes the output score of an emotional-state sequence, y denotes the emotional-state text sequence, n denotes the length of the text sequence, A denotes the transition score matrix, and p denotes the probability (emission) value. Here y_0 marks the start of the sequence and y_{n+1} marks its end, so the transition score matrix A has size k + 2.
  • the probability distribution set of the emotional states is calculated as p(y | Wd) = e^{S(Wd, y)} / Σ_{ỹ ∈ Y_Wd} e^{S(Wd, ỹ)}, where p(y | Wd) denotes the probability of an emotional state, Y_Wd denotes all possible emotional categories corresponding to the text sequence y, and e denotes the base of the natural logarithm (an infinite non-repeating decimal).
  • the maximum score algorithm is y* = argmax_{ỹ ∈ Y_Wd} S(Wd, ỹ), where y* denotes the maximized emotional state in the probability distribution set of the target text sequence set.
  • referring to Figure 3, a schematic diagram of the modules of an embodiment of the emotional intelligence recognition device 100 of this application: in this embodiment, the emotional intelligence recognition device 100 includes a voice data conversion module 10, a text data cleaning module 20, a feature extraction module 30 and an emotion recognition module 40. Exemplarily:
  • the voice data conversion module 10 is used to obtain a user's voice data set, and convert the voice data set into a text data set.
  • the text data cleaning module 20 is used to obtain a standard text data set after deleting, replacing, and enhancing the characters in the text data set through preset cleaning rules.
  • the feature extraction module 30 is configured to: perform text information feature extraction on the standard text data set to obtain a text sequence vector set.
  • the emotion recognition module 40 is configured to: input the text sequence vector set into a pre-built emotion recognition model to calculate the probability distribution set of the emotional state corresponding to the text sequence vector set, and use the maximum score algorithm to calculate the The probability distribution of the emotional state concentrates on the maximized emotional state, and the user's emotion is identified according to the maximized emotional state.
  • the functions or operation steps implemented by the voice data conversion module 10, the text data cleaning module 20, the feature extraction module 30 and the emotion recognition module 40 when executed are substantially the same as those in the foregoing embodiment and will not be repeated here.
  • the embodiment of the present application also proposes a computer-readable storage medium.
  • the computer-readable storage medium may be non-volatile or volatile.
  • the computer-readable storage medium stores an emotional intelligence recognition program.
  • the emotional intelligence recognition program can be executed by one or more processors to achieve the following operations: acquiring the user's voice data set and converting the voice data set into a text data set; deleting, replacing and enhancing characters in the text data set through preset cleaning rules to obtain a standard text data set; performing text information feature extraction on the standard text data set to obtain a text sequence vector set; and inputting the text sequence vector set into a pre-built emotion recognition model to calculate the probability distribution set of the emotional states corresponding to the text sequence vector set, calculating the maximized emotional state in that probability distribution set with a maximum score algorithm, and recognizing the user's emotion according to the maximized emotional state.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Child & Adolescent Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Machine Translation (AREA)

Abstract

An emotional intelligence recognition method, comprising: acquiring a user's voice data set and converting the voice data set into a text data set (S1); deleting, replacing and enhancing characters in the text data set through preset cleaning rules to obtain a standard text data set (S2); performing text information feature extraction on the standard text data set to obtain a text sequence vector set (S3); and inputting the text sequence vector set into a pre-built emotion recognition model to calculate a probability distribution set of emotional states corresponding to the text sequence vector set, calculating the maximized emotional state in the probability distribution set with a maximum score algorithm, and recognizing the user's emotion according to the maximized emotional state (S4). An emotional intelligence recognition device (100), an electronic device (1) and a computer-readable storage medium are also provided, whereby recognition of the user's emotion is achieved.

Description

情绪智能识别方法、装置、电子设备及存储介质
本申请要求于2020年1月10日提交中国专利局、申请号为CN 202010034197.6,发明名称为“情绪智能识别方法、装置及计算机可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及人工智能技术领域,尤其涉及一种情绪智能识别方法、装置、电子设备及计算机可读存储介质。
背景技术
基于文本的自然语言处理(Natural Language Processing,简称NLP)的相关技术有了飞速发展,尤其是基于深度学习的端到端的模型在某些领域均已超过了人类的水平。为了能够充分利用自然语言处理的相关技术,如句法分析、语义分析、篇章分析、文本分类等,智能客服等方面需要将电话语音经过ASR技术转换成相应的文本数据。但是发明人意识到在ASR进行语音信息转换成文本的过程中可能会受到语音质量的影响,因为存在噪音、语速过快、方言等导致转换出来的文本信息会包含错字、少字、字或词重复的问题。此外,由于用户在与客服沟通的过程中表现的各类情绪,如正面情绪、负面情绪、中性情绪经常是不平衡,进一步加剧了深度学习模型的识别难度。
发明内容
本申请提供一种情绪智能识别方法、装置、电子设备及计算机可读存储介质,其主要目的在于提供一种根据用户的语音数据识别用户情绪的方案。
为实现上述目的,本申请提供的一种情绪智能识别方法,包括:
获取用户的语音数据集,将所述语音数据集转换为文本数据集;
通过预设的清洗规则对所述文本数据集中的字符进行删除、替换以及增强操作后得到标准文本数据集;
对所述标准文本数据集进行文本信息特征提取,得到文本序列向量集;
将所述文本序列向量集输入至预先构建的情绪识别模型中计算所述文本序列向量集对应的情绪状态的概率分布集,并利用最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
此外,为实现上述目的,本申请还提供一种电子设备,该设备包括存储器和处理器,所述存储器中存储有可在所述处理器上运行的情绪智能识别程序,所述情绪智能识别程序被所述处理器执行时实现如下步骤:
获取用户的语音数据集,将所述语音数据集转换为文本数据集;
通过预设的清洗规则对所述文本数据集中的字符进行删除、替换以及增强操作后得到标准文本数据集;
对所述标准文本数据集进行文本信息特征提取,得到文本序列向量集;
将所述文本序列向量集输入至预先构建的情绪识别模型中计算所述文本序列向量集对应的情绪状态的概率分布集,并利用最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
此外,为实现上述目的,本申请还提供一种计算机可读存储介质,所述计算机可读存储介质上存储有情绪智能识别程序,所述情绪智能识别程序可被一个或者多个处理器执行, 以实现如下步骤:
获取用户的语音数据集,将所述语音数据集转换为文本数据集;
通过预设的清洗规则对所述文本数据集中的字符进行删除、替换以及增强操作后得到标准文本数据集;
对所述标准文本数据集进行文本信息特征提取,得到文本序列向量集;
将所述文本序列向量集输入至预先构建的情绪识别模型中计算所述文本序列向量集对应的情绪状态的概率分布集,并利用最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
此外,为实现上述目的,本申请还提供一种情绪智能识别装置,包括:
语音数据转换模块,用于获取用户的语音数据集,将所述语音数据集转换为文本数据集;
文本数据清洗模块,用于通过预设的清洗规则对所述文本数据集中的字符进行删除、替换以及增强操作后得到标准文本数据集;
特征提取模块,用于对所述标准文本数据集进行文本信息特征提取,得到文本序列向量集;
情绪识别模块,用于将所述文本序列向量集输入至预先构建的情绪识别模型中计算所述文本序列向量集对应的情绪状态的概率分布集,并利用最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
本申请提出的情绪智能识别方法、装置、电子设备及计算机可读存储介质,获取用户的语音数据集,对语音数据集进行清洗处理从而可以消除语音数据中由于存在噪音、语速过快、方言等导致转换出来的文本信息包含错字、少字、字或词重复的问题;进一步地,本申请利用预先构建的情绪识别模型识别用户当时的情绪,从而为进一步减轻语音识别的难度。
附图说明
图1为本申请一实施例提供的情绪智能识别方法的流程示意图;
图2为本申请一实施例提供的电子设备的内部结构示意图;
图3为本申请一实施例提供的情绪智能识别装置的模块示意图。
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
具体实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请提供一种情绪智能识别方法。参照图1所示,为本申请一实施例提供的情绪智能识别方法的流程示意图。该方法可以由一个装置执行,该装置可以由软件和/或硬件实现。
在本实施例中,情绪智能识别方法包括:
S1、获取用户的语音数据集,将所述语音数据集转换成文本数据。
本申请较佳实施例中,所述用户的语音数据集通过从企业的人工客服与用户进行通话时进行获取得到。
进一步地,本申请利用自动语音识别技术(Automatic Speech Recognition,ASR)将所述语音数据集转换成文本数据。所述ASR由一个编码器和两个解码器组成,用于提取语音数据和文本数据之间的共同域间特征,同时学习不成对的语音数据和文本数据。
由于语音和文本是不同的数据类型,语音是连续向量的序列,而文本是离散符号的序列,此外,语音的长度比自动语音识别中的文本长度长,因此,本申请在所述编码器的输入层使用嵌入层g(·)进行文本输入,将连续向量表示g(y)转换为字符y的离散id。进一步地,本申请将所述语音数据集输入金字塔双向长短期记忆网络f(·)中,以缩短语音数据长度。在所述ASR语音转换器的编码-解码器网络中,文本数据的自动编码不仅可以增强文 本数据的中间表示,还可以在本申请规范这些表示形式时增强语音数据的中间表示,使语音和文本数据的中间域表示在训练期间彼此更加相似。
较佳地,本申请中所述将所述语音数据集转换成文本数据包括:对所述语音数据集进行预加重和加窗分帧处理,得到标准语音数据集,通过预先构建的损失函数计算出所述标准语音数据集的域间损失,利用随机梯度算法计算所述域间损失的最优参数,根据所述最优参数对所述标准语音数据集进行更新操作后得到最优语音数据集,通过回归算法输出所述最优语音数据集对应的文本数据集。
其中,本申请中所述预加重即提高高频部分,使在低频到高频的整个频带中,信号频谱变得平坦,使其信噪比基本一致,以便于后续的一些处理,使音频信号不丢失,同时,还能通过增加语音部分能量,抑制随机噪声和直流漂移的影响。本申请实施例选择数字滤波器对声音频率进行预加重,其公式为:H(z)=1-μz -1,式中,z为声音频率,μ接近于1。
所述加窗分帧即根据语音数据在一个非常小的时间范围内,其特性基本保持不变即相对稳定的特点,对语音数据进行分帧处理。本申请实施例将所述语音数据集进行分帧。较佳地,分帧操作之后得到的每帧数据时长64ms,对应每帧数据的长度为512。进一步地,本申请对部分的帧数据标记是否为静音,其中0为否,1为是,同时使相邻帧数据之间重叠约0-0.5倍帧长,防止了信号丢失。所述加窗与分帧一样,都起到把一段音频信号分割成若干个短时音频段的作用,即使语音数据实现“短时”。此外,加窗的作用不仅限于此,因为对语音数据分帧后,会在信号连续处突然截止(帧结束),称之为音频帧的截断效应。所以就要对音频信号进行加窗处理,使帧数据的信号平滑降低到零,相当于在帧数据两端增加了坡度,平缓而不突兀。总的来说,加窗就是给音频信号乘以一个窗函数。在本申请实施例中,所选窗函数为汉明窗的窗函数:
w(n) = 0.54 − 0.46·cos(2πn/(N−1)),0 ≤ n ≤ N−1
其中,N为窗长,n表示音频信号。
较佳地,本申请中所述回归算法包括:
softmax(x_j) = e^{x_j} / ∑_k e^{x_k}
其中,x_j表示输出文本,x_k表示文本数据输出方式,k表示文本数据的总量,e表示无限不循环小数。
S2、通过预设的清洗规则,对所述文本数据集中的字符进行删除、替换以及增加操作后得到标准文本数据集。
本申请较佳实施例中,由于所述文本数据集会出现字或词重复的问题,因此本申请采用数据删除方式对所述所述文本数据集进行删除处理,即对连续重复的字或词进行删除操作。其中,为了避免删除错误,本申请收集了一个常用的叠音字和叠音词的词典,以保证删除操作的正确性。
进一步地,本申请通过对智能客服真实数据分析发现,电话中产生的数字、字母绝大部分都与表达的情绪无关,因此需要替换操作,即用一个占位符表示这是一个数字或者字母串。本申请通过一个利用数字、字母表达情感的词典,确保对于替换的正确性。
在对文本数据进行数据删除和替换操作后,文本数据集中还可能出现错字、少字以及样本不均衡的问题,本申请中采用一种数据增强的方式对这些进行解决,其具体解决方式如下所示:
a、对于错字、少字的问题:本申请通过对文本的字符串进行按照一定的策略进行随机删除一定比例的字符,并利用同音字进行随机替换一定比例的字,通过这种方式增强后续情绪识别模型能够更好的兼容这种存在错字、少字的问题,并通过学习文本的上下文语境 进行推断出文本情绪
b、对于样本不均衡的问题:本申请通过控制增强参数进行均衡样本,即对于样本少的一类通过更大比例的增强参数,实现训练样本的均衡,同时本申请也对开放的翻译系统实现数据增强,如将中文翻译成英文,再讲英文翻译成中文,得到不同的表达方式,进而实现训练语料的扩充以及样本均衡。
S3、对所述标准文本数据集进行文本信息特征提取,得到文本序列向量集。
由于在利用文本进行用户的情绪识别分析中,文本的上下文表示能够更好的表征当前用户的情绪,因此本申请实施例优先通过预先构建的预训练语言(Bidirectional Encoder Representationsfrom Transformers,BERT)模型对所述标准文本数据集进行文本信息特征提取,得到文本序列向量集,从而将文本上下文信息进行表示。
较佳地,本申请中所述BERT模型包括双向Transformer编码器、“Masked语言模型”以及“下一个句子预测”,利用所述双向Transformer编码器中的注意力机制来对所述标准文本数据集进行建模,通过所述“Masked语言模型”以及“下一个句子预测”捕捉所述标准文本数据集中词级别和句子级别的序列向量表示,从而得到所述文本序列向量集。
其中,所述注意力机制包括:
Attention(Q, K, V) = softmax(QK^T/√d_k)·V
其中Q、K、V均表示字向量矩阵,d k表示输入向量维度。其核心思想是计算文本中的每个词对于这句话中所有词的相互关系,通过词与词之间的相互关系展现出在所述文本中不同词之间的关联性以及重要程度。本申请再利用所述相互关系来调整每个词的重要性(权重),以获得每个词新的表征。其中,所述新的表征不但蕴含了该词本身,还蕴含了与其他词的关系,因此和单纯的词向量相比是一个更加全局的表达。
进一步地,所述“Masked语言模型”用于训练深度双向语言表示向量,该方法采用了一个非常直接的方式,即通过遮住文本里某些文字,让所述编码器预测这个文字,本申请随机遮住15%的文本作为训练样本。所述“下一个句子预测”是指预训练一个二分类的模型,用于学习文本中句子之间的关系。
较佳地,本申请将一个包含n个字的文字序列Char=(char 1,char 2…,char n),其中char n是一个维度为d维的字向量,输入所述预先构建的BERT模型中,从而生成一个包含字序列以及字序列上文信息的向量表示CharF i,同理使用BERT模型反向读取字序列,将字序列以及字序列的下文信息表示为CharB i,将CharF i和CharB i连接形成一个包含字序列以及上下文信息的词表示Wd,并以同样的方式抽取得到所述文本序列向量为:
Wd=[CharF i:CharB i]。
S4、将所述文本序列向量集输入至预先构建的情绪识别模型中,输出所述文本序列向量集对应的情绪状态的概率分布集,并利用最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
本申请较佳实施例中所述预先构建的情绪识别模型包括:条件随机场(ConditionalRandom Field,CRF)模型以及长短期记忆网络(Long Short-Term Memory,LSTM)。
较佳地,本申请通过所述LSTM计算所述输入的文本序列向量集的分值矩阵,根据所述分值矩阵,得到所述文本序列向量集对应的情绪状态的分布状态,并基于所述分布状态利用所述CRF计算所述情绪状态的概率分布集,根据所述最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
较佳地,所述计算所述文本序列向量集的分值矩阵包括:
S(Wd, y) = ∑_{j=0}^{n} A_{y_j, y_{j+1}} + ∑_{j=1}^{n} p_{j, y_j}
其中,S(Wd,y)表示情绪状态的输出分值矩阵,y表示情绪状态的文本序列,n表示文本序列的长度,A表示的是转移分值矩阵,p表示概率值。其中,当j=0时,即y 0表示的是一个序列开始的标志,当j=n时,即y n+1表示一个序列结束的标志,A转移分值矩阵的大小为k+2。
较佳地,所述计算所述情绪状态的概率分布集的计算方法包括:
p(y|Wd) = e^{S(Wd, y)} / ∑_{ỹ∈Y_Wd} e^{S(Wd, ỹ)}
其中,p(y|Wd)表示情绪状态概率,Y Wd代表文本序列y对应的所有可能情绪类别,e表示无限不循环小数。
较佳地,所述最大分值算法包括:
y* = argmax_{ỹ∈Y_Wd} S(Wd, ỹ)
其中,y *表示目标文本序列集的概率分布集中的最大化情绪状态。
本申请还提供一种电子设备。参照图2所示,为本申请一实施例提供的电子设备的内部结构示意图。
在本实施例中,所述电子设备1可以是PC(Personal Computer,个人电脑),或者是智能手机、平板电脑、便携计算机等终端设备,也可以是一种服务器等。该电子设备1至少包括存储器11、处理器12,通信总线13,以及网络接口14。
其中,存储器11至少包括一种类型的可读存储介质,所述可读存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等)、磁性存储器、磁盘、光盘等。存储器11在一些实施例中可以是电子设备1的内部存储单元,例如该电子设备1的硬盘。存储器11在另一些实施例中也可以是电子设备1的外部存储设备,例如电子设备1上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,存储器11还可以既包括电子设备1的内部存储单元也包括外部存储设备。存储器11不仅可以用于存储安装于电子设备1的应用软件及各类数据,例如情绪智能识别程序01的代码等,还可以用于暂时地存储已经输出或者将要输出的数据。
处理器12在一些实施例中可以是一中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器或其他数据处理芯片,用于运行存储器11中存储的程序代码或处理数据,例如执行情绪智能识别程序01等。
通信总线13用于实现这些组件之间的连接通信。
网络接口14可选的可以包括标准的有线接口、无线接口(如WI-FI接口),通常用于在该电子设备1与其他电子设备之间建立通信连接。
可选地,该电子设备1还可以包括用户接口,用户接口可以包括显示器(Display)、输入单元比如键盘(Keyboard),可选的用户接口还可以包括标准的有线接口、无线接口。可选地,在一些实施例中,显示器可以是LED显示器、液晶显示器、触控式液晶显示器以及OLED(Organic Light-Emitting Diode,有机发光二极管)触摸器等。其中,显示器也可以适当的称为显示屏或显示单元,用于显示在电子设备1中处理的信息以及用于显示可视化的用户界面。
图2仅示出了具有组件11-14以及情绪智能识别程序01的电子设备1,本领域技术人员可以理解的是,图1示出的结构并不构成对电子设备1的限定,可以包括比图示更少或者更多的部件,或者组合某些部件,或者不同的部件布置。
在图2所示的电子设备1实施例中,存储器11中存储有情绪智能识别程序01;处理器12执行存储器11中存储的情绪智能识别程序01时实现如下步骤:
步骤一、获取用户的语音数据集,将所述语音数据集转换成文本数据。
本申请较佳实施例中,所述用户的语音数据集通过从企业的人工客服与用户进行通话时进行获取得到。
进一步地,本申请利用自动语音识别技术(Automatic Speech Recognition,ASR)将所述语音数据集转换成文本数据。所述ASR由一个编码器和两个解码器组成,用于提取语音数据和文本数据之间的共同域间特征,同时学习不成对的语音数据和文本数据。
由于语音和文本是不同的数据类型,语音是连续向量的序列,而文本是离散符号的序列,此外,语音的长度比自动语音识别中的文本长度长,因此,本申请在所述编码器的输入层使用嵌入层g(·)进行文本输入,将连续向量表示g(y)转换为字符y的离散id。进一步地,本申请将所述语音数据集输入金字塔双向长短期记忆网络f(·)中,以缩短语音数据长度。在所述ASR语音转换器的编码-解码器网络中,文本数据的自动编码不仅可以增强文本数据的中间表示,还可以在本申请规范这些表示形式时增强语音数据的中间表示,使语音和文本数据的中间域表示在训练期间彼此更加相似。
较佳地,本申请中所述将所述语音数据集转换成文本数据包括:对所述语音数据集进行预加重和加窗分帧处理,得到标准语音数据集,通过预先构建的损失函数计算出所述标准语音数据集的域间损失,利用随机梯度算法计算所述域间损失的最优参数,根据所述最优参数对所述标准语音数据集进行更新操作后得到最优语音数据集,通过回归算法输出所述最优语音数据集对应的文本数据集。
其中,本申请中所述预加重即提高高频部分,使在低频到高频的整个频带中,信号频谱变得平坦,使其信噪比基本一致,以便于后续的一些处理,使音频信号不丢失,同时,还能通过增加语音部分能量,抑制随机噪声和直流漂移的影响。本申请实施例选择数字滤波器对声音频率进行预加重,其公式为:H(z)=1-μz -1,式中,z为声音频率,μ接近于1。
所述加窗分帧即根据语音数据在一个非常小的时间范围内,其特性基本保持不变即相对稳定的特点,对语音数据进行分帧处理。本申请实施例将所述语音数据集进行分帧。较佳地,分帧操作之后得到的每帧数据时长64ms,对应每帧数据的长度为512。进一步地,本申请对部分的帧数据标记是否为静音,其中0为否,1为是,同时使相邻帧数据之间重叠约0-0.5倍帧长,防止了信号丢失。所述加窗与分帧一样,都起到把一段音频信号分割成若干个短时音频段的作用,即使语音数据实现“短时”。此外,加窗的作用不仅限于此,因为对语音数据分帧后,会在信号连续处突然截止(帧结束),称之为音频帧的截断效应。所以就要对音频信号进行加窗处理,使帧数据的信号平滑降低到零,相当于在帧数据两端增加了坡度,平缓而不突兀。总的来说,加窗就是给音频信号乘以一个窗函数。在本申请实施例中,所选窗函数为汉明窗的窗函数:
w(n) = 0.54 − 0.46·cos(2πn/(N−1)),0 ≤ n ≤ N−1
其中,N为窗长,n表示音频信号。
较佳地,本申请中所述回归算法包括:
softmax(x_j) = e^{x_j} / ∑_k e^{x_k}
其中,x_j表示输出文本,x_k表示文本数据输出方式,k表示文本数据的总量,e表示无限不循环小数。
步骤二、通过预设的清洗规则,对所述文本数据集中的字符进行删除、替换以及增加操作后得到标准文本数据集。
本申请较佳实施例中,由于所述文本数据集会出现字或词重复的问题,因此本申请采用数据删除方式对所述所述文本数据集进行删除处理,即对连续重复的字或词进行删除操作。其中,为了避免删除错误,本申请收集了一个常用的叠音字和叠音词的词典,以保证删除操作的正确性。
进一步地,本申请通过对智能客服真实数据分析发现,电话中产生的数字、字母绝大部分都与表达的情绪无关,因此需要替换操作,即用一个占位符表示这是一个数字或者字母串。本申请通过一个利用数字、字母表达情感的词典,确保对于替换的正确性。
在对文本数据进行数据删除和替换操作后,文本数据集中还可能出现错字、少字以及样本不均衡的问题,本申请中采用一种数据增强的方式对这些进行解决,其具体解决方式如下所示:
a、对于错字、少字的问题:本申请通过对文本的字符串进行按照一定的策略进行随机删除一定比例的字符,并利用同音字进行随机替换一定比例的字,通过这种方式增强后续情绪识别模型能够更好的兼容这种存在错字、少字的问题,并通过学习文本的上下文语境进行推断出文本情绪
b、对于样本不均衡的问题:本申请通过控制增强参数进行均衡样本,即对于样本少的一类通过更大比例的增强参数,实现训练样本的均衡,同时本申请也对开放的翻译系统实现数据增强,如将中文翻译成英文,再讲英文翻译成中文,得到不同的表达方式,进而实现训练语料的扩充以及样本均衡。
步骤三、对所述标准文本数据集进行文本信息特征提取,得到文本序列向量集。
由于在利用文本进行用户的情绪识别分析中,文本的上下文表示能够更好的表征当前用户的情绪,因此本申请实施例优先通过预先构建的预训练语言(Bidirectional Encoder Representationsfrom Transformers,BERT)模型对所述标准文本数据集进行文本信息特征提取,得到文本序列向量集,从而将文本上下文信息进行表示。
较佳地,本申请中所述BERT模型包括双向Transformer编码器、“Masked语言模型”以及“下一个句子预测”,利用所述双向Transformer编码器中的注意力机制来对所述标准文本数据集进行建模,通过所述“Masked语言模型”以及“下一个句子预测”捕捉所述标准文本数据集中词级别和句子级别的序列向量表示,从而得到所述文本序列向量集。
其中,所述注意力机制包括:
Attention(Q, K, V) = softmax(QK^T/√d)·V
其中Q、K、V均表示字向量矩阵,d表示输入向量维度。其核心思想是计算文本中的每个词对于这句话中所有词的相互关系,通过词与词之间的相互关系展现出在所述文本中不同词之间的关联性以及重要程度。本申请再利用所述相互关系来调整每个词的重要性(权重),以获得每个词新的表征。其中,所述新的表征不但蕴含了该词本身,还蕴含了与其他词的关系,因此和单纯的词向量相比是一个更加全局的表达。
进一步地,所述“Masked语言模型”用于训练深度双向语言表示向量,该方法采用了一个非常直接的方式,即通过遮住文本里某些文字,让所述编码器预测这个文字,本申请随机遮住15%的文本作为训练样本。所述“下一个句子预测”是指预训练一个二分类的模型,用于学习文本中句子之间的关系。
较佳地,本申请将一个包含n个字的文字序列Char=(char 1,char 2…,char n),其中char n是一个维度为d维的字向量,输入所述预先构建的BERT模型中,从而生成一个包含字序列以及字序列上文信息的向量表示CharF i,同理使用BERT模型反向读取字序列,将字序列以及字序列的下文信息表示为CharB i,将CharF i和CharB i连接形成一个包含字序列以及上下文信息的词表示Wd,并以同样的方式抽取得到所述文本序列向量为:
Wd=[CharF i:CharB i]。
步骤四、将所述文本序列向量集输入至预先构建的情绪识别模型中,输出所述文本序列向量集对应的情绪状态的概率分布集,并利用最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
本申请较佳实施例中所述预先构建的情绪识别模型包括:条件随机场(ConditionalRandom Field,CRF)模型以及长短期记忆网络(Long Short-Term Memory,LSTM)。
较佳地,本申请通过所述LSTM计算所述输入的文本序列向量集的分值矩阵,根据所述分值矩阵,得到所述文本序列向量集对应的情绪状态的分布状态,并基于所述分布状态利用所述CRF计算所述情绪状态的概率分布集,根据所述最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
较佳地,所述计算所述文本序列向量集的分值矩阵包括:
S(Wd, y) = ∑_{j=0}^{n} A_{y_j, y_{j+1}} + ∑_{j=1}^{n} p_{j, y_j}
其中,S(Wd,y)表示情绪状态的输出分值矩阵,y表示情绪状态的文本序列,n表示文本序列的长度,A表示的是转移分值矩阵,p表示概率值。其中,当j=0时,即y 0表示的是一个序列开始的标志,当j=n时,即y n+1表示一个序列结束的标志,A转移分值矩阵的大小为k+2。
较佳地,所述计算所述情绪状态的概率分布集的计算方法包括:
p(y|Wd) = e^{S(Wd, y)} / ∑_{ỹ∈Y_Wd} e^{S(Wd, ỹ)}
其中,p(y|Wd)表示情绪状态概率,Y Wd代表文本序列y对应的所有可能情绪类别,e表示无限不循环小数。
较佳地,所述最大分值算法包括:
y* = argmax_{ỹ∈Y_Wd} S(Wd, ỹ)
其中,y *表示目标文本序列集的概率分布集中的最大化情绪状态。
参照图3所示,为本申请情绪智能识别装置100一实施例的模块示意图,该实施例中,所述情绪智能识别装置100包括语音数据转换模块10、文本数据清洗模块20、特征提取模块30以及情绪识别模块40示例性地:
所述语音数据转换模块10用于:获取用户的语音数据集,将所述语音数据集转换为文本数据集。
所述文本数据清洗模块20用于:通过预设的清洗规则对所述文本数据集中的字符进行删除、替换以及增强操作后得到标准文本数据集。
所述特征提取模块30用于:对所述标准文本数据集进行文本信息特征提取,得到文本序列向量集。
所述情绪识别模块40用于:将所述文本序列向量集输入至预先构建的情绪识别模型中计算所述文本序列向量集对应的情绪状态的概率分布集,并利用最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
上述语音数据转换模块10、文本数据清洗模块20、特征提取模块30以及情绪识别模块40等模块被执行时所实现的功能或操作步骤与上述实施例大体相同,在此不再赘述。
此外,本申请实施例还提出一种计算机可读存储介质,所述计算机可读存储介质可以 是非易失性,也可以是易失性,所述计算机可读存储介质上存储有情绪智能识别程序,所述情绪智能识别程序可被一个或多个处理器执行,以实现如下操作:
获取用户的语音数据集,将所述语音数据集转换为文本数据集;
通过预设的清洗规则对所述文本数据集中的字符进行删除、替换以及增强操作后得到标准文本数据集;
对所述标准文本数据集进行文本信息特征提取,得到文本序列向量集;
将所述文本序列向量集输入至预先构建的情绪识别模型中计算所述文本序列向量集对应的情绪状态的概率分布集,并利用最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
本申请计算机可读存储介质具体实施方式与上述电子设备和方法各实施例基本相同,在此不作累述。
需要说明的是,上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。并且本文中的术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、装置、物品或者方法不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、装置、物品或者方法所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、装置、物品或者方法中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在如上所述的一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (20)

  1. 一种情绪智能识别方法,其中,所述方法包括:
    获取用户的语音数据集,将所述语音数据集转换为文本数据集;
    通过预设的清洗规则对所述文本数据集中的字符进行删除、替换以及增强操作后得到标准文本数据集;
    对所述标准文本数据集进行文本信息特征提取,得到文本序列向量集;
    将所述文本序列向量集输入至预先构建的情绪识别模型中计算所述文本序列向量集对应的情绪状态的概率分布集,并利用最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
  2. 如权利要求1所述的情绪智能识别方法,其中,所述将所述语音数据集转换为文本数据集包括:
    对所述语音数据集进行预加重和加窗分帧处理,得到标准语音数据集;
    通过预先构建的损失函数计算出所述标准语音数据集的域间损失,利用随机梯度算法计算所述域间损失的最优参数,根据所述最优参数对所述标准语音数据集进行更新操作后得到最优语音数据集;
    通过回归算法将所述最优语音数据集转换为对应的文本数据集。
  3. 如权利要求2所述的情绪智能识别方法,其中,所述回归算法包括:
    softmax(x_j) = e^{x_j} / ∑_k e^{x_k}
    其中,x_j表示输出文本数据,x_k表示文本数据输出方式,k表示文本数据的总量,e表示无限不循环小数。
  4. 如权利要求1至3中任意一项所述的情绪智能识别方法,其中,所述预先构建的情绪识别模型包括:条件随机场模型以及长短期记忆网络。
  5. 如权利要求4所述的情绪智能识别方法,其中,所述将所述文本序列向量集输入至预先构建的情绪识别模型中计算所述文本序列向量集对应的情绪状态的概率分布集,并利用最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪,包括:
    通过所述长短期记忆网络计算所述输入的文本序列向量集的分值矩阵,根据所述分值矩阵,得到所述文本序列向量集对应的情绪状态的分布状态,并基于所述分布状态利用所述条件随机场模型计算所述情绪状态的概率分布集,根据所述最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
  6. 如权利要求1至3中任意一项所述的情绪智能识别方法,其中,所述文本序列向量集对应的情绪状态的概率分布集的计算方法包括:
    p(y|Wd) = e^{S(Wd, y)} / ∑_{ỹ∈Y_Wd} e^{S(Wd, ỹ)}
    其中,p(y|Wd)表示情绪状态概率,S(Wd,y)表示情绪状态的输出分值矩阵,y表示情绪状态的文本序列,Wd表示包含字序列以及上下文信息的词表示,Y Wd代表文本序列y对应的所有可能情绪类别,e表示无限不循环小数。
  7. 如权利要求6所述的情绪智能识别方法,其中,所述最大分值算法包括:
    y* = argmax_{ỹ∈Y_Wd} S(Wd, ỹ)
    其中,y *表示目标文本序列集的概率分布集中的最大化情绪状态。
  8. 一种电子设备,其中,所述设备包括存储器和处理器,所述存储器上存储有可在所述处理器上运行的情绪智能识别程序,所述情绪智能识别程序被所述处理器执行时实现如下步骤:
    获取用户的语音数据集,将所述语音数据集转换为文本数据集;
    通过预设的清洗规则对所述文本数据集中的字符进行删除、替换以及增强操作后得到标准文本数据集;
    对所述标准文本数据集进行文本信息特征提取,得到文本序列向量集;
    将所述文本序列向量集输入至预先构建的情绪识别模型中计算所述文本序列向量集对应的情绪状态的概率分布集,并利用最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
  9. 如权利要求8所述的电子设备,其中,所述将所述语音数据集转换为文本数据集包括:
    对所述语音数据集进行预加重和加窗分帧处理,得到标准语音数据集;
    通过预先构建的损失函数计算出所述标准语音数据集的域间损失,利用随机梯度算法计算所述域间损失的最优参数,根据所述最优参数对所述标准语音数据集进行更新操作后得到最优语音数据集;
    通过回归算法将所述最优语音数据集转换为对应的文本数据集。
  10. 如权利要求9所述的电子设备,其中,所述回归算法包括:
    softmax(x_j) = e^{x_j} / ∑_k e^{x_k}
    其中,x_j表示输出文本数据,x_k表示文本数据输出方式,k表示文本数据的总量,e表示无限不循环小数。
  11. 如权利要求8至10中任意一项所述的电子设备,其中,所述预先构建的情绪识别模型包括:条件随机场模型以及长短期记忆网络。
  12. 如权利要求11所述的电子设备,其中,所述将所述文本序列向量集输入至预先构建的情绪识别模型中计算所述文本序列向量集对应的情绪状态的概率分布集,并利用最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪,包括:
    通过所述长短期记忆网络计算所述输入的文本序列向量集的分值矩阵,根据所述分值矩阵,得到所述文本序列向量集对应的情绪状态的分布状态,并基于所述分布状态利用所述条件随机场模型计算所述情绪状态的概率分布集,根据所述最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
  13. 如权利要求8至10中任意一项所述的电子设备,其中,所述计算所述文本序列向量集对应的情绪状态的概率分布集的计算方法包括:
    p(y|Wd) = e^{S(Wd, y)} / ∑_{ỹ∈Y_Wd} e^{S(Wd, ỹ)}
    其中,p(y|Wd)表示情绪状态概率,S(Wd,y)表示情绪状态的输出分值矩阵,y表示情绪状态的文本序列,Wd表示包含字序列以及上下文信息的词表示,Y Wd代表文本序列y对应的所有可能情绪类别,e表示无限不循环小数。
  14. 如权利要求13所述的电子设备,其中,所述最大分值算法包括:
    y* = argmax_{ỹ∈Y_Wd} S(Wd, ỹ)
    其中,y *表示目标文本序列集的概率分布集中的最大化情绪状态。
  15. 一种计算机可读存储介质,其中,所述计算机可读存储介质上存储有情绪智能识别程序,所述情绪智能识别程序可被一个或者多个处理器执行,以实现如下步骤:
    获取用户的语音数据集,将所述语音数据集转换为文本数据集;
    通过预设的清洗规则对所述文本数据集中的字符进行删除、替换以及增强操作后得到标准文本数据集;
    对所述标准文本数据集进行文本信息特征提取,得到文本序列向量集;
    将所述文本序列向量集输入至预先构建的情绪识别模型中计算所述文本序列向量集对应的情绪状态的概率分布集,并利用最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
  16. 如权利要求15所述的计算机可读存储介质,其中,所述将所述语音数据集转换为文本数据集包括:
    对所述语音数据集进行预加重和加窗分帧处理,得到标准语音数据集;
    通过预先构建的损失函数计算出所述标准语音数据集的域间损失,利用随机梯度算法计算所述域间损失的最优参数,根据所述最优参数对所述标准语音数据集进行更新操作后得到最优语音数据集;
    通过回归算法将所述最优语音数据集转换为对应的文本数据集。
  17. 如权利要求16所述的计算机可读存储介质,其中,所述回归算法包括:
    softmax(x_j) = e^{x_j} / ∑_k e^{x_k}
    其中,x_j表示输出文本数据,x_k表示文本数据输出方式,k表示文本数据的总量,e表示无限不循环小数。
  18. 如权利要求15至17中任意一项所述的计算机可读存储介质,其中,所述计算所述文本序列向量集对应的情绪状态的概率分布集的计算方法包括:
    p(y|Wd) = e^{S(Wd, y)} / ∑_{ỹ∈Y_Wd} e^{S(Wd, ỹ)}
    其中,p(y|Wd)表示情绪状态概率,S(Wd,y)表示情绪状态的输出分值矩阵,y表示情绪状态的文本序列,Wd表示包含字序列以及上下文信息的词表示,Y Wd代表文本序列y对应的所有可能情绪类别,e表示无限不循环小数。
  19. 如权利要求18所述的计算机可读存储介质,其中,所述最大分值算法包括:
    y* = argmax_{ỹ∈Y_Wd} S(Wd, ỹ)
    其中,y *表示目标文本序列集的概率分布集中的最大化情绪状态。
  20. 一种情绪智能识别装置,其中,包括:
    语音数据转换模块,用于获取用户的语音数据集,将所述语音数据集转换为文本数据集;
    文本数据清洗模块,用于通过预设的清洗规则对所述文本数据集中的字符进行删除、替换以及增强操作后得到标准文本数据集;
    特征提取模块,用于对所述标准文本数据集进行文本信息特征提取,得到文本序列向量集;
    情绪识别模块,用于将所述文本序列向量集输入至预先构建的情绪识别模型中计算所述文本序列向量集对应的情绪状态的概率分布集,并利用最大分值算法计算出所述情绪状态的概率分布集中的最大化情绪状态,根据所述最大化情绪状态识别所述用户的情绪。
PCT/CN2020/098963 2020-01-10 2020-06-29 情绪智能识别方法、装置、电子设备及存储介质 WO2021139108A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010034197.6A CN111223498A (zh) 2020-01-10 2020-01-10 情绪智能识别方法、装置及计算机可读存储介质
CN202010034197.6 2020-01-10

Publications (1)

Publication Number Publication Date
WO2021139108A1 true WO2021139108A1 (zh) 2021-07-15

Family

ID=70832303

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098963 WO2021139108A1 (zh) 2020-01-10 2020-06-29 情绪智能识别方法、装置、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN111223498A (zh)
WO (1) WO2021139108A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362858A (zh) * 2021-07-27 2021-09-07 中国平安人寿保险股份有限公司 语音情感分类方法、装置、设备及介质
CN116687410A (zh) * 2023-08-03 2023-09-05 中日友好医院(中日友好临床医学研究所) 一种慢性病患者的述情障碍评估方法和系统

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223498A (zh) * 2020-01-10 2020-06-02 平安科技(深圳)有限公司 情绪智能识别方法、装置及计算机可读存储介质
CN111798874A (zh) * 2020-06-24 2020-10-20 西北师范大学 一种语音情绪识别方法及系统
CN111862279A (zh) * 2020-07-23 2020-10-30 中国工商银行股份有限公司 交互处理方法和装置
CN112002329B (zh) * 2020-09-03 2024-04-02 深圳Tcl新技术有限公司 身心健康监测方法、设备及计算机可读存储介质
CN112183228B (zh) * 2020-09-09 2022-07-08 青岛联合创智科技有限公司 一种社区智慧养老服务系统及方法
CN112151014B (zh) * 2020-11-04 2023-07-21 平安科技(深圳)有限公司 语音识别结果的测评方法、装置、设备及存储介质
CN112700255A (zh) * 2020-12-28 2021-04-23 科讯嘉联信息技术有限公司 一种多模态监督服务系统及方法
CN113569584A (zh) * 2021-01-25 2021-10-29 腾讯科技(深圳)有限公司 文本翻译方法、装置、电子设备及计算机可读存储介质
CN113506586B (zh) * 2021-06-18 2023-06-20 杭州摸象大数据科技有限公司 用户情绪识别的方法和系统
CN113593521B (zh) * 2021-07-29 2022-09-20 北京三快在线科技有限公司 语音合成方法、装置、设备及可读存储介质
CN114548114B (zh) * 2022-02-23 2024-02-02 平安科技(深圳)有限公司 文本情绪识别方法、装置、设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108039181A (zh) * 2017-11-02 2018-05-15 北京捷通华声科技股份有限公司 一种声音信号的情感信息分析方法和装置
CN108305641A (zh) * 2017-06-30 2018-07-20 腾讯科技(深圳)有限公司 情感信息的确定方法和装置
CN109003624A (zh) * 2018-06-29 2018-12-14 北京百度网讯科技有限公司 情绪识别方法、装置、计算机设备及存储介质
US20190050875A1 (en) * 2017-06-22 2019-02-14 NewVoiceMedia Ltd. Customer interaction and experience system using emotional-semantic computing
JP6513869B1 (ja) * 2018-10-31 2019-05-15 株式会社eVOICE 対話要約生成装置、対話要約生成方法およびプログラム
CN110297907A (zh) * 2019-06-28 2019-10-01 谭浩 生成访谈报告的方法、计算机可读存储介质和终端设备
CN111223498A (zh) * 2020-01-10 2020-06-02 平安科技(深圳)有限公司 情绪智能识别方法、装置及计算机可读存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10037767B1 (en) * 2017-02-01 2018-07-31 Wipro Limited Integrated system and a method of identifying and learning emotions in conversation utterances
CN110364185B (zh) * 2019-07-05 2023-09-29 平安科技(深圳)有限公司 一种基于语音数据的情绪识别方法、终端设备及介质
CN110413785B (zh) * 2019-07-25 2021-10-19 淮阴工学院 一种基于bert和特征融合的文本自动分类方法
CN110516256A (zh) * 2019-08-30 2019-11-29 的卢技术有限公司 一种中文命名实体提取方法及其系统

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190050875A1 (en) * 2017-06-22 2019-02-14 NewVoiceMedia Ltd. Customer interaction and experience system using emotional-semantic computing
CN108305641A (zh) * 2017-06-30 2018-07-20 腾讯科技(深圳)有限公司 情感信息的确定方法和装置
CN108039181A (zh) * 2017-11-02 2018-05-15 北京捷通华声科技股份有限公司 一种声音信号的情感信息分析方法和装置
CN109003624A (zh) * 2018-06-29 2018-12-14 北京百度网讯科技有限公司 情绪识别方法、装置、计算机设备及存储介质
JP6513869B1 (ja) * 2018-10-31 2019-05-15 株式会社eVOICE 対話要約生成装置、対話要約生成方法およびプログラム
CN110297907A (zh) * 2019-06-28 2019-10-01 谭浩 生成访谈报告的方法、计算机可读存储介质和终端设备
CN111223498A (zh) * 2020-01-10 2020-06-02 平安科技(深圳)有限公司 情绪智能识别方法、装置及计算机可读存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362858A (zh) * 2021-07-27 2021-09-07 中国平安人寿保险股份有限公司 语音情感分类方法、装置、设备及介质
CN113362858B (zh) * 2021-07-27 2023-10-31 中国平安人寿保险股份有限公司 语音情感分类方法、装置、设备及介质
CN116687410A (zh) * 2023-08-03 2023-09-05 中日友好医院(中日友好临床医学研究所) 一种慢性病患者的述情障碍评估方法和系统
CN116687410B (zh) * 2023-08-03 2023-11-14 中日友好医院(中日友好临床医学研究所) 一种慢性病患者的述情障碍评估方法和系统

Also Published As

Publication number Publication date
CN111223498A (zh) 2020-06-02

Similar Documents

Publication Publication Date Title
WO2021139108A1 (zh) 情绪智能识别方法、装置、电子设备及存储介质
US9473637B1 (en) Learning generation templates from dialog transcripts
US20190377790A1 (en) Supporting Combinations of Intents in a Conversation
CN111738003B (zh) 命名实体识别模型训练方法、命名实体识别方法和介质
WO2021051516A1 (zh) 基于人工智能的古诗词生成方法、装置、设备及存储介质
US8374881B2 (en) System and method for enriching spoken language translation with dialog acts
US8370127B2 (en) Systems and methods for building asset based natural language call routing application with limited resources
US20160275075A1 (en) Natural Expression Processing Method, Processing and Response Method, Device, and System
WO2021121198A1 (zh) 基于语义相似度的实体关系抽取方法、装置、设备及介质
Mai et al. Enhancing Rasa NLU model for Vietnamese chatbot
US10916242B1 (en) Intent recognition method based on deep learning network
JP7335300B2 (ja) 知識事前訓練モデルの訓練方法、装置及び電子機器
US20220293092A1 (en) Method and apparatus of training natural language processing model, and method and apparatus of processing natural language
WO2021139107A1 (zh) 情感智能识别方法、装置、电子设备及存储介质
CN110428820A (zh) 一种中英文混合语音识别方法及装置
WO2021143206A1 (zh) 单语句自然语言处理方法、装置、计算机设备及可读存储介质
US20230127787A1 (en) Method and apparatus for converting voice timbre, method and apparatus for training model, device and medium
CN113407677A (zh) 评估咨询对话质量的方法、装置、设备和存储介质
CN117033582A (zh) 对话模型的训练方法、装置、电子设备和存储介质
WO2023045186A1 (zh) 意图识别方法、装置、电子设备和存储介质
CN111831832B (zh) 词表构建方法、电子设备及计算机可读介质
US10706086B1 (en) Collaborative-filtering based user simulation for dialog systems
WO2023123892A1 (zh) 一种信息预测模块的构建方法、信息预测方法及相关设备
CN111090720B (zh) 一种热词的添加方法和装置
Chen et al. Fast OOV words incorporation using structured word embeddings for neural network language model

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20912097

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20912097

Country of ref document: EP

Kind code of ref document: A1