WO2020253362A1 - Business processing method, apparatus, device and storage medium based on sentiment analysis - Google Patents
Business processing method, apparatus, device and storage medium based on sentiment analysis
- Publication number
- WO2020253362A1 · PCT/CN2020/086161 · CN2020086161W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- information
- preset
- voice information
- target user
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3343—Query execution using phonetics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3344—Query execution using natural language analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/211—Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
Definitions
- This application relates to the field of artificial intelligence, and in particular to a business processing method, device, equipment and computer-readable storage medium based on sentiment analysis.
- The main purpose of this application is to provide a business processing method, apparatus, device, and computer-readable storage medium based on sentiment analysis, aiming to solve the technical problems of low business handling efficiency at self-service terminals and poor user experience.
- The business processing method based on sentiment analysis includes the following steps:
- Upon receiving a service handling request sent by the terminal, obtain the target voice information of the target user, convert the target voice information into corresponding text information, and determine the emotion category of the target user based on a preset emotion recognition model, the target voice information, and the text information.
- Correspondingly, the present application also provides a business processing apparatus based on sentiment analysis, which includes:
- an emotion determination module, configured to obtain the target voice information of the target user upon receiving a service handling request sent by the terminal, convert the target voice information into corresponding text information, and determine the emotion category of the target user based on a preset emotion recognition model, the target voice information, and the text information;
- a script determination module, configured to determine the business processing flow and the target service script corresponding to the target user based on the text information and the emotion category;
- a processing guidance module, configured to broadcast the target service script and guide the target user to complete the corresponding target business according to the business processing flow.
- In addition, the present application also provides a business processing device based on sentiment analysis. The device includes a processor, a memory, and a sentiment-analysis-based business processing program that is stored in the memory and executable by the processor; when the program is executed by the processor, the steps of the above business processing method based on sentiment analysis are implemented.
- In addition, the present application also provides a computer-readable storage medium storing a sentiment-analysis-based business processing program; when the program is executed by a processor, the steps of the business processing method based on sentiment analysis described above are implemented.
- This application provides a business processing method based on sentiment analysis: upon receiving a service handling request sent by a terminal, the target voice information of the target user is obtained and converted into corresponding text information, and the emotion category of the target user is determined based on a preset emotion recognition model, the target voice information, and the text information; based on the text information and the emotion category, the business processing flow and target service script corresponding to the target user are determined; the target service script is then broadcast, and the target user is guided to complete the corresponding target business according to the business processing flow.
- In other words, this application determines the user's emotion category from the user's voice information, then determines the corresponding business processing flow and target service script according to the emotion category, and then guides the user to complete the corresponding target business according to that flow and script. This improves business handling efficiency and user experience, solving the technical problems of low business handling efficiency and poor user experience at existing self-service terminals.
- FIG. 1 is a schematic diagram of the hardware structure of a business processing device based on sentiment analysis involved in the solution of an embodiment of the application;
- FIG. 2 is a schematic flowchart of a first embodiment of a business processing method based on sentiment analysis in this application;
- FIG. 3 is a schematic flowchart of a second embodiment of a business processing method based on sentiment analysis according to this application;
- FIG. 4 is a schematic flowchart of a third embodiment of a business processing method based on sentiment analysis of this application;
- FIG. 5 is a schematic diagram of functional modules of a first embodiment of a business processing apparatus based on sentiment analysis according to this application.
- The business processing method based on sentiment analysis involved in the embodiments of the present application is mainly applied to a business processing device based on sentiment analysis.
- The business processing device based on sentiment analysis may be a PC, a portable computer, a mobile terminal, or another device with display and processing functions.
- FIG. 1 is a schematic diagram of the hardware structure of a service processing device based on sentiment analysis involved in the solution of an embodiment of the application.
- The service processing device based on sentiment analysis may include a processor 1001 (for example, a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.
- The communication bus 1002 is used to realize connection and communication between these components.
- The user interface 1003 may include a display and an input unit such as a keyboard.
- The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface).
- The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as a disk memory; the memory 1005 may optionally be a storage device independent of the aforementioned processor 1001.
- The hardware structure shown in FIG. 1 does not constitute a limitation on the business processing device based on sentiment analysis, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
- As a computer-readable storage medium, the memory 1005 in FIG. 1 may include an operating system, a network communication module, and a business processing program based on sentiment analysis.
- The network communication module is mainly used to connect to a server and perform data communication with it, while the processor 1001 can call the sentiment-analysis-based business processing program stored in the memory 1005 and execute the business processing method based on sentiment analysis provided by the embodiments of this application.
- The embodiments of the present application provide a business processing method based on sentiment analysis.
- The business processing method based on sentiment analysis includes the following steps:
- Step S10: Upon receiving a service handling request sent by the terminal, obtain the target voice information of the target user, convert the target voice information into corresponding text information, and determine the emotion category of the target user based on a preset emotion recognition model, the target voice information, and the text information.
- Specifically, a customer triggers a service handling request on the terminal, and the terminal sends the service handling request to the server.
- The server responds to the service handling request; that is, upon receiving the request, the server sends a preset initial voice prompt to the terminal.
- The preset initial voice refers to a pre-set welcome prompt, for example: "Dear customer, welcome to the xxx platform. Is there anything I can help you with?"
- The terminal receives the preset initial voice sent by the server and plays it, and then collects the customer's voice information through a preset voice collection device as the target voice information.
- The preset voice collection device may be a recording device on the terminal. The terminal sends the target voice information collected in response to the preset initial voice to the server, and the server receives the target voice information sent by the terminal and handles the business according to it.
- Specifically, the step of converting the target voice information into corresponding text information includes: inputting the target voice information into a preset speech recognition model, and performing speech recognition on the target voice through the preset speech recognition model to obtain the text information corresponding to the target voice information.
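The conversion step above can be sketched as follows. This is a hypothetical illustration only: `PresetSpeechRecognizer` is a stand-in name for the "preset speech recognition model" in the text, and the bytes-to-text lookup replaces real acoustic decoding.

```python
class PresetSpeechRecognizer:
    """Toy recognizer mapping known audio payloads to transcripts."""

    def __init__(self, lookup):
        self._lookup = lookup  # audio bytes -> transcript

    def recognize(self, audio_bytes):
        # A real model would decode acoustic features; here the raw
        # bytes are looked up directly, purely for illustration.
        return self._lookup.get(audio_bytes, "")


def voice_to_text(audio_bytes, model):
    """Convert target voice information into corresponding text information."""
    return model.recognize(audio_bytes)


model = PresetSpeechRecognizer({b"\x01\x02": "I want to transfer money"})
text = voice_to_text(b"\x01\x02", model)
```

A real deployment would wrap an ASR engine behind the same `recognize` interface.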
- Optionally, the target voice information input by the user may also be pre-processed. The pre-processing includes word segmentation, part-of-speech tagging, named entity recognition, coreference resolution, and similar-word expansion.
- Word segmentation, part-of-speech tagging, and named entity recognition are performed using a natural language processing tool trained with a deep neural network.
- Word segmentation refers to segmenting a sequence of Chinese characters into a sequence of words.
- Part-of-speech tagging refers to identifying and tagging words according to their part of speech. Parts of speech include nouns, adverbs, adjectives, verbs, pronouns, and so on; for example, NT stands for temporal nouns, V for verbs, NN for common nouns, PU for punctuation marks, AD for adverbs, and PN for pronouns.
- Named entity recognition refers to identifying named entities in a sentence, such as person names, place names, and organization names. Named entities fall into three broad classes (entities, times, and numbers) and seven categories (person names, place names, organization names, times, dates, currencies, and percentages).
- Coreference resolution refers to eliminating the referential ambiguity of personal pronouns; it is accomplished using dependency parsing (DP), which identifies the dependency relations between the components of a language unit and reveals its syntactic structure.
- Similar-word expansion refers to expanding similar words using the Word2vec technique.
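The pre-processing pipeline described above (word segmentation, part-of-speech tagging, named entity recognition) might be sketched as below. The tiny rule tables and whitespace segmentation are illustrative assumptions; the text describes trained deep-neural-network NLP tools, not these toy rules.

```python
# Assumed toy lookup tables standing in for trained models.
POS_TABLE = {"today": "NT", "quickly": "AD", "run": "V", "he": "PN"}
ENTITY_TABLE = {"Beijing": "place", "Alice": "person"}


def segment(sentence):
    """Toy word segmentation: split on whitespace (real Chinese
    segmentation would use a trained segmenter)."""
    return sentence.split()


def pos_tag(tokens):
    """Tag each token with a part of speech, defaulting to NN (common noun)."""
    return [(tok, POS_TABLE.get(tok.lower(), "NN")) for tok in tokens]


def recognize_entities(tokens):
    """Pick out named entities such as person and place names."""
    return [(tok, ENTITY_TABLE[tok]) for tok in tokens if tok in ENTITY_TABLE]


tokens = segment("Alice run quickly")
tags = pos_tag(tokens)
entities = recognize_entities(tokens)
```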
- The emotion categories include anger, anxiety, happiness, disappointment, surprise, curiosity, and so on.
- Optionally, the emotion category of the user may also be identified using a dictionary-based sentiment analysis method or a pre-trained deep-learning-based emotion classification model.
- The dictionary-based sentiment analysis method refers to constructing sentiment dictionaries, typically by professionals with grammatical sensitivity: a positive sentiment dictionary, a negative sentiment dictionary, and a neutral sentiment dictionary. The emotion-bearing words in a sentence are divided into these three categories according to the dictionaries, and the numbers of positive, negative, and neutral emotion words in the sentence are then compared to evaluate the emotion category of the sentence.
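A minimal sketch of the dictionary-based method just described, assuming tiny example dictionaries in place of professionally constructed ones:

```python
# Assumed toy sentiment dictionaries (illustrative only).
POSITIVE = {"happy", "great", "thanks"}
NEGATIVE = {"angry", "terrible", "slow"}
NEUTRAL = {"okay", "fine"}


def dictionary_sentiment(tokens):
    """Count positive, negative, and neutral emotion words in the
    sentence and pick the category with the most matches."""
    counts = {
        "positive": sum(t in POSITIVE for t in tokens),
        "negative": sum(t in NEGATIVE for t in tokens),
        "neutral": sum(t in NEUTRAL for t in tokens),
    }
    return max(counts, key=counts.get)


label = dictionary_sentiment(["this", "service", "is", "terrible", "and", "slow"])
```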
- The pre-trained emotion classification models include, but are not limited to, Long Short-Term Memory (LSTM) models, Support Vector Machine (SVM) models, Random Forest (RF) models, and Naive Bayes Model (NBM) models.
- Specifically, the training steps of the preset emotion recognition model are as follows: obtain preset training voice information and the corresponding emotion category information, and train on the preset training voice information with the skip-gram neural network model or the continuous bag-of-words model to obtain low-dimensional word vectors; convert the low-dimensional word vectors into corresponding vector expressions by looking up the word vector table, and convert the vector expressions into feature vectors through convolution and pooling operations;
- apply the Hard Tanh function to the feature vectors to extract nonlinear features, obtaining the final features of the preset training voice information;
- generate the preset emotion recognition model according to the final features of the preset training voice information and the corresponding emotion category information.
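The feature-extraction portion of the training steps (word-vector lookup, convolution and pooling, Hard Tanh) can be illustrated with toy numbers. The embedding table and kernel below are assumptions; real values would be learned via skip-gram or CBOW training and backpropagation.

```python
# Assumed toy word -> low-dimensional word vector table.
EMBED = {"hello": [0.5, -2.0], "world": [1.5, 0.2]}


def hard_tanh(x):
    """HardTanh nonlinearity: clip values to the range [-1, 1]."""
    return max(-1.0, min(1.0, x))


def conv_and_pool(vectors, kernel):
    """Dot each word vector with a kernel (a width-1 convolution),
    then max-pool the activations over the sequence."""
    activations = [sum(w * v for w, v in zip(kernel, vec)) for vec in vectors]
    return max(activations)


def extract_feature(words, kernel):
    """Look up vectors, convolve and pool, then apply Hard Tanh."""
    vectors = [EMBED[w] for w in words]
    return hard_tanh(conv_and_pool(vectors, kernel))


feature = extract_feature(["hello", "world"], kernel=[1.0, 1.0])
```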
- Step S20: Based on the text information and the emotion category, determine the business processing flow and the target service script corresponding to the target user.
- In a specific implementation, a library of service scripts corresponding to different emotion categories can be established in advance.
- For example, the service scripts corresponding to the angry emotion category are simple and friendly, such as: "The business you need to handle is X. Two steps are required: the first step is X and the second step is X. Please check whether the above information is correct."
- The service scripts corresponding to the anxious emotion category are of a comforting and guiding type; to prevent online or telephone fraud, anti-fraud awareness information can also be broadcast.
- Different business processing flows can also be set according to different emotion categories. For example, for users in the anxious or angry categories, the most efficient flow, with the least time or the fewest process nodes, is recommended; for happy and joyful users, a flow that adds wealth-management product recommendations or other related business introductions is recommended, to increase sales of wealth-management products.
- Specifically, the server determines the target business that the target user needs to handle according to the text information, such as a transfer, a loan, or a bank card or credit card service, and then determines, according to the emotion category of the target user, the target service script and business processing flow suitable for serving the target user.
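This selection step can be sketched as a lookup, with illustrative mapping tables assumed in place of the pre-built script library and flow rules:

```python
# Assumed emotion-category -> service-script-style table.
SCRIPTS = {
    "angry": "simple and friendly",
    "anxious": "comfort and guidance",
    "happy": "standard with product recommendation",
}

# Assumed emotion-category -> business-processing-flow table.
FLOWS = {
    "anxious": "shortest flow",
    "angry": "shortest flow",
    "happy": "flow with wealth-management recommendation",
}


def plan_service(business, emotion):
    """Pick the service script and processing flow for the recognized
    target business and emotion category, with generic fallbacks."""
    script = SCRIPTS.get(emotion, "standard")
    flow = FLOWS.get(emotion, "standard flow")
    return {"business": business, "script": script, "flow": flow}


plan = plan_service("transfer", "anxious")
```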
- Step S30: Broadcast the target service script, and guide the target user to complete the corresponding target business according to the business processing flow.
- At each operation node of the business processing flow, the corresponding interface to be operated is displayed while the corresponding target service script is played, such as "Place your ID card in the ID card recognition area in front of you."
- The target user is guided through the target business using a target service script that matches the user's current emotion category. That is, once the target service script and business processing flow corresponding to the target user are determined, the guidance information for each step of the business processing flow is added to the target service script accordingly.
- Business types that share the same processing flow, that is, the same process guidance information, can use the same set of service scripts, with only the specific business name or business number replaced.
- Optionally, the current payee information entered by the target user may also be compared with the target user's historical payee information to determine whether the current payee is a frequent transaction contact of the target user. If not, anti-fraud warning information is broadcast, for example: "Fraud crimes have been frequent recently, and the current payee is not one of your frequent contacts; please confirm again whether to perform the current transfer."
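The payee comparison might be sketched as below; the frequency threshold `min_count` is an assumed parameter, since the text only says "frequent contact".

```python
def needs_fraud_warning(current_payee, history, min_count=2):
    """Warn unless the payee appears at least `min_count` times in the
    user's historical payee list, i.e. is a frequent contact.
    `min_count` is an assumed illustrative threshold."""
    return history.count(current_payee) < min_count


history = ["alice", "bob", "alice", "alice"]
warn_new = needs_fraud_warning("carol", history)    # unknown payee -> warn
warn_known = needs_fraud_warning("alice", history)  # frequent contact -> no warning
```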
- Optionally, the background color of the current operation interface can also be adjusted according to the emotion category of the target user, so as to further enhance the user experience.
- This embodiment provides a business processing method based on sentiment analysis: upon receiving a service handling request sent by a terminal, the target voice information of the target user is obtained and converted into corresponding text information, and the emotion category of the target user is determined based on a preset emotion recognition model, the target voice information, and the text information; based on the text information and the emotion category, the business processing flow and target service script corresponding to the target user are determined; and the target service script is broadcast to guide the target user to complete the corresponding target business according to the business processing flow.
- In other words, this application determines the user's emotion category from the user's voice information, then determines the corresponding business processing flow and target service script according to the emotion category, and then guides the user to complete the corresponding target business according to that flow and script, improving business handling efficiency and user experience and solving the technical problems of low business handling efficiency and poor user experience at existing self-service terminals.
- FIG. 3 is a schematic flowchart of a second embodiment of the business processing method based on sentiment analysis in this application.
- In this embodiment, step S30 includes:
- Step S31: Broadcast the target service script, receive the feedback voice information that the target user provides in response to the target service script, input the feedback voice information into a preset speech recognition model, and obtain the feedback text information corresponding to the feedback voice information through the preset speech recognition model.
- Specifically, while the target service script is being played, the target user's feedback voice information is acquired in real time and converted into corresponding feedback text information through the preset speech recognition model.
- The preset speech recognition model is constructed as follows: separately collect voice data of preset dialect systems and of the common (Mandarin) system, extract the voice feature parameters corresponding to each piece of voice data, and assemble the voice feature parameters into a voice feature set; extract voice feature parameters from the voice feature set in a preset proportion and construct an initial speech recognition model from them; train the initial speech recognition model with an iterative algorithm, obtain the recognition accuracy of the trained model, and use a speech recognition model whose recognition accuracy is higher than a preset threshold as the preset speech recognition model.
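These construction steps might be sketched as follows. The nearest-centroid "model" and the numeric feature values are stand-in assumptions; a real system would train an acoustic model on extracted speech feature parameters.

```python
def train_recognizer(samples, threshold=0.8, iterations=3):
    """samples: list of (feature_value, transcript). Builds a centroid
    per transcript over a fixed number of passes (a stand-in for the
    iterative training algorithm), then keeps the model only if its
    accuracy on the samples meets the preset threshold."""
    centroids = {}
    for _ in range(iterations):  # fixed-iteration stand-in for training
        sums, counts = {}, {}
        for x, label in samples:
            sums[label] = sums.get(label, 0.0) + x
            counts[label] = counts.get(label, 0) + 1
        centroids = {l: sums[l] / counts[l] for l in sums}

    def predict(x):
        # Nearest centroid by absolute distance.
        return min(centroids, key=lambda l: abs(centroids[l] - x))

    correct = sum(predict(x) == label for x, label in samples)
    accuracy = correct / len(samples)
    # Only a model whose accuracy exceeds the preset threshold is kept.
    return (predict, accuracy) if accuracy >= threshold else (None, accuracy)


# Dialect and Mandarin feature parameters pooled into one feature set.
feature_set = [(0.1, "ni hao"), (0.2, "ni hao"), (0.9, "hello"), (1.0, "hello")]
model, acc = train_recognizer(feature_set)
```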
- Step S32: According to the feedback text information, determine whether the business processing flow is the one requested by the target user.
- Specifically, based on the feedback text information, it is determined whether the currently determined target service script and business processing flow correspond to the target business requested by the target user. If the feedback text information is "no", "exit", or the like, the process guidance information broadcast with the target service script does not match the business flow of the target business that the target user needs to handle. If the feedback text information is "confirm", "thank you", or the like, the broadcast business flow matches the target business that the target user needs to handle.
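The keyword check in step S32 can be sketched directly from the examples in the text; the exact keyword sets beyond the quoted examples are assumptions.

```python
# Keyword sets follow the examples in the text; extra entries are assumed.
REJECT_WORDS = {"no", "exit"}
CONFIRM_WORDS = {"confirm", "confirmation", "thank you", "thanks"}


def flow_confirmed(feedback_text):
    """Return True if the user confirmed the flow, False if they
    rejected it, and None if the feedback is unclear."""
    text = feedback_text.strip().lower()
    if text in REJECT_WORDS:
        return False
    if text in CONFIRM_WORDS:
        return True
    return None


ok = flow_confirmed("Confirm")
bad = flow_confirmed("exit")
```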
- Step S33: If it is not the business processing flow requested by the target user, re-determine the business processing flow and the corresponding service script for the target business to be handled according to the feedback voice information.
- Specifically, the determination can be redone based on the feedback voice information entered by the target user; for example, if the feedback indicates that the user wants to handle a savings business rather than a transfer, the operation of determining the target service script and the business processing flow corresponding to the savings business is re-executed.
- If the feedback voice information of the target user is an exit instruction, the voice information of the target user is re-acquired, and the operations of determining the business processing flow and the target service script corresponding to the target business are performed again.
- FIG. 4 is a schematic flowchart of a third embodiment of a business processing method based on sentiment analysis in this application.
- Step S10 specifically includes:
- Step S11: Upon receiving the service handling request sent by the terminal, obtain the face information of the target user, and judge from the face information whether the age of the target user is greater than a preset threshold.
- To prevent middle-aged and elderly people who cannot operate self-service terminals from being blocked in their business handling, manual assistance can be provided to the target user when the target user's age is detected to exceed a preset threshold, such as 50 years old.
- Specifically, the face information of the target user is acquired through a camera, and it is determined whether the target user corresponding to the face information is a user requiring assistance, that is, a user whose age exceeds the preset threshold.
- Step S12: If the age of the target user is greater than the preset threshold, generate assistance request information according to the terminal identifier of the current service terminal, and send the assistance request information to the staff, so as to remind the relevant staff to provide manual guidance to the target user.
- Specifically, assistance request information carrying the terminal identifier of the service terminal at which the target user is handling business is generated, for example: "Self-service terminal No. 3 is currently waiting for assistance with business processing; please go to assist as soon as possible." The assistance request information is sent to the relevant staff terminal, such as the terminal of the lobby manager. In this way, relevant staff are reminded in time to help older users handle business at self-service terminals, improving business handling efficiency.
- Step S13: If the age of the target user is not greater than the preset threshold, obtain the target voice information of the target user, convert the target voice information into corresponding text information, and determine the emotion category of the target user based on the preset emotion recognition model, the target voice information, and the text information.
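The age-based routing of steps S11 to S13 can be sketched as below; the age estimate from face information is stubbed as an input, and the threshold of 50 follows the example in the text.

```python
AGE_THRESHOLD = 50  # the "preset threshold" example from the text


def route_user(estimated_age, terminal_id):
    """Route older users to manual assistance and everyone else into
    the normal voice-based flow. `estimated_age` stands in for the
    age judged from face information."""
    if estimated_age > AGE_THRESHOLD:
        return {"action": "request_assistance", "terminal": terminal_id}
    return {"action": "proceed_with_voice_flow", "terminal": terminal_id}


older = route_user(63, "terminal-3")
younger = route_user(28, "terminal-3")
```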
- That is, the user's emotion category is determined according to the user's voice information, the corresponding business processing flow and target service script are then determined according to the emotion category, and the target user is guided to complete the corresponding target business according to the business processing flow and the target service script.
- Further, after step S10, the method also includes:
- if the emotion category of the target user is an anxious emotion category, generating emergency request information according to the terminal identifier of the current service terminal, and sending the emergency request information to the staff, so as to remind the relevant staff to provide the target user with quick assistance.
- Quick assistance can thus be provided to users in emergency situations, such as an emergency service window for a user whose bank card has been stolen, so that the business of freezing the bank card can be handled quickly. That is, when the emotion category of the target user is determined to be anxious, the emergency request information is sent to the relevant staff according to the terminal identifier of the service terminal at which the target user is handling business, so that the staff can come to confirm and verify manually; when it is confirmed that the business the target user needs to handle is indeed an emergency, quick assistance is provided to help the target user complete the target business quickly.
- the embodiment of the present application also provides a business processing device based on sentiment analysis.
- FIG. 5 is a schematic diagram of the functional modules of the first embodiment of the business processing apparatus based on sentiment analysis in this application.
- The business processing apparatus based on sentiment analysis includes:
- the emotion determination module 10, configured to obtain the target voice information of the target user upon receiving a service handling request sent by the terminal, convert the target voice information into corresponding text information, and determine the emotion category of the target user based on a preset emotion recognition model, the target voice information, and the text information;
- the script determination module 20, configured to determine the business processing flow and the target service script corresponding to the target user based on the text information and the emotion category;
- the processing guidance module 30, configured to broadcast the target service script and guide the target user to complete the corresponding target business according to the business processing flow.
- Further, the business processing apparatus based on sentiment analysis also includes:
- the information training module, configured to obtain preset training voice information and the corresponding emotion category information, and train on the preset training voice information with the skip-gram neural network model or the continuous bag-of-words model to obtain low-dimensional word vectors;
- the feature extraction module, configured to convert the low-dimensional word vectors into corresponding vector expressions by looking up the word vector table, convert the vector expressions into feature vectors through convolution and pooling operations, and apply the Hard Tanh function to the feature vectors to extract nonlinear features, obtaining the final features of the preset training voice information;
- the model generation module, configured to generate the preset emotion recognition model according to the final features of the preset training voice information and the corresponding emotion category information.
- Further, the emotion determination module 10 is also configured to:
- input the target voice information into the preset speech recognition model, and perform speech recognition on the target voice through the preset speech recognition model to obtain the text information corresponding to the target voice information.
- Further, the emotion determination module 10 is also configured to:
- train the initial speech recognition model with an iterative algorithm, obtain the recognition accuracy of the trained speech recognition model, and use a speech recognition model whose recognition accuracy is higher than a preset threshold as the preset speech recognition model.
- processing guidance module 30 specifically includes:
- the feedback information acquiring unit is configured to broadcast the target service speech, and receive the feedback voice information of the target user based on the target service speech feedback, and input the feedback speech information into a preset speech recognition model, And obtain the feedback text information corresponding to the feedback voice information through the preset voice recognition model;
- the business process verification unit is configured to determine, according to the feedback text information, whether the business process is the one the target user requested;
- the business process determining unit is configured to, if it is not the business process the target user requested, re-determine the business process corresponding to the target business to be handled and the corresponding service script according to the feedback voice information.
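The broadcast–listen–verify loop of these three units can be sketched as below. Everything here is a hedged stand-in: the confirmation check, the lookup function, and the scripted user replies are invented for illustration, and a real system would use the preset speech recognition model instead of the identity function.

```python
def confirm_process(broadcast, listen, recognize, lookup_process, max_turns=3):
    """Broadcast the service script, recognize the user's spoken reply,
    and re-determine the business process until the user confirms it."""
    process, script = lookup_process(None)               # initial guess
    for _ in range(max_turns):
        broadcast(script)
        feedback_text = recognize(listen())              # speech -> text
        if "yes" in feedback_text:                       # user confirms
            return process
        process, script = lookup_process(feedback_text)  # re-determine
    return process

# scripted user: first corrects the process, then confirms it
replies = iter(["no, I want a card replacement", "yes, that's right"])

def lookup_process(feedback):
    if feedback and "card" in feedback:
        return "card_replacement", "Shall we start your card replacement?"
    return "account_opening", "Shall we open an account for you?"

chosen = confirm_process(
    broadcast=lambda s: None,          # no real audio output here
    listen=lambda: next(replies),
    recognize=lambda audio: audio,     # identity stands in for ASR
    lookup_process=lookup_process,
)
print(chosen)   # card_replacement
```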
- emotion determination module 10 specifically includes:
- a user judging unit configured to obtain face information of the target user upon receiving a service handling request sent by a terminal, and judge, according to the face information, whether the age of the target user is greater than a preset threshold;
- an assistance request unit configured to, if the age of the target user is greater than the preset threshold, generate assistance request information according to the terminal identifier of the current service terminal and send the assistance request information to the staff, so as to remind the relevant staff to provide manual guidance to the target user;
- emotion determination module 10 is also used for:
- if the age of the target user is not greater than the preset threshold, the target voice information of the target user is obtained and converted into corresponding text information, and the emotion category of the target user is determined based on the preset emotion recognition model, the target voice information, and the text information.
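The age-based routing described above can be condensed into one branch: older users trigger a staff-assistance request tagged with the terminal identifier, and everyone else goes through voice capture, speech-to-text, and emotion classification. The threshold, terminal id, and the three callables are illustrative assumptions, not values from the patent.

```python
def route_request(face_age, get_voice, asr, classify_emotion,
                  age_threshold=65, terminal_id="T-01"):
    """Route a service request: users over the age threshold get a staff
    assistance alert; others proceed to voice-based emotion analysis."""
    if face_age > age_threshold:
        # assistance request carries the current terminal's identifier
        return {"action": "assist", "terminal": terminal_id}
    voice = get_voice()                    # capture target voice information
    text = asr(voice)                      # convert voice to text
    emotion = classify_emotion(voice, text)  # preset emotion model stand-in
    return {"action": "self_service", "emotion": emotion}

result = route_request(
    face_age=30,
    get_voice=lambda: b"\x00\x01",                 # placeholder audio bytes
    asr=lambda v: "I want to open an account",     # stand-in recognizer
    classify_emotion=lambda v, t: "calm",          # stand-in classifier
)
print(result)
```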
- emotion determination module 10 is also used for:
- emergency request information is generated according to the terminal identifier of the current service terminal and sent to the staff, so as to remind the relevant staff to provide the target user with prompt assistance.
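A minimal sketch of the emergency-alert step: when the detected emotion category is anxious, build a request message carrying the current terminal's identifier for staff. The message fields, the "anxious" label, and the terminal id are hypothetical; the patent specifies only the behavior, not a wire format.

```python
import json
import time

def build_emergency_request(terminal_id, emotion):
    """If the detected emotion is 'anxious', build an emergency request
    tagged with the current service terminal's identifier for the staff."""
    if emotion != "anxious":
        return None                          # no alert for other emotions
    return json.dumps({
        "type": "emergency_assist",          # hypothetical message type
        "terminal": terminal_id,             # current service terminal id
        "timestamp": int(time.time()),       # when the alert was raised
    })

msg = build_emergency_request("T-07", "anxious")
print(msg)
```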
- each module in the above emotion-analysis-based business processing device corresponds to a step in the above embodiment of the emotion-analysis-based business processing method; their functions and implementation processes are not repeated here.
- the embodiment of the present application also provides a computer-readable storage medium.
- a business processing program based on sentiment analysis is stored on the computer-readable storage medium of the present application.
- when the business processing program based on sentiment analysis is executed by a processor, the steps of the sentiment-analysis-based business processing method described above are implemented.
- the computer-readable storage medium may be non-volatile or volatile.
- for the method implemented when the business processing program based on sentiment analysis is executed, reference may be made to the embodiments of the sentiment-analysis-based business processing method of this application; details are not repeated here.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Mathematical Physics (AREA)
- Multimedia (AREA)
- Child & Adolescent Psychology (AREA)
- Hospice & Palliative Care (AREA)
- Psychiatry (AREA)
- Signal Processing (AREA)
- Machine Translation (AREA)
- Telephonic Communication Services (AREA)
Abstract
Description
Claims (20)
- A business processing method based on emotion analysis, wherein the business processing method based on emotion analysis comprises the following steps: upon receiving a service handling request sent by a terminal, obtaining target voice information of a target user, converting the target voice information into corresponding text information, and determining an emotion category of the target user based on a preset emotion recognition model, the target voice information, and the text information; determining, based on the text information and the emotion category, a business process and a target service script corresponding to the target user; and broadcasting the target service script and guiding, according to the business process, the target user to complete the corresponding target business.
- The business processing method based on emotion analysis of claim 1, wherein before the step of obtaining target voice information of a target user upon receiving a service handling request sent by a terminal, converting the target voice information into corresponding text information, and determining the emotion category of the target user based on the preset emotion recognition model, the target voice information, and the text information, the method further comprises: obtaining preset training voice information and corresponding emotion category information, and training the preset training voice information through a skip-gram neural network model or a continuous bag-of-words model to obtain low-dimensional word vectors; converting the low-dimensional word vectors into corresponding vector representations by looking up a word-vector table, converting the vector representations into feature vectors through convolution and pooling operations, and extracting nonlinear features from the feature vectors with the Hard Tanh function to obtain final features of the preset training voice information; and generating the preset emotion recognition model according to the final features of the preset training voice information and the corresponding emotion category information.
- The business processing method based on emotion analysis of claim 1, wherein the step of converting the target voice information into corresponding text information comprises: inputting the target voice information into a preset speech recognition model, and performing target speech recognition through the preset speech recognition model to obtain the text information corresponding to the target voice information.
- The business processing method based on emotion analysis of claim 3, wherein before the step of inputting the target voice information into the preset speech recognition model and performing target speech recognition through the preset speech recognition model to obtain the text information corresponding to the target voice information, the method further comprises: separately collecting read-aloud speech data in preset dialects and in Mandarin, extracting the speech feature parameters corresponding to each item of speech data, and assembling the speech feature parameters into a speech feature set; extracting a preset proportion of the speech feature parameters from the speech feature set, and constructing an initial speech recognition model from the extracted speech feature parameters; and training the initial speech recognition model through an iterative algorithm, obtaining the speech recognition accuracy of the trained speech recognition model, and using a speech recognition model whose recognition accuracy is higher than a preset threshold as the preset speech recognition model.
- The business processing method based on emotion analysis of claim 4, wherein the step of broadcasting the target service script and guiding, according to the business process, the target user to complete the corresponding target business specifically comprises: broadcasting the target service script, receiving feedback voice information that the target user gives in response to the target service script, inputting the feedback voice information into the preset speech recognition model, and obtaining, through the preset speech recognition model, feedback text information corresponding to the feedback voice information; determining, according to the feedback text information, whether the business process is the business process the target user requested; and if it is not the business process the target user requested, re-determining, according to the feedback voice information, the business process corresponding to the target business to be handled and the corresponding service script.
- The business processing method based on emotion analysis of claim 1, wherein after the step of obtaining target voice information of a target user upon receiving a service handling request sent by a terminal, converting the target voice information into corresponding text information, and determining the emotion category of the target user based on the preset emotion recognition model, the target voice information, and the text information, the method further comprises: judging whether the emotion category of the target user is an anxious emotion category; and if the emotion category of the target user is judged to be an anxious emotion category, generating emergency request information according to the terminal identifier of the current service terminal, and sending the emergency request information to the staff, so as to remind the relevant staff to provide the target user with prompt assistance.
- The business processing method based on emotion analysis of any one of claims 1 to 6, wherein the step of obtaining target voice information of a target user upon receiving a service handling request sent by a terminal, converting the target voice information into corresponding text information, and determining the emotion category of the target user based on the preset emotion recognition model, the target voice information, and the text information comprises: upon receiving a service handling request sent by a terminal, obtaining face information of the target user, and judging, according to the face information, whether the age of the target user is greater than a preset threshold; if the age of the target user is greater than the preset threshold, generating assistance request information according to the terminal identifier of the current service terminal, and sending the assistance request information to the staff, so as to remind the relevant staff to provide manual guidance to the target user; and if the age of the target user is not greater than the preset threshold, obtaining the target voice information of the target user, converting the target voice information into corresponding text information, and determining the emotion category of the target user based on the preset emotion recognition model, the target voice information, and the text information.
- A business processing device based on emotion analysis, wherein the business processing device based on emotion analysis comprises a processor, a memory, and a business processing program based on emotion analysis that is stored on the memory and executable by the processor, wherein when the business processing program based on emotion analysis is executed by the processor, the following steps are implemented: upon receiving a service handling request sent by a terminal, obtaining target voice information of a target user, converting the target voice information into corresponding text information, and determining an emotion category of the target user based on a preset emotion recognition model, the target voice information, and the text information; determining, based on the text information and the emotion category, a business process and a target service script corresponding to the target user; and broadcasting the target service script and guiding, according to the business process, the target user to complete the corresponding target business.
- The business processing device based on emotion analysis of claim 8, wherein before the step of obtaining target voice information of a target user upon receiving a service handling request sent by a terminal, converting the target voice information into corresponding text information, and determining the emotion category of the target user based on the preset emotion recognition model, the target voice information, and the text information, the business processing program based on emotion analysis is further executed by the processor to implement: obtaining preset training voice information and corresponding emotion category information, and training the preset training voice information through a skip-gram neural network model or a continuous bag-of-words model to obtain low-dimensional word vectors; converting the low-dimensional word vectors into corresponding vector representations by looking up a word-vector table, converting the vector representations into feature vectors through convolution and pooling operations, and extracting nonlinear features from the feature vectors with the Hard Tanh function to obtain final features of the preset training voice information; and generating the preset emotion recognition model according to the final features of the preset training voice information and the corresponding emotion category information.
- The business processing device based on emotion analysis of claim 8, wherein the business processing program based on emotion analysis is executed by the processor to implement the step of converting the target voice information into corresponding text information, comprising: inputting the target voice information into a preset speech recognition model, and performing target speech recognition through the preset speech recognition model to obtain the text information corresponding to the target voice information.
- The business processing device based on emotion analysis of claim 10, wherein before the step of inputting the target voice information into the preset speech recognition model and performing target speech recognition through the preset speech recognition model to obtain the text information corresponding to the target voice information, the business processing program based on emotion analysis is further executed by the processor to implement: separately collecting read-aloud speech data in preset dialects and in Mandarin, extracting the speech feature parameters corresponding to each item of speech data, and assembling the speech feature parameters into a speech feature set; extracting a preset proportion of the speech feature parameters from the speech feature set, and constructing an initial speech recognition model from the extracted speech feature parameters; and training the initial speech recognition model through an iterative algorithm, obtaining the speech recognition accuracy of the trained speech recognition model, and using a speech recognition model whose recognition accuracy is higher than a preset threshold as the preset speech recognition model.
- The business processing device based on emotion analysis of claim 11, wherein the business processing program based on emotion analysis is executed by the processor to implement the step of broadcasting the target service script and guiding, according to the business process, the target user to complete the corresponding target business, which specifically comprises: broadcasting the target service script, receiving feedback voice information that the target user gives in response to the target service script, inputting the feedback voice information into the preset speech recognition model, and obtaining, through the preset speech recognition model, feedback text information corresponding to the feedback voice information; determining, according to the feedback text information, whether the business process is the business process the target user requested; and if it is not the business process the target user requested, re-determining, according to the feedback voice information, the business process corresponding to the target business to be handled and the corresponding service script.
- The business processing device based on emotion analysis of claim 8, wherein after the step of obtaining target voice information of a target user upon receiving a service handling request sent by a terminal, converting the target voice information into corresponding text information, and determining the emotion category of the target user based on the preset emotion recognition model, the target voice information, and the text information, the business processing program based on emotion analysis is further executed by the processor to implement: judging whether the emotion category of the target user is an anxious emotion category; and if the emotion category of the target user is judged to be an anxious emotion category, generating emergency request information according to the terminal identifier of the current service terminal, and sending the emergency request information to the staff, so as to remind the relevant staff to provide the target user with prompt assistance.
- The business processing device based on emotion analysis of any one of claims 8 to 13, wherein the business processing program based on emotion analysis is executed by the processor to implement the step of obtaining target voice information of a target user upon receiving a service handling request sent by a terminal, converting the target voice information into corresponding text information, and determining the emotion category of the target user based on the preset emotion recognition model, the target voice information, and the text information, comprising: upon receiving a service handling request sent by a terminal, obtaining face information of the target user, and judging, according to the face information, whether the age of the target user is greater than a preset threshold; if the age of the target user is greater than the preset threshold, generating assistance request information according to the terminal identifier of the current service terminal, and sending the assistance request information to the staff, so as to remind the relevant staff to provide manual guidance to the target user; and if the age of the target user is not greater than the preset threshold, obtaining the target voice information of the target user, converting the target voice information into corresponding text information, and determining the emotion category of the target user based on the preset emotion recognition model, the target voice information, and the text information.
- A computer-readable storage medium, wherein a business processing program based on emotion analysis is stored on the computer-readable storage medium, and when the business processing program based on emotion analysis is executed by a processor, the following steps are implemented: upon receiving a service handling request sent by a terminal, obtaining target voice information of a target user, converting the target voice information into corresponding text information, and determining an emotion category of the target user based on a preset emotion recognition model, the target voice information, and the text information; determining, based on the text information and the emotion category, a business process and a target service script corresponding to the target user; and broadcasting the target service script and guiding, according to the business process, the target user to complete the corresponding target business.
- The computer-readable storage medium of claim 15, wherein before the step of obtaining target voice information of a target user upon receiving a service handling request sent by a terminal, converting the target voice information into corresponding text information, and determining the emotion category of the target user based on the preset emotion recognition model, the target voice information, and the text information, the business processing program based on emotion analysis is further executed by the processor to implement: obtaining preset training voice information and corresponding emotion category information, and training the preset training voice information through a skip-gram neural network model or a continuous bag-of-words model to obtain low-dimensional word vectors; converting the low-dimensional word vectors into corresponding vector representations by looking up a word-vector table, converting the vector representations into feature vectors through convolution and pooling operations, and extracting nonlinear features from the feature vectors with the Hard Tanh function to obtain final features of the preset training voice information; and generating the preset emotion recognition model according to the final features of the preset training voice information and the corresponding emotion category information.
- The computer-readable storage medium of claim 15, wherein the business processing program based on emotion analysis is executed by the processor to implement the step of converting the target voice information into corresponding text information, comprising: inputting the target voice information into a preset speech recognition model, and performing target speech recognition through the preset speech recognition model to obtain the text information corresponding to the target voice information.
- The computer-readable storage medium of claim 17, wherein before the step of inputting the target voice information into the preset speech recognition model and performing target speech recognition through the preset speech recognition model to obtain the text information corresponding to the target voice information, the business processing program based on emotion analysis is further executed by the processor to implement: separately collecting read-aloud speech data in preset dialects and in Mandarin, extracting the speech feature parameters corresponding to each item of speech data, and assembling the speech feature parameters into a speech feature set; extracting a preset proportion of the speech feature parameters from the speech feature set, and constructing an initial speech recognition model from the extracted speech feature parameters; and training the initial speech recognition model through an iterative algorithm, obtaining the speech recognition accuracy of the trained speech recognition model, and using a speech recognition model whose recognition accuracy is higher than a preset threshold as the preset speech recognition model.
- The computer-readable storage medium of claim 18, wherein the business processing program based on emotion analysis is executed by the processor to implement the step of broadcasting the target service script and guiding, according to the business process, the target user to complete the corresponding target business, which specifically comprises: broadcasting the target service script, receiving feedback voice information that the target user gives in response to the target service script, inputting the feedback voice information into the preset speech recognition model, and obtaining, through the preset speech recognition model, feedback text information corresponding to the feedback voice information; determining, according to the feedback text information, whether the business process is the business process the target user requested; and if it is not the business process the target user requested, re-determining, according to the feedback voice information, the business process corresponding to the target business to be handled and the corresponding service script.
- The computer-readable storage medium of claim 15, wherein after the step of obtaining target voice information of a target user upon receiving a service handling request sent by a terminal, converting the target voice information into corresponding text information, and determining the emotion category of the target user based on the preset emotion recognition model, the target voice information, and the text information, the business processing program based on emotion analysis is further executed by the processor to implement: judging whether the emotion category of the target user is an anxious emotion category; and if the emotion category of the target user is judged to be an anxious emotion category, generating emergency request information according to the terminal identifier of the current service terminal, and sending the emergency request information to the staff, so as to remind the relevant staff to provide the target user with prompt assistance.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910537349.1 | 2019-06-20 | ||
CN201910537349.1A CN110379445A (zh) | 2019-06-20 | 2019-06-20 | Business processing method, apparatus, device and storage medium based on emotion analysis |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020253362A1 true WO2020253362A1 (zh) | 2020-12-24 |
Family
ID=68249062
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/086161 WO2020253362A1 (zh) | 2019-06-20 | 2020-04-22 | 基于情绪分析的业务处理方法、装置、设备及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110379445A (zh) |
WO (1) | WO2020253362A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115471141A (zh) * | 2022-11-02 | 2022-12-13 | 成都飞机工业(集团)有限责任公司 | Business process cycle management and control method, apparatus, device and medium |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110379445A (zh) | 2019-06-20 | 2019-10-25 | 深圳壹账通智能科技有限公司 | Business processing method, apparatus, device and storage medium based on emotion analysis |
CN111026867A (zh) * | 2019-11-28 | 2020-04-17 | 杭州飞步科技有限公司 | Customer complaint handling method and apparatus, electronic device, and storage medium |
CN111177308B (zh) * | 2019-12-05 | 2023-07-18 | 上海云洽信息技术有限公司 | Method for recognizing emotion in text content |
CN111062332A (zh) * | 2019-12-18 | 2020-04-24 | 秒针信息技术有限公司 | Information pushing method and apparatus |
CN111179903A (zh) * | 2019-12-30 | 2020-05-19 | 珠海格力电器股份有限公司 | Speech recognition method, apparatus, storage medium and electrical appliance |
CN113539275A (zh) * | 2020-04-22 | 2021-10-22 | 北京有限元科技有限公司 | Method, apparatus and storage medium for determining service scripts |
CN111540358B (zh) * | 2020-04-26 | 2023-05-26 | 云知声智能科技股份有限公司 | Human-computer interaction method, apparatus, device and storage medium |
WO2021217769A1 (zh) * | 2020-04-27 | 2021-11-04 | 平安科技(深圳)有限公司 | Reply method and apparatus based on emotion recognition, computer device, and storage medium |
CN112201277B (zh) * | 2020-09-29 | 2024-03-22 | 中国银行股份有限公司 | Voice response method, apparatus and device, and computer-readable storage medium |
CN112612878A (zh) * | 2020-12-17 | 2021-04-06 | 大唐融合通信股份有限公司 | Customer service information providing method, electronic device and apparatus |
CN112949708B (zh) * | 2021-02-26 | 2023-10-24 | 平安科技(深圳)有限公司 | Emotion recognition method and apparatus, computer device, and storage medium |
CN112951429A (zh) * | 2021-03-25 | 2021-06-11 | 浙江连信科技有限公司 | Information processing method and apparatus for psychological crisis screening of primary and secondary school students |
CN113345419B (zh) * | 2021-06-30 | 2022-05-27 | 广西电网有限责任公司 | Speech translation method and system based on dialect accents, and readable storage medium |
CN113516183B (zh) * | 2021-07-05 | 2024-04-16 | 深圳小湃科技有限公司 | Fault response method, system, device and storage medium |
CN113609273A (zh) * | 2021-08-12 | 2021-11-05 | 云知声(上海)智能科技有限公司 | Machine service-script configuration method, apparatus, electronic device and storage medium |
CN113743126B (zh) * | 2021-11-08 | 2022-06-14 | 北京博瑞彤芸科技股份有限公司 | Intelligent interaction method and apparatus based on user emotion |
CN115022395B (zh) * | 2022-05-27 | 2023-08-08 | 艾普科创(北京)控股有限公司 | Business video pushing method, apparatus, electronic device and storage medium |
CN115171284B (zh) * | 2022-07-01 | 2023-12-26 | 国网汇通金财(北京)信息科技有限公司 | Care method and apparatus for the elderly |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080096533A1 (en) * | 2006-10-24 | 2008-04-24 | Kallideas Spa | Virtual Assistant With Real-Time Emotions |
CN102802114A (zh) * | 2012-06-20 | 2012-11-28 | 北京语言大学 | Method and system for agent screening using speech |
CN105895101A (zh) * | 2016-06-08 | 2016-08-24 | 国网上海市电力公司 | Speech processing device and processing method for an intelligent electric power auxiliary service system |
CN109767791A (zh) * | 2019-03-21 | 2019-05-17 | 中国—东盟信息港股份有限公司 | Speech emotion recognition and application system for call center calls |
CN109815494A (zh) * | 2019-01-16 | 2019-05-28 | 中民乡邻投资控股有限公司 | Question-answering service method based on customer emotion |
CN110379445A (zh) * | 2019-06-20 | 2019-10-25 | 深圳壹账通智能科技有限公司 | Business processing method, apparatus, device and storage medium based on emotion analysis |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10037767B1 (en) * | 2017-02-01 | 2018-07-31 | Wipro Limited | Integrated system and a method of identifying and learning emotions in conversation utterances |
CN107705807B (zh) * | 2017-08-24 | 2019-08-27 | 平安科技(深圳)有限公司 | Speech quality inspection method, apparatus, device and storage medium based on emotion recognition |
CN108427722A (zh) * | 2018-02-09 | 2018-08-21 | 卫盈联信息技术(深圳)有限公司 | Intelligent interaction method, electronic device and storage medium |
CN109033257A (zh) * | 2018-07-06 | 2018-12-18 | 中国平安人寿保险股份有限公司 | Service script recommendation method, apparatus, computer device and storage medium |
CN109389971B (zh) * | 2018-08-17 | 2022-06-17 | 深圳壹账通智能科技有限公司 | Insurance recording quality inspection method, apparatus, device and medium based on speech recognition |
-
2019
- 2019-06-20 CN CN201910537349.1A patent/CN110379445A/zh active Pending
-
2020
- 2020-04-22 WO PCT/CN2020/086161 patent/WO2020253362A1/zh active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080096533A1 (en) * | 2006-10-24 | 2008-04-24 | Kallideas Spa | Virtual Assistant With Real-Time Emotions |
CN102802114A (zh) * | 2012-06-20 | 2012-11-28 | 北京语言大学 | Method and system for agent screening using speech |
CN105895101A (zh) * | 2016-06-08 | 2016-08-24 | 国网上海市电力公司 | Speech processing device and processing method for an intelligent electric power auxiliary service system |
CN109815494A (zh) * | 2019-01-16 | 2019-05-28 | 中民乡邻投资控股有限公司 | Question-answering service method based on customer emotion |
CN109767791A (zh) * | 2019-03-21 | 2019-05-17 | 中国—东盟信息港股份有限公司 | Speech emotion recognition and application system for call center calls |
CN110379445A (zh) * | 2019-06-20 | 2019-10-25 | 深圳壹账通智能科技有限公司 | Business processing method, apparatus, device and storage medium based on emotion analysis |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115471141A (zh) * | 2022-11-02 | 2022-12-13 | 成都飞机工业(集团)有限责任公司 | Business process cycle management and control method, apparatus, device and medium |
CN115471141B (zh) * | 2022-11-02 | 2023-03-24 | 成都飞机工业(集团)有限责任公司 | Business process cycle management and control method, apparatus, device and medium |
Also Published As
Publication number | Publication date |
---|---|
CN110379445A (zh) | 2019-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020253362A1 (zh) | Business processing method, apparatus, device and storage medium based on emotion analysis | |
WO2019153522A1 (zh) | Intelligent interaction method, electronic device and storage medium | |
US9740677B2 (en) | Methods and systems for analyzing communication situation based on dialogue act information | |
US10977452B2 (en) | Multi-lingual virtual personal assistant | |
US11500917B2 (en) | Providing a summary of a multimedia document in a session | |
US7860705B2 (en) | Methods and apparatus for context adaptation of speech-to-speech translation systems | |
US9753914B2 (en) | Natural expression processing method, processing and response method, device, and system | |
US9792279B2 (en) | Methods and systems for analyzing communication situation based on emotion information | |
WO2021042904A1 (zh) | Conversation intent recognition method and apparatus, computer device, and storage medium | |
KR20190039572A (ko) | 다음 사용자 프롬프트 유형 선택하기 | |
CN111540353B (zh) | Semantic understanding method, apparatus, device and storage medium | |
US9792909B2 (en) | Methods and systems for recommending dialogue sticker based on similar situation detection | |
CN111241237A (zh) | Intelligent question-answering data processing method and apparatus based on operation and maintenance business | |
CN107967250B (zh) | Information processing method and apparatus | |
Adel et al. | Features for factored language models for code-Switching speech. | |
US11163961B2 (en) | Detection of relational language in human-computer conversation | |
CN110223134B (zh) | Product recommendation method based on speech recognition, and related devices | |
CN112699686A (zh) | Semantic understanding method, apparatus, device and medium based on a task-oriented dialogue system | |
CN114817507A (zh) | Reply recommendation method, apparatus, device and storage medium based on intent recognition | |
TW202133027A (zh) | Human-machine collaborative dialogue system and method | |
CN115602160A (zh) | Business handling method and apparatus based on speech recognition, and electronic device | |
CN110765242A (zh) | Method, apparatus and system for providing customer service information | |
CN114418320A (zh) | Customer service quality evaluation method, apparatus, device, medium and program product | |
CN112800189A (zh) | Human-computer interaction method and apparatus, intelligent robot, and storage medium | |
CN109359181B (zh) | Negative emotion cause recognition method, device and computer-readable storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20825616 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20825616 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.08.2022) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20825616 Country of ref document: EP Kind code of ref document: A1 |