CN110222333A - A kind of voice interactive method, device and relevant device - Google Patents
- Publication number
- CN110222333A CN110222333A CN201910421285.9A CN201910421285A CN110222333A CN 110222333 A CN110222333 A CN 110222333A CN 201910421285 A CN201910421285 A CN 201910421285A CN 110222333 A CN110222333 A CN 110222333A
- Authority
- CN
- China
- Prior art keywords
- client
- voice
- customer
- dialogue
- term
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/374—Thesaurus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/906—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/237—Lexical tools
- G06F40/247—Thesauruses; Synonyms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
Embodiments of the present disclosure disclose a voice interaction method, a device, and a storage medium. The method comprises: establishing a user profile for a client, and determining the client's classification according to the user profile; formulating a collection strategy based on the client's classification, and conducting a voice call with the client according to the collection strategy; converting the customer speech received during the voice call into text information and then generating word vectors; and parsing the word vectors with a long short-term memory network to obtain a parsing result of the customer speech. With the embodiments of the present disclosure, the accuracy of speech parsing can be improved and collection can be carried out effectively.
Description
Technical field
This disclosure relates to the fields of artificial intelligence and voice interaction, and in particular to a voice interaction method, a device, related equipment, and a storage medium.
Background
With the rapid development of artificial intelligence, voice interaction has gradually been adopted across industries, for example in intelligent navigation systems, intelligent customer-service systems, and robotics. In a traditional manual collection (debt-collection) customer-service system, collection efficiency is affected by individual factors such as the mood and state of the collection agent; building an intelligent collection system with voice interaction technology can avoid collection failures caused by such individual factors. However, in an intelligent collection customer-service system, the correct meaning of the customer's speech is not always parsed accurately during speech parsing, and the accuracy of speech parsing needs improvement.
Summary of the invention
Embodiments of the present disclosure provide a voice interaction technique.
In a first aspect, a voice interaction method is disclosed, comprising:
establishing a user profile for a client, and determining the client's classification according to the user profile;
formulating a collection strategy based on the client's classification, and conducting a voice call with the client according to the collection strategy;
converting the customer speech received during the voice call into text information and then generating word vectors;
parsing the word vectors with a long short-term memory network to obtain a parsing result of the customer speech.
In one possible implementation, establishing a user profile for the client and determining the client's classification according to the user profile comprises:
obtaining customer data and analyzing the customer data, and establishing a user profile for each client based on the analysis results;
determining the client's classification according to the user profile.
In one possible implementation, formulating a collection strategy based on the client's classification and conducting a voice call with the client according to the collection strategy comprises:
formulating, according to the client's classification, a collection strategy specifying the time and frequency of voice calls with the client, and conducting voice calls with the client according to the collection strategy.
In one possible implementation, converting the customer speech received during the voice call into text information and then generating word vectors comprises:
determining the lexicon of the corresponding customer group according to the client's classification, wherein different client classifications correspond to lexicons of different customer groups, and the lexicon of each customer group includes words particular to that group;
segmenting the text information converted from the customer speech based on the determined lexicon, and then generating word vectors.
In one possible implementation, parsing the word vectors with the long short-term memory network to obtain the parsing result of the customer speech comprises:
building a long short-term memory network, and introducing an attention mechanism into the long short-term memory network;
parsing the word vectors with the trained long short-term memory network to obtain the parsing result, wherein the training set of the long short-term memory network includes a corpus of dialogues from collection scenarios.
In one possible implementation, after parsing the word vectors with the long short-term memory network to obtain the parsing result of the customer speech, the method comprises:
providing a dialogue response text according to the parsing result of the customer speech.
In one possible implementation, providing a dialogue response text according to the parsing result of the customer speech comprises:
inputting the parsing result into a pre-built dialogue management model, matching it against the dialogue type labels in the dialogue management model, and outputting a dialogue response text based on the matching result; wherein each type of dialogue in the dialogue management model is assigned a dialogue type label.
In one possible implementation, inputting the parsing result into the pre-built dialogue management model, matching it against the dialogue type labels in the dialogue management model, and outputting a dialogue response text based on the matching result, wherein each type of dialogue in the dialogue management model is assigned a dialogue type label, comprises:
inputting the parsing result into the dialogue management model and matching it against the set labels in the dialogue management model, to obtain a matching probability between the parsing result and each set label;
selecting the dialogue mode under the set label with the highest matching probability to output the dialogue response text.
In one possible implementation, the dialogue response text is selected according to the client's classification.
In one possible implementation, the dialogue management model labels the result of the dialogue, and the dialogue and its result are saved, wherein the result of the dialogue includes: collection succeeded, collection failed, awaiting repayment, and deferred repayment.
In one possible implementation, the collection strategy is adjusted according to the dialogue result labeled by the dialogue management model.
In a second aspect, a voice interaction device is disclosed, comprising:
an establishing unit, configured to establish a user profile for a client and determine the client's classification according to the user profile;
a formulating unit, configured to formulate a collection strategy based on the client's classification and conduct a voice call with the client according to the collection strategy;
a converting unit, configured to convert the customer speech received during the voice call into text information and then generate word vectors;
a parsing unit, configured to parse the word vectors with a long short-term memory network to obtain a parsing result of the customer speech.
Optionally, the establishing unit is further configured to:
obtain customer data and analyze the customer data, and establish a user profile for each client based on the analysis results;
determine the client's classification according to the user profile.
Optionally, the formulating unit is further configured to:
formulate, according to the client's classification, a collection strategy specifying the time and frequency of voice calls with the client, and conduct voice calls with the client according to the collection strategy.
Optionally, the converting unit is further configured to:
determine the lexicon of the corresponding customer group according to the client's classification, wherein different client classifications correspond to lexicons of different customer groups, and the lexicon of each customer group includes words particular to that group;
segment the text information converted from the customer speech based on the determined lexicon, and then generate word vectors.
Optionally, the parsing unit is further configured to:
build a long short-term memory network, and introduce an attention mechanism into the long short-term memory network;
parse the word vectors with the trained long short-term memory network to obtain the parsing result, wherein the training set of the long short-term memory network includes a corpus of dialogues from collection scenarios.
Optionally, the device is further configured to:
provide a dialogue response text according to the parsing result of the customer speech.
Optionally, the device is further configured to:
input the parsing result into a pre-built dialogue management model, match it against the dialogue type labels in the dialogue management model, and output a dialogue response text based on the matching result; wherein each type of dialogue in the dialogue management model is assigned a dialogue type label.
Optionally, the device is further configured to:
input the parsing result into the dialogue management model and match it against the set labels in the dialogue management model, to obtain a matching probability between the parsing result and each set label;
select the dialogue mode under the set label with the highest matching probability to output the dialogue response text.
Optionally, the device is further configured to:
select the dialogue response text according to the client's classification.
Optionally, the device is further configured to:
label the result of the dialogue with the dialogue management model, and save the dialogue and its result, wherein the result of the dialogue includes: collection succeeded, collection failed, awaiting repayment, and deferred repayment.
In a third aspect, a voice interaction apparatus is disclosed, including a processor and a memory, wherein the memory is configured to store computer program code, and the processor is configured to call the computer program code to execute the method in the first aspect or any possible implementation of the first aspect.
In a fourth aspect, a computer-readable storage medium is disclosed, the computer storage medium storing computer-readable instructions; when the instructions are called by a processor, the processor executes the method in the first aspect or any possible implementation of the first aspect.
In the embodiments of the present disclosure, a user profile can be established for a client, and the client's classification determined according to the user profile; a collection strategy is formulated based on the client's classification, and a voice call is conducted with the client according to the collection strategy; the customer speech received during the voice call is converted into text information and word vectors are then generated; the word vectors are parsed with a long short-term memory network to obtain a parsing result of the customer speech. In this way, a collection strategy can be formulated for each customer group, and the client's speaking habits and the most suitable time period for collection can be learned from the user profile, making collection smoother and more efficient. The lexicon of the corresponding customer group is determined according to the client's classification, wherein different client classifications correspond to lexicons of different customer groups, and the lexicon of each customer group includes words particular to that group; the text information converted from the customer speech is segmented based on the determined lexicon, and word vectors are then generated. Segmenting according to each client's speaking habits yields more accurate segmentation. A long short-term memory network is built, and an attention mechanism is introduced into it; the word vectors are parsed with the trained long short-term memory network to obtain the parsing result, wherein the training set of the long short-term memory network includes a corpus of dialogues from collection scenarios. The attention mechanism strengthens attention to keywords and weakens attention to unimportant words, improving the accuracy of semantic parsing; and the neural network with bidirectional long short-term memory units performs semantic parsing without hand-designed features, improving the efficiency of semantic parsing.
Brief description of the drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly described below.
In the drawings:
Fig. 1 is a schematic architecture diagram of a voice interaction system to which embodiments of the present disclosure apply;
Fig. 2 is a schematic diagram of a voice interaction system provided by an embodiment of the present disclosure;
Fig. 3 is a schematic flowchart of a voice interaction method provided by an embodiment of the present disclosure;
Fig. 4 is a schematic flowchart of At-BLSTM-based voice interaction provided by an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of a voice interaction device provided by an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of a voice interaction apparatus provided by an embodiment of the present disclosure.
Specific embodiment
The technical solutions in the embodiments of the present disclosure are described clearly below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure.
It should also be understood that the terms used in this specification are for the purpose of describing specific embodiments only and are not intended to limit the disclosure.
It will be further appreciated that the term "and/or" used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
In specific implementations, the technical solutions described in the embodiments of the present disclosure can be realized by a terminal device with language-processing capability, such as a mobile phone, desktop computer, laptop computer, or wearable device, or by a server or system, which is not limited here. For ease of understanding, the executing subject of the voice interaction method is hereinafter referred to as the voice interaction device.
An embodiment of the present disclosure provides a voice interaction method, comprising: establishing a user profile for a client, and determining the client's classification according to the user profile; formulating a collection strategy based on the client's classification, and conducting a voice call with the client according to the collection strategy; converting the customer speech received during the voice call into text information and then generating word vectors; and parsing the word vectors with a long short-term memory network to obtain a parsing result of the customer speech.
Embodiments of the present disclosure also provide a corresponding voice interaction device, computer-readable storage medium, and computer program product, each described in detail below.
The voice interaction system architecture to which embodiments of the present disclosure apply is described below. Referring to Fig. 1, Fig. 1 is an exemplary architecture diagram of a system applying the technical solution provided by an embodiment of the present disclosure. As shown in Fig. 1, the voice interaction system may include one or more servers and multiple terminal devices, in which:
The server can communicate with the terminal devices over the Internet. Specifically, a terminal device is provided with a voice collector (such as a microphone or a microphone array); the terminal device collects speech through the voice collector and sends the collected speech to the server, or it may further process the speech, for example by extracting features, and send the processing result to the server. After the server receives the speech or the speech-processing result sent by the terminal device, it can perform voice interaction based on the received information and send the voice interaction result, or a further processing result or operation instruction obtained from the voice interaction result, to the terminal device.
The server may include, but is not limited to, a background server, a component server, a voice interaction system server, or a voice interaction software server; the server sends the voice interaction result to the terminal. The terminal device can install and run a related client (such as a voice interaction client). A client is a program that corresponds to a server and provides local services to the user. Here, the local services may include, but are not limited to: collecting speech, providing a data-collection interface, displaying speech-processing results, displaying voice interaction results, and so on.
Specifically, the client may include: an application running locally, a function running in a web browser (also known as a Web App), and so on. For the client, a corresponding server must be running to provide one or more functions such as speech processing, speech feature extraction, voice interaction, and intelligent collection based on voice interaction.
The terminal device in the embodiments of the present disclosure may include, but is not limited to, any electronic product based on an intelligent operating system that can interact with a user through input devices such as a keyboard, virtual keyboard, touchpad, touch screen, or voice-control device, for example a smartphone, tablet computer, or PC. The intelligent operating system includes, but is not limited to, any operating system that enriches device functions by providing various mobile applications to a mobile device, such as Android, iOS, or Windows Phone.
It should be noted that the architecture of the voice interaction system to which the embodiments of the present disclosure apply is not limited to the example shown in Fig. 1.
The voice interaction system provided by an embodiment of the present disclosure is described below with reference to Fig. 2. The voice interaction system shown in Fig. 2 includes: a data analysis module, a call module, a voice interaction module, and a dialogue management module; wherein,
the data analysis module is configured to obtain customer data and analyze the customer data, and to establish a user profile for each client based on the analysis results, so as to classify the clients and formulate collection strategies;
the call module is configured to conduct voice calls with clients according to the collection strategy;
the voice interaction module is configured to parse, using voice interaction technology, the customer speech received during a voice call with a client, to obtain a parsing result of the customer speech;
the dialogue management module is configured to provide a dialogue response text according to the parsing result of the customer speech.
The data analysis module triggers the call module according to the specified collection strategy, so that it conducts a voice call with the client according to the collection strategy; after the call module connects with the client, the voice interaction module carries out voice interaction with the client and parses the meaning of the received customer speech; after parsing, the parsing result is sent to the dialogue management module, which provides a dialogue response text to respond to the customer's speech.
The data analysis module can also send the user profiles it establishes for each client to the dialogue management module, so that the dialogue mode can be selected according to the client's personal information known from the user profile, thereby determining a more accurate dialogue response text. The dialogue management module can also send the dialogue result to the data analysis module, and the data analysis module adjusts the formulated collection strategy according to the dialogue result.
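As a rough illustration of how the dialogue management module might select a response, the following Python sketch matches a parsing result's label probabilities against stored response texts and picks the label with the highest matching probability. The label names, probabilities, and response texts are hypothetical examples, not part of the disclosure.

```python
def choose_response(parse_probs, responses):
    """Pick the dialogue response under the label with the highest matching probability."""
    best_label = max(parse_probs, key=parse_probs.get)
    return best_label, responses[best_label]

# Hypothetical matching probabilities produced by the parsing step.
parse_probs = {"promise_to_repay": 0.72, "refuse": 0.08, "request_deferral": 0.20}
responses = {
    "promise_to_repay": "Thank you; we will note the repayment date.",
    "refuse": "Please be aware of the consequences of non-repayment.",
    "request_deferral": "We can discuss a deferred repayment plan.",
}
print(choose_response(parse_probs, responses)[0])  # → promise_to_repay
```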
The voice interaction method provided by an embodiment of the present disclosure is described below with reference to Fig. 3.
S101: establish a user profile for a client, and determine the client's classification according to the user profile.
Specifically, a user profile describing the client's details is established, for example a profile describing the client's age, personality, occupation, income, loan reason, convenient time for a voice call, repayment date, and other information. Then, according to each client's user profile, representative information in certain user profiles is chosen as the classification criterion, and all clients whose profiles contain that representative information are grouped into one class.
Optionally, customer data is obtained and analyzed, a user profile is established for each client based on the analysis results, and the client's classification is determined according to the user profile. A client's user profile is established from the data filled in when the client took out the loan, or from other channels (such as crawling the client's online data with a web crawler), or from existing knowledge of the client. The user profile describes the client's age, personality, occupation, income, loan reason, convenient time for a voice call, repayment date, and other information; clients are classified by one of, or a combination of, the age, personality, occupation, income, convenient call time, repayment date, and other information in the user profiles. For example, clients of the same age and the same personality may be grouped into one class, or clients with the same occupation and the same personality, or clients with the same age, the same income, and the same loan reason, and so on, without limitation here.
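The classification step above can be sketched as a simple grouping of user profiles by a chosen combination of attributes; the profile fields and client identifiers below are hypothetical illustrations, not the disclosure's data model.

```python
from collections import defaultdict

def classify_clients(profiles, keys):
    """Group client profiles by the chosen combination of attributes."""
    groups = defaultdict(list)
    for p in profiles:
        label = tuple(p[k] for k in keys)  # e.g. (age band, occupation)
        groups[label].append(p["client_id"])
    return dict(groups)

profiles = [
    {"client_id": "A", "age_band": "20-30", "occupation": "internet", "income": "mid"},
    {"client_id": "B", "age_band": "20-30", "occupation": "internet", "income": "high"},
    {"client_id": "C", "age_band": "40-50", "occupation": "teacher",  "income": "mid"},
]

# Classify by age band and occupation, as in the example in the text.
print(classify_clients(profiles, ["age_band", "occupation"]))
```

Any other attribute combination (income and loan reason, say) works the same way by changing `keys`.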
S102: formulate a collection strategy based on the client's classification, and conduct a voice call with the client according to the collection strategy.
Specifically, a collection strategy is formulated according to the client's classification. For example, if the client's classification is "internet industry, 25 years old", then analysis of occupation and age suggests that, given this client's living habits, the client is more likely to be free for a voice call in the evening. The collection strategy formulated accordingly selects an evening period for the voice call with this client, and the voice call is conducted with the client according to the formulated strategy. For example, the collection strategy may be to conduct voice calls with client A starting from day xx of month xx, where client A's convenient time window is between 17:00 and 19:00; the phone system is then set to call client A between 17:00 and 19:00 on day xx of month xx and conduct the voice call with client A.
In one possible implementation, formulating a collection strategy based on the client's classification and conducting a voice call with the client according to the collection strategy comprises: formulating, according to the client's classification, a collection strategy specifying the time and frequency of voice calls with the client, and conducting voice calls with the client according to the collection strategy. Collecting from clients according to a collection strategy can improve the collection success rate.
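A minimal sketch of scheduling a call inside a client's convenient time window, as the strategy above describes; the dates and the 17:00-19:00 window are illustrative assumptions taken from the example.

```python
from datetime import datetime, time, timedelta

def next_call_time(now, window_start, window_end):
    """Return the next datetime inside the client's convenient call window."""
    today_start = datetime.combine(now.date(), window_start)
    today_end = datetime.combine(now.date(), window_end)
    if now <= today_start:
        return today_start                       # wait until the window opens today
    if now <= today_end:
        return now                               # already inside the window: call now
    return today_start + timedelta(days=1)       # window has passed: call tomorrow

now = datetime(2019, 5, 20, 9, 30)
print(next_call_time(now, time(17, 0), time(19, 0)))  # → 2019-05-20 17:00:00
```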
S103: convert the customer speech received during the voice call into text information, and then generate word vectors.
Specifically, after a dialogue sentence is obtained, it is first converted from voice information into text information; the text information is then divided into N words with a segmentation algorithm before being parsed. For example, "I love China" may be segmented into the words "I", "love", "China", or into "I love", "China"; the segmentation result for the same text differs depending on the segmentation algorithm, without limitation here. There are currently three mainstream families of segmentation algorithms: algorithms based on string matching, algorithms based on understanding, and algorithms based on statistics. Each resulting word is finally converted into the form of a word vector, so that each text dialogue sentence can also be expressed as a multi-dimensional matrix.
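Of the three families mentioned, a string-matching segmenter is the simplest to sketch. The following forward-maximum-matching routine greedily takes the longest lexicon word at each position, falling back to a single character when nothing matches; the toy text and lexicon are illustrative, not from the disclosure.

```python
def fmm_segment(text, lexicon, max_len=7):
    """Forward maximum matching: greedily take the longest lexicon word at each position."""
    words, i = [], 0
    while i < len(text):
        for size in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + size]
            if size == 1 or candidate in lexicon:  # single char is the fallback
                words.append(candidate)
                i += size
                break
    return words

lexicon = {"word", "vector", "network", "net", "work"}
print(fmm_segment("wordvectornetwork", lexicon))  # → ['word', 'vector', 'network']
```

Note that "network" beats the shorter matches "net" and "work" because the longest candidate is tried first.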
In one possible implementation, converting the customer speech into text information and then generating word vectors comprises: determining the lexicon of the corresponding customer group according to the client's classification, wherein different client classifications correspond to lexicons of different customer groups, and the lexicon of each customer group includes words particular to that group; and segmenting the text information converted from the customer speech based on the determined lexicon, and then generating word vectors. Because the lexicon contains words particular to the customer group, the text information of collection dialogues can be segmented more accurately. The lexicon may be built from the dialogues of a large number of collection scenarios, or it may be an open lexicon from a company such as iFlytek or Baidu to which words occurring frequently in collection scenarios are added, without limitation here. For example, if the customer group being addressed likes to refer to a generous person with a particular slang term, then adding that term to the lexicon allows the voice dialogues of this customer group to be divided into words more accurately and represented as word vectors.
In one possible implementation, segmenting the text information converted from the customer speech based on the determined lexicon and then generating word vectors comprises: performing similarity calculation between the text information and the words in the lexicon, segmenting the text information converted from the customer speech according to the similarity calculation results, and then generating word vectors. This allows more accurate segmentation.
In one possible implementation, performing similarity calculation between the text information and the words in the lexicon, and segmenting the text information converted from the customer speech according to the similarity calculation results before generating word vectors, comprises: performing similarity calculation between the text information and the words in the lexicon to obtain N similarity values greater than a similarity threshold; and dividing the text information into N words according to the words corresponding to the N similarity values greater than the similarity threshold. Generally, the calculated similarity value ranges from 0 to 1, and the closer the similarity value is to 1, the more similar the two words are. The similarity threshold can be any value close to 1, such as 0.8, 0.9, or 0.95; when the similarity value between several adjacent characters of the text information and some word in the lexicon is greater than the similarity threshold, those adjacent characters are divided into one word. Setting a similarity threshold makes the divided words more accurate, so that semantic parsing is more accurate.
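One way to realize the similarity calculation against lexicon words, sketched with Python's standard-library `difflib.SequenceMatcher`, whose `ratio()` also lies in the 0-1 range described above. The lexicon words and the 0.8 threshold are illustrative assumptions, not the disclosure's specific similarity measure.

```python
from difflib import SequenceMatcher

def best_match(fragment, lexicon, threshold=0.8):
    """Return the lexicon word most similar to the fragment, if above the threshold."""
    best_word, best_score = None, 0.0
    for word in lexicon:
        score = SequenceMatcher(None, fragment, word).ratio()  # similarity in [0, 1]
        if score > best_score:
            best_word, best_score = word, score
    return (best_word, best_score) if best_score > threshold else (None, best_score)

lexicon = {"repayment", "deferral", "collection"}
print(best_match("repayment", lexicon))  # exact match → ('repayment', 1.0)
print(best_match("xyz", lexicon)[0])     # below threshold → None
```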
In one possible implementation, performing the similarity calculation between the text information and the words in the dictionary to obtain N similarity values greater than the similarity threshold, and dividing the text information into N words according to the words corresponding to those N similarity values, includes: if both the word formed by a character A together with the adjacent character before it and the word formed by A together with the adjacent character after it have similarity values with dictionary words greater than the similarity threshold, selecting the combination with the larger similarity value as the segmentation result. This avoids inaccurate segmentation caused by a character in the text information being able to form a word with either of its neighbours.
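The tie-breaking rule above can be sketched in a few lines. The similarity table and function names below are illustrative assumptions; only the rule itself (the pairing with the larger similarity wins) comes from the embodiment.

```python
def disambiguate(prev_ch, shared_ch, next_ch, sim):
    """A character can join its left neighbour or its right neighbour
    to form a dictionary word; sim(word) returns the dictionary
    similarity of a candidate. Per the embodiment, the pairing with
    the larger similarity value is kept as the segmentation result."""
    left, right = prev_ch + shared_ch, shared_ch + next_ch
    if sim(left) >= sim(right):
        return [left, next_ch]
    return [prev_ch, right]

# Hypothetical similarity values standing in for the dictionary lookup;
# 'B' could join 'A' (0.92) or 'C' (0.97), so 'BC' wins:
table = {"AB": 0.92, "BC": 0.97}
result = disambiguate("A", "B", "C", lambda w: table.get(w, 0.0))
```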
S104: parse the term vector using the long short-term memory network to obtain the parsing result of the customer voice.
The term vector is parsed by the long short-term memory (LSTM) network to obtain the parsing result. A traditional neural network can only take sequence data (such as term vectors) in one direction and cannot use the information that follows; a neural network with long short-term memory units can use the contextual information of the entire sequence when processing the data at the current moment, and avoids the vanishing-gradient problem that arises during parameter training of a traditional recurrent neural network (RNN) when the number of recurrent layers is too large.
In one possible implementation, parsing the term vector using the long short-term memory network to obtain the parsing result includes: building a long short-term memory network and introducing an attention mechanism into it; and parsing the term vector with the trained long short-term memory network to obtain the parsing result, wherein the training set of the long short-term memory network includes a corpus made up of dialogues from collection scenarios. The attention mechanism computes attention probabilities, which can highlight the importance of particular words to the entire sentence; introducing the attention mechanism takes more contextual semantic associations into account.
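The attention-probability computation described above can be sketched as follows. This is a minimal illustration under stated assumptions: the hidden states would come from the (Bi)LSTM, which is not reproduced here, and the dot-product scoring against a query vector is one common form of attention, not necessarily the one used in the embodiment.

```python
import math

def softmax(scores):
    """Numerically stable softmax: turns raw scores into probabilities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, query):
    """Score each time step's hidden state against a query vector
    (dot product), convert the scores into attention probabilities
    with softmax, and return the probability-weighted sum. Words with
    larger scores thus contribute more to the sentence representation."""
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query))
              for h in hidden_states]
    probs = softmax(scores)                 # attention probabilities
    dim = len(hidden_states[0])
    pooled = [sum(p * h[d] for p, h in zip(probs, hidden_states))
              for d in range(dim)]
    return probs, pooled

# Three time steps with 2-dimensional hidden states; the second step
# scores highest and therefore receives the largest attention probability:
H = [[0.1, 0.0], [1.0, 1.0], [0.2, 0.1]]
probs, pooled = attention_pool(H, query=[1.0, 1.0])
```

In a trained network the query vector would itself be a learned parameter; here it is fixed purely to show how attention probabilities highlight one word over the others.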
In some embodiments, Fig. 4 shows a flow diagram of voice interaction based on At-BLSTM provided by an embodiment of the present disclosure. The text information is converted into a term vector, the term vector including a text-information vector and an entity-feature vector; the term vector is input into the constructed bidirectional long short-term memory neural network with an attention mechanism (At-BLSTM), wherein the At-BLSTM includes a bidirectional long short-term memory (BLSTM) part, an attention mechanism, a pooling layer, feature fusion, feature classification, and the like. The BLSTM can make full use of the information of the entire text sequence, including the correlations between the words, and apply that information when processing each word. The attention mechanism computes attention probabilities, which can highlight the importance of particular words to the entire sentence; introducing the attention mechanism takes more contextual semantic associations into account.
In one possible implementation, after parsing the term vector with the trained long short-term memory network to obtain the parsing result, the method includes: inputting the parsing result into the constructed dialogue management model, matching it against the dialogue-type labels in the dialogue management model, and outputting a dialogue response text based on the matching result; wherein a dialogue-type label is set for each type of dialogue in the dialogue management model.
Optionally, inputting the parsing result into the constructed dialogue management model, matching it against the set labels in the dialogue management model, and outputting a dialogue response text based on the matching result includes: inputting the parsing result into the dialogue management model and matching it against the set labels, obtaining a matching probability between the parsing result and each set label; and selecting the dialogue mode under the label with the largest matching probability to output the dialogue response text. Labels can be set in the dialogue management system for collection dialogues, which allows the debt-collection dialogue to be conducted better.
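The label-matching step above can be sketched as follows. The label names, scoring functions and response texts are illustrative assumptions for a collection scenario; only the rule (output the dialogue mode under the label with the largest matching probability) comes from the embodiment.

```python
def pick_response(parse_features, label_models):
    """Match the parsing result against each dialogue-type label and
    answer with the response of the most probable label. Each label
    model is a (probability_fn, response_text) pair; probability_fn
    stands in for the dialogue management model's scoring."""
    best_label, best_prob, best_response = None, -1.0, None
    for label, (prob_fn, response) in label_models.items():
        p = prob_fn(parse_features)         # matching probability
        if p > best_prob:
            best_label, best_prob, best_response = label, p, response
    return best_label, best_response

# Hypothetical dialogue-type labels for a collection scenario:
models = {
    "promise_to_pay": (lambda f: 0.9 if "pay" in f else 0.1,
                       "Thank you; we will note the repayment date."),
    "dispute":        (lambda f: 0.8 if "wrong" in f else 0.2,
                       "Let me verify the amount for you."),
}
label, reply = pick_response({"pay"}, models)
```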
Optionally, the dialogue response text is selected according to the classification of the client. In this way a more suitable dialogue mode can be selected according to the client's personality and the like.
Optionally, the result of the dialogue of the dialogue management model is marked, and the dialogue and its result are saved, wherein the result of the dialogue includes collection succeeded, collection failed, awaiting repayment, and repayment postponed. Saving the dialogue result makes it possible to conduct the next collection call to the client with better preparation.
Optionally, the collection strategy is adjusted according to the dialogue result labelled by the dialogue management model. This makes it easier to flexibly formulate a collection strategy better suited to the client, improving the collection success rate.
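A strategy adjustment driven by the labelled outcome can be sketched as follows. The outcome labels match those named in the embodiment; the concrete adjustments (call frequency changes) are illustrative assumptions, since the patent does not specify them.

```python
def adjust_strategy(strategy, outcome):
    """Return an adjusted collection strategy given a labelled dialogue
    outcome. The input strategy is left unmodified."""
    s = dict(strategy)
    if outcome == "collection_succeeded":
        s["calls_per_week"] = 0                          # stop calling
    elif outcome == "repayment_postponed":
        s["calls_per_week"] = max(1, s["calls_per_week"] - 1)
    elif outcome == "collection_failed":
        s["calls_per_week"] += 1                         # follow up sooner
    return s                     # "awaiting repayment": unchanged

s0 = {"calls_per_week": 2, "preferred_hour": 19}
s1 = adjust_strategy(s0, "collection_failed")
```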
In the embodiments of the present disclosure, a user portrait of the client can be established, and the classification of the client determined from the user portrait; based on the classification of the client, a collection strategy is formulated, and a voice call is made to the client according to the collection strategy; the customer voice received during the voice call is converted into text information and then a term vector is generated; the term vector is parsed with the long short-term memory network to obtain the parsing result of the customer voice. In this way a collection strategy can be formulated for each customer group, and the client's speaking habits and the appropriate time period for collection can be learned from the client's user portrait, so that collection proceeds more smoothly and efficiently. The dictionary of the corresponding customer group is determined according to the classification of the client, wherein different classifications of clients correspond to dictionaries of different customer groups, and the dictionary of each customer group contains the particular words of that customer group; based on the determined dictionary, the text information converted from the customer voice is segmented and a term vector is generated. Segmenting according to each client's speaking habits makes the segmentation more accurate. A long short-term memory network is built, and an attention mechanism is introduced into it; the term vector is parsed with the trained long short-term memory network to obtain the parsing result, wherein the training set of the long short-term memory network includes a corpus made up of dialogues from collection scenarios. The attention mechanism is used to strengthen the attention paid to keywords and weaken the attention paid to unimportant words, thereby improving the accuracy of semantic parsing; using a neural network with bidirectional long short-term memory units for semantic parsing avoids the need for hand-designed features and improves the efficiency of semantic parsing.
To better implement the above scheme of the embodiments of the present disclosure, the present disclosure correspondingly provides a voice interaction apparatus, described in detail below with reference to the accompanying drawings:
Fig. 5 shows a structural diagram of the voice interaction apparatus provided by an embodiment of the present disclosure. The voice interaction apparatus may include an establishing unit 101, a formulating unit 102, a converting unit 103 and a parsing unit 104, wherein:
the establishing unit 101 is configured to establish a user portrait of the client and determine the classification of the client from the user portrait;
the formulating unit 102 is configured to formulate a collection strategy based on the classification of the client and make a voice call to the client according to the collection strategy;
the converting unit 103 is configured to convert the customer voice received during the voice call into text information and then generate a term vector;
the parsing unit 104 is configured to parse the term vector with the long short-term memory network to obtain the parsing result of the customer voice.
Optionally, the establishing unit 101 is further configured to:
obtain customer data and analyze the customer data, and establish a user portrait of each client based on the analysis results;
determine the classification of the client from the user portrait.
Optionally, the formulating unit 102 is further configured to:
formulate, according to the classification of the client, a collection strategy specifying the time and frequency of voice calls to the client, and make voice calls to the client according to the collection strategy.
Optionally, the converting unit 103 is further configured to:
determine the dictionary of the corresponding customer group according to the classification of the client, wherein different classifications of clients correspond to dictionaries of different customer groups, and the dictionary of each customer group contains the particular words of that customer group;
based on the determined dictionary, segment the text information converted from the customer voice and then generate the term vector.
Optionally, the parsing unit 104 is further configured to:
build a long short-term memory network and introduce an attention mechanism into it;
parse the term vector with the trained long short-term memory network to obtain the parsing result, wherein the training set of the long short-term memory network includes a corpus made up of dialogues from collection scenarios.
Optionally, the apparatus is further configured to:
provide a dialogue response text according to the parsing result of the customer voice.
Optionally, the apparatus is further configured to:
input the parsing result into the constructed dialogue management model, match it against the dialogue-type labels in the dialogue management model, and output a dialogue response text based on the matching result; wherein a dialogue-type label is set for each type of dialogue in the dialogue management model.
Optionally, the apparatus is further configured to:
input the parsing result into the dialogue management model, match it against the set labels in the dialogue management model, and obtain a matching probability between the parsing result and each set label;
select the dialogue mode under the label with the largest matching probability to output the dialogue response text.
Optionally, the apparatus is further configured to:
select the dialogue response text according to the classification of the client.
Optionally, the apparatus is further configured to:
mark the result of the dialogue of the dialogue management model, and save the dialogue and its result, wherein the result of the dialogue includes collection succeeded, collection failed, awaiting repayment, and repayment postponed.
In the embodiments of the present disclosure, a user portrait of the client can be established, and the classification of the client determined from the user portrait; based on the classification of the client, a collection strategy is formulated, and a voice call is made to the client according to the collection strategy; the customer voice received during the voice call is converted into text information and then a term vector is generated; the term vector is parsed with the long short-term memory network to obtain the parsing result of the customer voice. In this way a collection strategy can be formulated for each customer group, and the client's speaking habits and the appropriate time period for collection can be learned from the client's user portrait, so that collection proceeds more smoothly and efficiently. The dictionary of the corresponding customer group is determined according to the classification of the client, wherein different classifications of clients correspond to dictionaries of different customer groups, and the dictionary of each customer group contains the particular words of that customer group; based on the determined dictionary, the text information converted from the customer voice is segmented and a term vector is generated. Segmenting according to each client's speaking habits makes the segmentation more accurate. A long short-term memory network is built, and an attention mechanism is introduced into it; the term vector is parsed with the trained long short-term memory network to obtain the parsing result, wherein the training set of the long short-term memory network includes a corpus made up of dialogues from collection scenarios. The attention mechanism is used to strengthen the attention paid to keywords and weaken the attention paid to unimportant words, thereby improving the accuracy of semantic parsing; using a neural network with bidirectional long short-term memory units for semantic parsing avoids the need for hand-designed features and improves the efficiency of semantic parsing.
It should be noted that the voice interaction apparatus 10 in the embodiments of the present disclosure is the voice interaction apparatus in the embodiments of Fig. 1 to Fig. 4 above; for the specific implementation of the functions of the units in the voice interaction apparatus 10, reference may be made to the method embodiments of Fig. 1 to Fig. 4, which are not repeated here.
To better implement the above scheme of the embodiments of the present disclosure, the present disclosure correspondingly provides a voice interaction device, described in detail below with reference to the accompanying drawings:
Fig. 6 shows a structural diagram of the voice interaction device provided by an embodiment of the present disclosure. The voice interaction device 110 may include a processor 1101, an input unit 1102, an output unit 1103, a memory 1104, a communication unit 1105 and a bus 1106; the processor 1101, the input unit 1102, the output unit 1103, the memory 1104 and the communication unit 1105 may be connected to each other through the bus 1106. The memory 1104 may be a high-speed RAM memory or a non-volatile memory, for example at least one magnetic disk memory. Optionally, the memory 1104 may also be at least one storage system located far from the processor 1101. The memory 1104 is used for storing application program code and may include an operating system, a network communication module, a user interface module and a voice interaction program; the communication unit 1105 is used for exchanging information with external units; the processor 1101 is configured to call the program code and perform the following steps:
the processor 1101 establishes a user portrait of the client and determines the classification of the client from the user portrait;
the processor 1101 formulates a collection strategy based on the classification of the client and makes a voice call to the client according to the collection strategy;
the processor 1101 converts the customer voice received during the voice call into text information and then generates a term vector;
the processor 1101 parses the term vector with the long short-term memory network to obtain the parsing result of the customer voice;
the processor 1101 formulates, according to the classification of the client, a collection strategy specifying the time and frequency of voice calls to the client, and makes voice calls to the client according to the collection strategy;
the processor 1101 determines the dictionary of the corresponding customer group according to the classification of the client, wherein different classifications of clients correspond to dictionaries of different customer groups, and the dictionary of each customer group contains the particular words of that customer group;
the processor 1101 segments the text information converted from the customer voice based on the determined dictionary and then generates the term vector;
the processor 1101 builds a long short-term memory network and introduces an attention mechanism into it;
the processor 1101 parses the term vector with the trained long short-term memory network to obtain the parsing result, wherein the training set of the long short-term memory network includes a corpus made up of dialogues from collection scenarios;
the processor 1101 establishes a dialogue management model, in which a dialogue-type label is set for each type of dialogue;
the processor 1101 inputs the parsing result into the dialogue management model, matches it against the set labels in the dialogue management model, and outputs a dialogue response text based on the matching result.
In the embodiments of the present disclosure, a user portrait of the client can be established, and the classification of the client determined from the user portrait; based on the classification of the client, a collection strategy is formulated, and a voice call is made to the client according to the collection strategy; the customer voice received during the voice call is converted into text information and then a term vector is generated; the term vector is parsed with the long short-term memory network to obtain the parsing result of the customer voice. In this way a collection strategy can be formulated for each customer group, and the client's speaking habits and the appropriate time period for collection can be learned from the client's user portrait, so that collection proceeds more smoothly and efficiently. The dictionary of the corresponding customer group is determined according to the classification of the client, wherein different classifications of clients correspond to dictionaries of different customer groups, and the dictionary of each customer group contains the particular words of that customer group; based on the determined dictionary, the text information converted from the customer voice is segmented and a term vector is generated. Segmenting according to each client's speaking habits makes the segmentation more accurate. A long short-term memory network is built, and an attention mechanism is introduced into it; the term vector is parsed with the trained long short-term memory network to obtain the parsing result, wherein the training set of the long short-term memory network includes a corpus made up of dialogues from collection scenarios. The attention mechanism is used to strengthen the attention paid to keywords and weaken the attention paid to unimportant words, thereby improving the accuracy of semantic parsing; using a neural network with bidirectional long short-term memory units for semantic parsing avoids the need for hand-designed features and improves the efficiency of semantic parsing.
It should be noted that the voice interaction device 110 in the embodiments of the present disclosure is the voice interaction device in the embodiments of Fig. 1 to Fig. 4 above; for the specific implementation, reference may be made to the method embodiments of Fig. 1 to Fig. 4, which are not repeated here.
An embodiment of the present invention also provides a computer-readable storage medium, wherein the computer-readable storage medium may store a program which, when executed, performs some or all of the steps of any of the methods described in the method embodiments above.
An embodiment of the present invention also provides a computer program, the computer program including instructions which, when the computer program is executed by a computer, enable the computer to perform some or all of the steps of any of the voice interaction methods.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be accomplished by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods above. The storage medium may be a USB flash drive, a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The present disclosure may be a system, a method and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement the aspects of the present disclosure.
The computer-readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (for example, light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device via a network, for example the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, for example a programmable logic circuit, a field-programmable gate array (FPGA) or a programmable logic array (PLA), is personalized by utilizing state information of the computer-readable program instructions; the electronic circuit can execute the computer-readable program instructions in order to implement aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; the instructions cause a computer, a programmable data processing apparatus and/or other devices to function in a particular manner, such that the computer-readable medium having the instructions stored therein comprises an article of manufacture including instructions which implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The above disclosure is only the preferred embodiments of the present disclosure, which certainly cannot be used to limit the scope of rights of the present disclosure; therefore, equivalent variations made according to the claims of the present disclosure still fall within the scope covered by the present disclosure.
Claims (10)
1. A voice interaction method, characterized by comprising:
establishing a user portrait of a client, and determining a classification of the client from the user portrait;
formulating a collection strategy based on the classification of the client, and making a voice call to the client according to the collection strategy;
converting customer voice received during the voice call into text information and then generating a term vector;
parsing the term vector using a long short-term memory network to obtain a parsing result of the customer voice.
2. The method according to claim 1, characterized in that establishing the user portrait of the client and determining the classification of the client from the user portrait comprises:
obtaining customer data and analyzing the customer data, and establishing a user portrait of each client based on the analysis results;
determining the classification of the client from the user portrait.
3. The method according to claim 1, characterized in that formulating the collection strategy based on the classification of the client and making the voice call to the client according to the collection strategy comprises:
formulating, according to the classification of the client, a collection strategy specifying the time and frequency of voice calls to the client, and making voice calls to the client according to the collection strategy.
4. The method according to claim 1, characterized in that converting the customer voice received during the voice call into text information and then generating the term vector comprises:
determining a dictionary of the corresponding customer group according to the classification of the client, wherein different classifications of clients correspond to dictionaries of different customer groups, and the dictionary of each customer group contains the particular words of that customer group;
segmenting the text information converted from the customer voice based on the determined dictionary and then generating the term vector.
5. The method according to claim 1, characterized in that parsing the term vector using the long short-term memory network to obtain the parsing result of the customer voice comprises:
building a long short-term memory network, and introducing an attention mechanism into the long short-term memory network;
parsing the term vector with the trained long short-term memory network to obtain the parsing result, wherein the training set of the long short-term memory network includes a corpus made up of dialogues from collection scenarios.
6. The method according to claim 5, characterized in that after parsing the term vector using the long short-term memory network to obtain the parsing result of the customer voice, the method comprises:
providing a dialogue response text according to the parsing result of the customer voice.
7. The method according to claim 6, characterized in that providing the dialogue response text according to the parsing result of the customer voice comprises:
inputting the parsing result into a constructed dialogue management model, matching it against dialogue-type labels in the dialogue management model, and outputting a dialogue response text based on the matching result; wherein a dialogue-type label is set for each type of dialogue in the dialogue management model.
8. A voice interaction apparatus, characterized by comprising units for performing the method according to any one of claims 1 to 7.
9. A voice interaction device, characterized by comprising a processor, an input device, an output device and a memory, the processor, input device, output device and memory being connected to each other, wherein the memory is used for storing application program code, and the processor is configured to call the program code to perform the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, the computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910421285.9A CN110222333A (en) | 2019-05-20 | 2019-05-20 | A kind of voice interactive method, device and relevant device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110222333A true CN110222333A (en) | 2019-09-10 |
Family
ID=67821517
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910421285.9A Pending CN110222333A (en) | 2019-05-20 | 2019-05-20 | A kind of voice interactive method, device and relevant device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110222333A (en) |
- 2019-05-20: CN application CN201910421285.9A filed (publication CN110222333A/en); status: Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108090826A (en) * | 2017-11-13 | 2018-05-29 | 平安科技(深圳)有限公司 | A kind of phone collection method and terminal device |
CN109635080A (en) * | 2018-11-15 | 2019-04-16 | 上海指旺信息科技有限公司 | Acknowledgment strategy generation method and device |
CN109376361A (en) * | 2018-11-16 | 2019-02-22 | 北京九狐时代智能科技有限公司 | A kind of intension recognizing method and device |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113112282A (en) * | 2021-04-20 | 2021-07-13 | 平安银行股份有限公司 | Method, device, equipment and medium for processing consult problem based on client portrait |
CN113362169A (en) * | 2021-08-09 | 2021-09-07 | 上海慧捷智能技术有限公司 | Catalytic recovery optimization method and device |
CN113824828A (en) * | 2021-10-29 | 2021-12-21 | 平安普惠企业管理有限公司 | Dialing method and device, electronic equipment and computer readable storage medium |
CN113824828B (en) * | 2021-10-29 | 2024-03-08 | 河北科燃信息科技股份有限公司 | Dialing method and device, electronic equipment and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI788529B (en) | Credit risk prediction method and device based on LSTM model | |
US11934791B2 (en) | On-device projection neural networks for natural language understanding | |
CN108255805B (en) | Public opinion analysis method and device, storage medium and electronic equipment | |
CN108536679B (en) | Named entity recognition method, device, equipment and computer readable storage medium | |
US10037768B1 (en) | Assessing the structural quality of conversations | |
WO2018036555A1 (en) | Session processing method and apparatus | |
AU2021322785B2 (en) | Communication content tailoring | |
CN111143576A (en) | Event-oriented dynamic knowledge graph construction method and device | |
CN107220352A (en) | The method and apparatus that comment collection of illustrative plates is built based on artificial intelligence | |
WO2021208685A1 (en) | Method and apparatus for executing automatic machine learning process, and device | |
CN109271493A (en) | A kind of language text processing method, device and storage medium | |
US11410644B2 (en) | Generating training datasets for a supervised learning topic model from outputs of a discovery topic model | |
CN107657056A (en) | Method and apparatus based on artificial intelligence displaying comment information | |
CN110222333A (en) | A kind of voice interactive method, device and relevant device | |
CN109509010A (en) | A kind of method for processing multimedia information, terminal and storage medium | |
Windiatmoko et al. | Developing facebook chatbot based on deep learning using rasa framework for university enquiries | |
CN109670148A (en) | Collection householder method, device, equipment and storage medium based on speech recognition | |
Windiatmoko et al. | Developing FB chatbot based on deep learning using RASA framework for university enquiries | |
CN110489730A (en) | Text handling method, device, terminal and storage medium | |
US11115520B2 (en) | Signal discovery using artificial intelligence models | |
CN115827865A (en) | Method and system for classifying objectionable texts by fusing multi-feature map attention mechanism | |
CN109523185A (en) | The method, apparatus and storage medium of collection scorecard are generated based on artificial intelligence | |
CN113051607B (en) | Privacy policy information extraction method | |
CN112506405B (en) | Artificial intelligent voice large screen command method based on Internet supervision field | |
CN112150103B (en) | Schedule setting method, schedule setting device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||