CN110458732A - Training method, apparatus, computer device and storage medium - Google Patents
Training method, apparatus, computer device and storage medium
- Publication number
- CN110458732A (application number CN201910520923.2A)
- Authority
- CN
- China
- Prior art keywords
- training
- user
- text information
- knowledge point
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3343—Query execution using phonetics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
- G06Q50/2057—Career enhancement or continuing education service
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/01—Customer relationship services
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Theoretical Computer Science (AREA)
- Tourism & Hospitality (AREA)
- Acoustics & Sound (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Strategic Management (AREA)
- Human Resources & Organizations (AREA)
- General Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- Economics (AREA)
- Primary Health Care (AREA)
- Marketing (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Electrically Operated Instructional Devices (AREA)
Abstract
This application relates to a training method, apparatus, computer device and storage medium. The method includes: obtaining a target training scene selected by a trainee from multiple preset training scenes; creating a virtual user according to the target training scene, and conducting a voice interaction between the virtual user and the trainee; and quantifying the training effect according to the trainee's speech information in the voice interaction. Through embodiments of the present invention, after the trainee selects a target training scene, a simulated virtual user conducts a voice interaction with the trainee, that is, the trainee is trained in a simulated real scene, so that the trainee accumulates practical experience during training, thereby improving the training effect.
Description
Technical field
This application relates to the field of training technology, and in particular to a training method, apparatus, computer device and storage medium.
Background technique
Customer service staff are an indispensable part of enterprise operations, and their service is critical to improving enterprise service quality and maintaining the corporate image. However, because customer service teams are large and turnover is high, enterprises often have to invest heavily in repetitive training.
With the development of information technology, online customer service training systems have appeared. Such systems list courses and learning resources so that trainees can study online, for example by playing slides or instructional videos. After finishing the material, customer service staff can also take examinations in the system.
However, because such online training systems typically deliver only theory, customer service staff cannot combine theory with practice in actual work, so the training effect is poor.
Summary of the invention
In view of the above technical problems, it is necessary to provide a training method, apparatus, computer device and storage medium that can improve the training effect.
In a first aspect, an embodiment of the invention provides a training method, comprising:
obtaining a target training scene selected by a trainee from multiple preset training scenes;
creating a virtual user according to the target training scene, and conducting a voice interaction between the virtual user and the trainee;
quantifying the training effect according to the trainee's speech information in the voice interaction.
In one embodiment, quantifying the training effect according to the trainee's speech information in the voice interaction comprises:
converting the trainee's speech information into corresponding text information;
judging, using a preset matching mode, whether the trainee's text information contains a training knowledge point;
determining the quantized value of the training effect according to the judgment result.
In one embodiment, the preset matching mode includes keyword matching, and the training knowledge point includes a preset keyword.
Judging, using the preset matching mode, whether the trainee's text information contains a training knowledge point comprises:
judging whether the trainee's text information contains the preset keyword;
if it does, judging that the trainee's text information contains the training knowledge point.
In one embodiment, the preset matching mode includes semantic matching, and the training knowledge point includes the semantic label of a standard sentence.
Judging, using the preset matching mode, whether the trainee's text information contains a training knowledge point comprises:
inputting the trainee's text information into a pre-trained semantic model to obtain the semantic label of the trainee's text information;
when the semantic label of the trainee's text information is consistent with that of the standard sentence, judging that the trainee's text information contains the training knowledge point.
In one embodiment, the method further comprises:
obtaining multiple sample standard sentences;
adding a sample semantic label to each sample standard sentence;
training a deep learning model with the sample standard sentences as input and their sample semantic labels as output, to obtain the semantic model.
In one embodiment, determining the quantized value of the training effect according to the judgment result comprises:
if the trainee's text information contains training knowledge points, determining the target score of each contained training knowledge point according to the correspondence between knowledge points and scores;
summing the target scores to obtain the quantized value of the training effect.
In one embodiment, the method further comprises:
if the text information does not contain the training knowledge point, displaying the training knowledge point.
In one embodiment, creating a virtual user according to the target training scene and conducting a voice interaction between the virtual user and the trainee comprises:
displaying the user information of the virtual user;
generating the speech information of the virtual user according to the preset text information of the virtual user;
playing speech according to the speech information of the virtual user, and collecting the speech information the trainee replies with.
In one embodiment, before obtaining the target training scene selected by the trainee from multiple preset training scenes, the method further comprises:
receiving input scene information, the scene information including at least a scene name, the user information of the virtual user, the text information of the virtual user, the training knowledge points and the scores corresponding to the training knowledge points;
generating the preset training scenes according to the scene information.
In a second aspect, an embodiment of the invention provides a training apparatus, comprising:
a target training scene obtaining module, configured to obtain the target training scene selected by the trainee from multiple preset training scenes;
a voice interaction module, configured to create a virtual user according to the target training scene and conduct a voice interaction between the virtual user and the trainee;
a training effect quantization module, configured to quantify the training effect according to the trainee's speech information in the voice interaction.
In one embodiment, the training effect quantization module comprises:
an information conversion sub-module, configured to convert the trainee's speech information into corresponding text information;
a judging sub-module, configured to judge, using a preset matching mode, whether the trainee's text information contains a training knowledge point;
a quantization sub-module, configured to determine the quantized value of the training effect according to the judgment result.
In one embodiment, the preset matching mode includes keyword matching, and the training knowledge point includes a preset keyword.
The judging sub-module comprises:
a preset keyword judging unit, configured to judge whether the trainee's text information contains the preset keyword;
a first judging unit, configured to judge, when the preset keyword is contained, that the trainee's text information contains the training knowledge point.
In one embodiment, the preset matching mode includes semantic matching, and the training knowledge point includes the semantic label of a standard sentence.
The judging sub-module comprises:
a semantic label obtaining unit, configured to input the trainee's text information into the pre-trained semantic model to obtain the semantic label of the trainee's text information;
a second judging unit, configured to judge, when the semantic label of the trainee's text information is consistent with that of the standard sentence, that the trainee's text information contains the training knowledge point.
In one embodiment, the apparatus further comprises:
a sample standard sentence obtaining module, configured to obtain multiple sample standard sentences;
a sample semantic label adding module, configured to add a sample semantic label to each sample standard sentence;
a semantic model training module, configured to train a deep learning model with the sample standard sentences as input and their sample semantic labels as output, to obtain the semantic model.
In one embodiment, the training effect quantization module comprises:
a target score determining sub-module, configured to determine, when the trainee's text information contains training knowledge points, the target score of each contained training knowledge point according to the correspondence between knowledge points and scores;
a quantized value obtaining sub-module, configured to sum the target scores to obtain the quantized value of the training effect.
In one embodiment, the apparatus further comprises:
a training knowledge point display module, configured to display the training knowledge point when the text information does not contain it.
In one embodiment, the voice interaction module comprises:
a user information display sub-module, configured to display the user information of the virtual user;
a speech information generating sub-module, configured to generate the speech information of the virtual user according to the preset text information of the virtual user;
a playing and collecting sub-module, configured to play speech according to the speech information of the virtual user and collect the speech information the trainee replies with.
In one embodiment, the apparatus further comprises, upstream of the target training scene obtaining module:
a scene information receiving module, configured to receive input scene information, the scene information including at least a scene name, the user information of the virtual user, the text information of the virtual user, the training knowledge points and the scores corresponding to the training knowledge points;
a preset training scene generating module, configured to generate the preset training scenes according to the scene information.
In a third aspect, an embodiment of the invention provides a computer device including a memory and a processor, the memory storing a computer program, and the processor implementing the training method of any embodiment of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the training method of any embodiment of the first aspect when executed by a processor.
With the above training method, apparatus, computer device and storage medium, the target training scene selected by the trainee from multiple preset training scenes is obtained; a virtual user is created according to the target training scene and conducts a voice interaction with the trainee; and the training effect is quantified according to the trainee's speech information in the voice interaction. Through embodiments of the present invention, different training scenes can be preset according to training requirements, adapting the training to different fields. After the trainee selects a target training scene, a simulated virtual user conducts a voice interaction with the trainee, that is, the trainee is trained in a simulated real scene and accumulates practical experience during training. After training, the quantized value of the training effect is fed back to the trainee, so the trainee learns which knowledge points were missed, which further improves the training effect.
Brief description of the drawings
Fig. 1 is an application environment diagram of the training method in one embodiment;
Fig. 2 is a flow diagram of the training method in one embodiment;
Fig. 3 is a flow diagram of the step of quantifying the training effect in one embodiment;
Fig. 4 is a flow diagram of the voice interaction step in one embodiment;
Fig. 5 is a flow diagram of the step of generating preset training scenes in one embodiment;
Fig. 6 is a flow diagram of the training method in another embodiment;
Fig. 7 is a structural block diagram of the training apparatus in one embodiment;
Fig. 8 is an internal structure diagram of the computer device in one embodiment.
Detailed description of the embodiments
In order to make the objectives, technical solutions and advantages of this application clearer, the application is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application and are not intended to limit it.
The training method provided by this application can be applied in the application environment shown in Fig. 1, where a person interacts with a terminal and the terminal conducts intelligent training of the user through synthesized speech. The terminal may be, but is not limited to, a personal computer, laptop, smartphone or tablet.
In one embodiment, as shown in Fig. 2, a training method is provided. Taking its application to the terminal in Fig. 1 as an example, the method comprises the following steps:
Step 101: obtain the target training scene selected by the trainee from multiple preset training scenes.
In this embodiment, a training system can run on the terminal, and maintenance personnel of the training system can preset multiple training scenes on it. For example, to train bank customer service staff, scenes such as credit card collection, wealth management consulting and deposit inquiry can be set; to train the customer service of an official website, scenes such as product consulting, balance inquiry and points inquiry can be set. The embodiment does not limit the preset training scenes in detail; they can be configured according to the actual situation.
During training, the trainee selects a target training scene from the multiple preset training scenes; the terminal obtains the selected target training scene and then enters it. For example, the terminal displays preset training scenes a, b, c and d; the trainee selects target training scene a from the options and clicks a confirmation button; the terminal then receives the selection and the confirmation instruction and enters the target training scene. Alternatively, the trainee clicks the option for target training scene a directly, after which the terminal receives the click command and enters scene a. The embodiment does not limit how the target training scene is obtained; this can be configured according to the actual situation.
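The preset scenes and scene selection described above can be sketched as a simple registry keyed by scene id. The scene ids, field names and values below are illustrative assumptions for this sketch, not details fixed by the patent:

```python
# A minimal registry of preset training scenes. Each scene carries the
# information later used to create the virtual user (names and values are
# hypothetical stand-ins for the scene information the patent describes).
PRESET_SCENES = {
    "credit_card_collection": {
        "name": "credit card collection",
        "virtual_user": {"role": "debtor", "amount_owed": 5000,
                         "min_repayment": 500, "days_overdue": 12},
    },
    "balance_inquiry": {
        "name": "balance inquiry",
        "virtual_user": {"role": "customer"},
    },
}

def select_target_scene(scene_id):
    """Return the preset scene the trainee chose; raise for an unknown id."""
    if scene_id not in PRESET_SCENES:
        raise ValueError(f"unknown training scene: {scene_id}")
    return PRESET_SCENES[scene_id]
```

Whether selection arrives as a click on an option or as an option plus a confirmation button, the terminal ultimately resolves it to one scene record like the one returned here.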
Step 102: create a virtual user according to the target training scene, and conduct a voice interaction between the virtual user and the trainee.
In this embodiment, after the terminal enters the target training scene, it creates a virtual user according to that scene. Specifically, the target training scene provides scene information. For example, if the target training scene is credit card collection, it provides the debtor's information, such as name, age, gender, amount owed, minimum repayment and days overdue. The terminal creates the virtual user from the scene information provided by the target training scene; for example, it creates a virtual debtor from the debtor's information.
During training, the created virtual user conducts a voice interaction with the trainee. For example, the trainee is a bank customer service agent who says, "Hello, this is XX Bank. Your credit card owes A yuan, the minimum repayment is B yuan, and it is C days overdue; please arrange repayment as soon as possible." The virtual debtor replies, "I have already paid." The agent can continue, "Please confirm whether it has been paid; I have no record of your repayment here." The virtual debtor says, "I paid on day D of month M," and the agent says, "OK, I will verify again. Sorry to bother you, goodbye." During this voice interaction, the terminal can be implemented using artificial intelligence (AI) technology; the embodiment does not limit this in detail, and it can be configured according to the actual situation.
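The turn-taking in the example above can be sketched as a scripted interaction loop. This is a hedged sketch only: in a real system each virtual-user line would be synthesized to speech (TTS) and each trainee reply would come back through speech recognition (ASR); here both sides are plain strings and the trainee is an injected callable:

```python
# Alternate the virtual user's preset lines with the trainee's replies.
# `get_trainee_reply` is a hypothetical callable standing in for the
# record-and-recognize step; it receives the prompt and returns the reply.
def run_interaction(virtual_lines, get_trainee_reply):
    """Run one scripted session and return the trainee's replies in order."""
    replies = []
    for line in virtual_lines:
        prompt = f"[virtual user] {line}"     # would be played as audio (TTS)
        replies.append(get_trainee_reply(prompt))  # would be captured audio (ASR)
    return replies
```

The collected replies are exactly the speech information that step 103 below quantifies.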
Step 103: quantify the training effect according to the trainee's speech information in the voice interaction.
In this embodiment, the trainee's speech information is collected during the voice interaction. After collecting the trainee's speech information, the terminal judges, against a standard answer preset in the terminal, whether the trainee's reply content is correct. For example, if the standard answer is "Your credit card owes A yuan, the minimum repayment is B yuan, and it is C days overdue; please arrange repayment as soon as possible," and the agent's speech information is only "Your credit card is in debt; please repay as soon as possible," the reply differs too much from the standard answer and the terminal can judge the agent's reply incorrect. If the agent's speech information is "Your credit card has been overdue for C days, the total debt is A yuan and the minimum repayment is B yuan; may I ask when you can repay," the reply content is close to the standard answer and the terminal can judge it correct. Finally, the training effect is quantified according to the trainee's answers, for example as the trainee's total score.
In the above training method, the target training scene selected by the trainee from multiple preset training scenes is obtained; a virtual user is created according to the target training scene and conducts a voice interaction with the trainee; and the training effect is quantified according to the trainee's speech information in the voice interaction. Through embodiments of the present invention, different training scenes can be preset according to training requirements, adapting the training to different fields. After the trainee selects a target training scene, a simulated virtual user conducts a voice interaction with the trainee, that is, the trainee is trained in a simulated real scene and accumulates practical experience during training. After training, the quantized value of the training effect is fed back to the trainee, so the trainee learns which knowledge points were missed, which further improves the training effect.
In another embodiment, as shown in Fig. 3, this embodiment concerns an optional process of quantifying the training effect according to the trainee's speech information in the voice interaction. On the basis of the embodiment shown in Fig. 2, step 103 may specifically include the following steps:
Step 201: convert the trainee's speech information into corresponding text information.
In this embodiment, after the trainee's speech information is collected, speech recognition is performed on it to obtain the corresponding text information. For example, speech recognition on the trainee's first sentence yields text information 1, "Hello, this is XX Bank"; on the second sentence, text information 2, "Your credit card owes A yuan, the minimum repayment is B yuan, and it is C days overdue; please arrange repayment as soon as possible"; on the third sentence, text information 3, "Please confirm whether it has been paid; I have no record of your repayment here"; and on the fourth sentence, text information 4, "OK, I will verify again. Sorry to bother you, goodbye." The embodiment does not limit the speech recognition in detail; it can be configured according to the actual situation.
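Since the patent does not name a particular speech recognizer, the conversion step can be sketched with the recognizer injected as a dependency. `recognizer` below is a hypothetical callable (audio in, text out), not a real library API:

```python
# Convert each recorded trainee turn to normalized text information.
# Any ASR engine can be plugged in as `recognizer`; this wrapper only adds
# whitespace normalization so downstream matching sees clean text.
def transcribe_turns(audio_turns, recognizer):
    """Return one text information string per recorded trainee turn."""
    return [recognizer(audio).strip() for audio in audio_turns]
```

The resulting list lines up one-to-one with the trainee's turns, so "text information 1" through "text information 4" in the example are just its elements.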
Step 202: judge, using a preset matching mode, whether the trainee's text information contains a training knowledge point.
In this embodiment, after the trainee's speech information is converted into corresponding text information, the preset matching mode is used to judge whether the text information contains a training knowledge point. The preset matching mode may include at least one of keyword matching and semantic matching. Specifically, the following judgment modes can be used:
Mode one: when the preset matching mode includes keyword matching and the training knowledge point includes preset keywords, judge whether the trainee's text information contains a preset keyword; if it does, judge that the trainee's text information contains the training knowledge point.
For example, the preset keywords of the training knowledge points include "hello", "XX Bank", "owes A yuan", "minimum repayment B yuan", "C days overdue", and so on. The terminal judges whether the trainee's text information contains these preset keywords; if it does, the text information is judged to contain a training knowledge point. For example, text information 1 contains the preset keywords "hello" and "XX Bank", so it is judged to contain a training knowledge point; text information 2 contains the preset keywords "owes A yuan", "minimum repayment B yuan" and "C days overdue", so it is also judged to contain a training knowledge point.
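Mode one can be sketched directly. The knowledge-point names and keyword lists below mirror the example but are simplified English stand-ins, not the patent's actual configuration:

```python
# Keyword matching (mode one): a knowledge point counts as covered when any
# of its preset keywords appears in the trainee's text information.
KNOWLEDGE_POINT_KEYWORDS = {
    "greeting": ["hello", "xx bank"],
    "debt_reminder": ["owes", "minimum repayment", "overdue"],
}

def matched_knowledge_points(text):
    """Return the set of knowledge points whose keywords appear in the text."""
    lowered = text.lower()
    return {point
            for point, keywords in KNOWLEDGE_POINT_KEYWORDS.items()
            if any(keyword in lowered for keyword in keywords)}
```

Substring matching is the simplest choice; a real system might prefer tokenized or fuzzy matching so that minor wording differences still count.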
Mode two: when the preset matching mode includes semantic matching and the training knowledge point includes the semantic label of a standard sentence, input the trainee's text information into the pre-trained semantic model to obtain the semantic label of the text information; when the semantic label of the trainee's text information is consistent with that of the standard sentence, judge that the trainee's text information contains the training knowledge point.
For example, text information 2 is input into the pre-trained semantic model, and its semantic label is obtained as "dun for payment, including repayment amount and days overdue". The standard sentence is "Your credit card owes A yuan, the minimum repayment is B yuan, and it is C days overdue; please arrange repayment as soon as possible", whose semantic label is also "dun for payment, including repayment amount and days overdue". The semantic label of text information 2 is consistent with that of the standard sentence, so text information 2 is judged to contain the training knowledge point.
As another example, text information 3 is input into the pre-trained semantic model and its semantic label is obtained as "reconfirm". The semantic label of the standard sentence is also "reconfirm"; the semantic label of text information 3 is consistent with that of the standard sentence, so text information 3 is judged to contain the training knowledge point.
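Mode two only fixes the comparison between the predicted label and the standard sentence's label; the semantic model itself is open. The sketch below therefore injects the model as a callable, and the rule-based stand-in is purely illustrative (a real system would use the pre-trained model from the next section):

```python
# Semantic matching (mode two): the knowledge point is covered when the
# model's label for the trainee's text equals the standard sentence's label.
def contains_point_semantically(text, standard_label, semantic_model):
    """True when semantic_model(text) matches the standard sentence's label."""
    return semantic_model(text) == standard_label

def toy_semantic_model(text):
    """A hypothetical rule-based stand-in for the pre-trained semantic model."""
    lowered = text.lower()
    if "confirm" in lowered:
        return "reconfirm"
    if "repay" in lowered or "overdue" in lowered:
        return "dun_for_payment"
    return "other"
```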
It should be understood that keyword matching and semantic matching may be used alone or in combination. Judging with these preset matching modes improves the accuracy of the judgment and, under the premise that the customer service agent's reply must contain the training knowledge points, makes the matching more flexible.
In one embodiment, the step of training the semantic model comprises: obtaining multiple sample standard sentences; adding a sample semantic label to each sample standard sentence; and training a deep learning model with the sample standard sentences as input and their sample semantic labels as output, to obtain the semantic model.
For example, the sample standard sentences include "Your credit card has been overdue for C days, the total debt is A yuan and the minimum repayment is B yuan; may I ask when you can repay" and "Your credit card owes A yuan, the minimum repayment is B yuan, and it is C days overdue; please arrange repayment as soon as possible", and the sample semantic label "dun for payment, including repayment amount and days overdue" is added to these sample standard sentences. The sample standard sentences are input into the deep learning model, and the deep learning model outputs semantic labels; training ends when the labels output by the deep learning model are consistent with the sample semantic labels, yielding the semantic model.
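The train-on-labeled-standard-sentences step can be sketched with a simple bag-of-words vote classifier standing in for the deep learning model (the patent does not fix an architecture, so this substitution is an explicit assumption; the input/output contract is the same):

```python
from collections import Counter, defaultdict

def train_semantic_model(samples):
    """samples: list of (sample_standard_sentence, sample_semantic_label).
    Returns predict(sentence) -> label, a stand-in for the trained model."""
    word_votes = defaultdict(Counter)
    for sentence, label in samples:
        for word in sentence.lower().split():
            word_votes[word][label] += 1  # each word votes for its labels

    def predict(sentence):
        votes = Counter()
        for word in sentence.lower().split():
            votes.update(word_votes.get(word, Counter()))
        return votes.most_common(1)[0][0] if votes else None

    return predict
```

The returned `predict` plays the role of `semantic_model` in mode two: it maps a trainee sentence to the semantic label of the closest standard sentence.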
Step 203, determining the quantized value of the training effect according to the judging result.
In this embodiment, if the trainee user's text information includes a training knowledge point, the target score corresponding to each training knowledge point included in the trainee user's text information is determined according to the correspondence between knowledge points and score values; the target scores are then totalled to obtain the quantized value of the training effect.
For example, if text information 1 includes a training knowledge point and that training knowledge point corresponds to 25 points, it is determined that the trainee user obtains 25 points; if text information 4 does not include the training knowledge point and that training knowledge point corresponds to 25 points, it is determined that the trainee user does not obtain those 25 points. Finally, the scores obtained by the trainee user from text information 1 through text information 4 are totalled to obtain the overall score of the training effect.
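The tallying logic of step 203 can be sketched in a few lines. The knowledge points and score values below are invented for illustration; the judgements dict stands in for the per-knowledge-point results of the matching step.

```python
# Sketch of the scoring step: each training knowledge point carries a preset
# score value, and the quantized value of the training effect is the sum of
# the scores for the knowledge points the trainee's answers actually contain.

def quantify_training_effect(judgements, score_table):
    """judgements: {knowledge_point: True/False} from the matching step.
    score_table: {knowledge_point: score value}."""
    return sum(score_table[kp] for kp, hit in judgements.items() if hit)

score_table = {"greeting": 20, "verify identity": 25,
               "state debt": 25, "verify again": 25}
judgements = {"greeting": True, "verify identity": True,
              "state debt": True, "verify again": False}  # text 4 missed it

print(quantify_training_effect(judgements, score_table))  # 70
```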
In one of the embodiments, if the text information does not include a training knowledge point, the training knowledge point is displayed.
Specifically, if the text information does not include the training knowledge point, the trainee user does not score, and at the same time the training knowledge point may be displayed. For example, if text information 4 does not include "verify again", the trainee user does not score, and the standard sentence "Alright, I will go and verify again; sorry to have bothered you, goodbye." is displayed. Displaying the training knowledge point lets the trainee user understand where the points were lost, thereby improving the training effect.
Optionally, after the quantized value of the training effect is obtained, the multiple quantized values of the same trainee user may be sorted, the quantized values of multiple trainee users may be sorted, and a set of wrongly answered questions may be displayed; those skilled in the art can extend this on such a basis.
In the above step of quantifying the training effect according to the trainee user's voice information in the voice interaction, the trainee user's voice information is converted into corresponding text information, the preset matching modes are used to judge whether the trainee user's text information includes a training knowledge point, and the quantized value of the training effect is determined according to the judging result. Through the embodiments of the present invention, a variety of preset matching modes can be used to judge whether the trainee user's answer conforms to the standard answer, so that judging efficiency and judging accuracy are improved, and the trainee user's answers can be phrased more flexibly provided that they contain the training knowledge points.
In another embodiment, as shown in Fig. 4, this embodiment relates to an optional process in which the virtual user and the trainee user carry out voice interaction. On the basis of the embodiment shown in Fig. 2, the above step 102 may specifically include the following steps:
Step 301, displaying the user information of the virtual user.
In this embodiment, after the trainee user chooses the target training scene, the user information of the virtual user in the target training scene may be displayed. Specifically, the name, age, gender, and account situation of the virtual user are displayed. For example: name XXX, age 26, gender male, amount owed A yuan, minimum repayment B yuan, C days overdue.
Step 302, generating the voice information of the virtual user according to the preset text information of the virtual user.
In this embodiment, the text information of the virtual user can be preset; when the virtual user is created, the text information of the virtual user is converted into the voice information of the virtual user. Specifically, a speech synthesis technique may be used to compute the word sense, tone, and speech rate of the virtual user's text information, and a speech waveform is then synthesized according to the computed result. For example, if the text information of the virtual user includes "Hello?", "Yes, this is me", "I have already repaid", and "I confirm I have repaid", then 4 speech waveforms are synthesized with the speech synthesis technique, i.e. 4 pieces of voice information of the virtual user are generated. The embodiments of the present invention do not limit the way the voice information is generated, which can be configured according to the actual situation.
Step 303, playing voice according to the voice information of the virtual user, and collecting the voice information replied by the trainee user.
In this embodiment, the terminal displays the user information of the virtual user and plays the voice information of the virtual user in order. After seeing the virtual user's user information, the trainee user replies according to the virtual user's voice information played by the terminal. When the trainee user replies, the voice information of the trainee user's reply is collected. For example, after the terminal plays "Hello?", the trainee user's voice information "Hello, this is XX bank" is collected; the terminal then plays "Yes, this is me", after which the trainee user's voice information "Your credit card owes A yuan, the minimum repayment is B yuan, and it is C days overdue; please arrange repayment as soon as possible." is collected; the terminal then plays "I have already repaid". In this way, playing the virtual user's voice information and collecting the trainee user's voice information alternate, thereby realizing the voice interaction.
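The alternating play/collect loop of step 303 can be sketched as follows. Audio playback and microphone capture are stubbed out with plain strings here; in the terminal described by the document, these would be speech-waveform playback and voice capture. All names are illustrative.

```python
# Sketch of the alternating loop: play one virtual-user utterance,
# then collect one trainee reply, repeating until the script ends.

def run_voice_interaction(virtual_utterances, collect_reply):
    """Alternate between playing the virtual user's utterances and
    collecting the trainee's replies; return the collected replies."""
    replies = []
    for utterance in virtual_utterances:
        print(f"[virtual user] {utterance}")      # stands in for playback
        replies.append(collect_reply(utterance))  # stands in for recording
    return replies

scripted = iter(["Hello, this is XX bank",
                 "Your credit card owes A yuan, please arrange repayment",
                 "Alright, I will go and verify again, goodbye"])
replies = run_voice_interaction(
    ["Hello?", "Yes, this is me", "I have already repaid"],
    lambda _utt: next(scripted))
print(len(replies))  # 3
```

Passing `collect_reply` as a callable keeps the loop independent of the capture mechanism, which matches the document's statement that the generation of voice information is not limited and can be configured according to the actual situation.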
In the above voice interaction process, the user information of the virtual user is displayed, the voice information of the virtual user is generated according to the preset text information of the virtual user, voice is played according to the voice information of the virtual user, and the voice information replied by the trainee user is collected. Through the embodiments of the present invention, the trainee user can reply according to the user information of the virtual user displayed by the terminal and the virtual user's voice information played by the terminal; during the voice interaction, the trainee user is trained by simulating a real scene, so that the trainee user accumulates practical experience during the training process, thereby improving the training effect.
In another embodiment, as shown in Fig. 5, this embodiment relates to an optional process for generating the preset training scenes. On the basis of the embodiment shown in Fig. 2, the following steps may be included before the above step 101:
Step 401, receiving input scene information; wherein the scene information includes at least a scene title, the user information of the virtual user, the text information of the virtual user, the training knowledge points, and the score values corresponding to the training knowledge points.
In this embodiment, maintenance personnel can input scene information into the terminal. Specifically, the scene title, the user information of the virtual user, the text information of the virtual user, the training knowledge points, the score values corresponding to the training knowledge points, etc. are input. For example, input the scene title: credit card collection - repaid in person; input the user information of the virtual user: name XXX, age 26, gender male, amount owed A yuan, minimum repayment B yuan, C days overdue; input the text information of the virtual user: "Hello?", "Yes, this is me", "I have already repaid", "I confirm I have repaid"; input the training knowledge points: "Hello, XX bank", the sample standard sentences, or the semantic labels of the sample standard sentences; input the score values corresponding to the training knowledge points: 20 points, 25 points. The embodiments of the present invention do not limit the scene information in detail, which can be configured according to the actual situation. After the maintenance personnel input the scene information, the terminal receives the input scene information.
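One way to hold the scene information of step 401 is sketched below. The field names are hypothetical; the document only requires that a scene carries a title, the virtual user's profile and preset script, and scored knowledge points.

```python
# Illustrative container for the scene information received in step 401.

from dataclasses import dataclass, field

@dataclass
class TrainingScene:
    title: str
    virtual_user_info: dict          # name, age, gender, debt situation...
    virtual_user_script: list        # preset text information, played in order
    knowledge_points: dict = field(default_factory=dict)  # point -> score

scene = TrainingScene(
    title="credit card collection - repaid in person",
    virtual_user_info={"name": "XXX", "age": 26, "amount_owed": "A yuan"},
    virtual_user_script=["Hello?", "Yes, this is me", "I have already repaid"],
    knowledge_points={"greeting": 20, "verify again": 25},
)
print(scene.title)
```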
Step 402, generating a preset training scene according to the scene information.
In this embodiment, after receiving the scene information, the terminal generates a preset training scene according to the scene information. For example, on receiving the scene information of credit card collection, the preset training scene of credit card collection is generated, as shown in Table 1; on receiving the scene information of financial consulting, the preset training scene of financial consulting is generated; on receiving the scene information of points inquiry, the preset training scene of points inquiry is generated.
Table 1
It is to be appreciated that, since scene information of different fields can be received, preset training scenes of different fields can be generated, so that the training can be applied to different fields.
In the above generation of the preset training scenes, the input scene information is received, and the preset training scenes are generated according to the scene information. Through the embodiments of the present invention, preset training scenes of different fields can be generated upon receiving scene information of different fields, so that the training system can be applied to training in different fields, making the training more flexible.
In another embodiment, as shown in Fig. 6, this embodiment relates to an optional process of the training method, which may specifically include the following steps:
Step 501, receiving input scene information; wherein the scene information includes at least a scene title, the user information of the virtual user, the text information of the virtual user, the training knowledge points, and the score values corresponding to the training knowledge points.
Step 502, generating a preset training scene according to the scene information.
Step 503, displaying the user information of the virtual user.
Step 504, generating the voice information of the virtual user according to the preset text information of the virtual user.
Step 505, playing voice according to the voice information of the virtual user, and collecting the voice information replied by the trainee user.
Step 506, converting the trainee user's voice information into corresponding text information.
Step 507, judging whether the trainee user's text information includes a training knowledge point using the preset matching modes.
Specifically, in the case where the preset matching modes include keyword matching and the training knowledge point includes a preset keyword, it is judged whether the trainee user's text information includes the preset keyword; if it includes the preset keyword, it is determined that the trainee user's text information includes the training knowledge point. In the case where the preset matching modes include semantic matching and the training knowledge point includes the semantic label of a standard sentence, the trainee user's text information is input into a semantic model trained in advance to obtain the semantic label of the trainee user's text information; when the semantic label of the trainee user's text information is consistent with the semantic label of the standard sentence, it is determined that the trainee user's text information includes the training knowledge point.
Step 508, determining the quantized value of the training effect according to the judging result.
Specifically, if the trainee user's text information includes a training knowledge point, the target score corresponding to each training knowledge point included in the trainee user's text information is determined according to the correspondence between knowledge points and score values; the target scores are totalled to obtain the quantized value of the training effect. If the text information does not include the training knowledge point, the training knowledge point is displayed.
In the above training method, the maintenance personnel edit the preset training scenes, the trainee user chooses a target training scene from the multiple preset training scenes, and the terminal then simulates a virtual user according to the target training scene and carries out voice interaction with the trainee user. During the interaction, the terminal generates the voice information of the virtual user according to the text information of the virtual user, and collects the voice information replied by the trainee user. Then, using keyword matching, semantic matching, and similar modes, the terminal judges whether the content of the trainee user's replies includes the training knowledge points. Finally, the quantized value of the training effect is obtained according to the judging result. Through the embodiments of the present invention, different training scenes can be preset according to the training requirements, so as to adapt to training in different fields; after the trainee user chooses a target training scene, a virtual user is simulated to carry out voice interaction with the trainee user, i.e. the trainee user is trained by simulating a real scene, so that the trainee user accumulates practical experience during the training process; after the training, the quantized value of the training effect is also fed back to the trainee user, so that the trainee user understands the missing knowledge points, thereby improving the training effect.
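The overall flow of steps 501-508 can be sketched end to end. Speech playback and speech-to-text are stubbed with plain strings, only the keyword matching mode is shown, and all keywords and score values are invented; this is a structural sketch, not the document's implementation.

```python
# End-to-end sketch: take the trainee's replies (already converted to text),
# check each scored knowledge point with keyword matching, total the hits,
# and report the knowledge points that were missed (to be displayed).

def run_training_session(knowledge_scores, trainee_replies):
    """knowledge_scores: {preset keyword: score value}.
    trainee_replies: trainee's replies as text (stands in for speech-to-text).
    Returns (quantized value, missed knowledge points)."""
    total, missed = 0, []
    transcript = " ".join(trainee_replies)  # the converted text information
    for keyword, score in knowledge_scores.items():
        if keyword in transcript:           # keyword matching mode
            total += score
        else:
            missed.append(keyword)          # shown to the trainee afterwards
    return total, missed

score, missed = run_training_session(
    {"XX bank": 20, "arrange repayment": 25, "verify again": 25},
    ["Hello, this is XX bank", "please arrange repayment as soon as possible"])
print(score, missed)  # 45 ['verify again']
```

Feeding the missed list back to the trainee corresponds to the document's display of the training knowledge points after a miss, so the trainee understands where the points were lost.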
It should be understood that although the steps in the flow charts of Figs. 2-6 are shown in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless expressly stated otherwise herein, there is no strict limit on the order in which these steps are executed, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2-6 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential: they may be executed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 7, a training device is provided, comprising:
a target training scene obtaining module 601, configured to obtain the target training scene chosen by the trainee user from multiple preset training scenes;
a voice interaction module 602, configured to create a virtual user according to the target training scene and carry out voice interaction between the virtual user and the trainee user;
a training effect quantization module 603, configured to quantify the training effect according to the trainee user's voice information in the voice interaction.
In one of the embodiments, the training effect quantization module includes:
an information conversion submodule, configured to convert the trainee user's voice information into corresponding text information;
a judging submodule, configured to judge whether the trainee user's text information includes a training knowledge point using the preset matching modes;
a quantization submodule, configured to determine the quantized value of the training effect according to the judging result.
In one of the embodiments, the preset matching modes include keyword matching, and the training knowledge point includes a preset keyword;
the judging submodule includes:
a preset keyword judging unit, configured to judge whether the trainee user's text information includes the preset keyword;
a first determining unit, configured to determine, if the preset keyword is included, that the trainee user's text information includes the training knowledge point.
In one of the embodiments, the preset matching modes include semantic matching, and the training knowledge point includes the semantic label of a standard sentence;
the judging submodule includes:
a semantic label obtaining unit, configured to input the trainee user's text information into a semantic model trained in advance and obtain the semantic label of the trainee user's text information;
a second determining unit, configured to determine, when the semantic label of the trainee user's text information is consistent with the semantic label of the standard sentence, that the trainee user's text information includes the training knowledge point.
In one of the embodiments, the device further includes:
a sample standard sentence obtaining module, configured to obtain multiple sample standard sentences;
a sample semantic label adding module, configured to add a sample semantic label to each sample standard sentence;
a semantic model training module, configured to take the multiple sample standard sentences as the input of the deep learning model and the sample semantic labels of the sample standard sentences as the output of the deep learning model, train the deep learning model, and obtain the semantic model.
In one of the embodiments, the training effect quantization module includes:
a target score determining submodule, configured to determine, if the trainee user's text information includes a training knowledge point, the target score corresponding to each training knowledge point included in the trainee user's text information according to the correspondence between knowledge points and score values;
a quantized value obtaining submodule, configured to total the target scores and obtain the quantized value of the training effect.
In one of the embodiments, the device further includes:
a training knowledge point display module, configured to display the training knowledge point if the text information does not include it.
In one of the embodiments, the voice interaction module includes:
a user information display submodule, configured to display the user information of the virtual user;
a voice information generating submodule, configured to generate the voice information of the virtual user according to the preset text information of the virtual user;
a play and collection submodule, configured to play voice according to the voice information of the virtual user and collect the voice information replied by the trainee user.
In one of the embodiments, the device further includes, before the target training scene obtaining module:
a scene information receiving module, configured to receive input scene information; wherein the scene information includes at least a scene title, the user information of the virtual user, the text information of the virtual user, the training knowledge points, and the score values corresponding to the training knowledge points;
a preset training scene generating module, configured to generate the preset training scenes according to the scene information.
For the specific limitations of the training device, reference may be made to the limitations of the training method above, and details are not repeated here. The modules in the above training device may be realized wholly or partly through software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer equipment in the form of hardware, or may be stored in a memory in the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer equipment is provided; the computer equipment may be a terminal, and its internal structure diagram may be as shown in Fig. 8. The computer equipment includes a processor, a memory, a network interface, a display screen, and an input unit connected through a system bus. The processor of the computer equipment is configured to provide computing and control capabilities. The memory of the computer equipment includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer equipment is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a training method. The display screen of the computer equipment may be a liquid crystal display screen or an electronic ink display screen, and the input unit of the computer equipment may be a touch layer covering the display screen, may be a key, a trackball, or a trackpad arranged on the housing of the computer equipment, or may be an external keyboard, trackpad, mouse, etc.
It will be understood by those skilled in the art that the structure shown in Fig. 8 is only a block diagram of part of the structure relevant to the solution of the present application and does not constitute a limitation on the computer equipment to which the solution of the present application is applied; the specific computer equipment may include more or fewer components than shown in the figure, or combine certain components, or have a different component layout.
In one embodiment, a computer equipment is provided, including a memory and a processor. A computer program is stored in the memory, and the processor, when executing the computer program, performs the following steps:
obtaining the target training scene chosen by the trainee user from multiple preset training scenes;
creating a virtual user according to the target training scene, and carrying out voice interaction between the virtual user and the trainee user;
quantifying the training effect according to the trainee user's voice information in the voice interaction.
In one embodiment, the processor, when executing the computer program, also performs the following steps:
converting the trainee user's voice information into corresponding text information;
judging whether the trainee user's text information includes a training knowledge point using the preset matching modes;
determining the quantized value of the training effect according to the judging result.
In one embodiment, the above preset matching modes include keyword matching, and the training knowledge point includes a preset keyword; the processor, when executing the computer program, also performs the following steps:
judging whether the trainee user's text information includes the preset keyword;
if the preset keyword is included, determining that the trainee user's text information includes the training knowledge point.
In one embodiment, the above preset matching modes include semantic matching, and the training knowledge point includes the semantic label of a standard sentence;
the processor, when executing the computer program, also performs the following steps:
inputting the trainee user's text information into a semantic model trained in advance, and obtaining the semantic label of the trainee user's text information;
when the semantic label of the trainee user's text information is consistent with the semantic label of the standard sentence, determining that the trainee user's text information includes the training knowledge point.
In one embodiment, the processor, when executing the computer program, also performs the following steps:
obtaining multiple sample standard sentences;
adding a sample semantic label to each sample standard sentence;
taking the multiple sample standard sentences as the input of the deep learning model and the sample semantic labels of the sample standard sentences as the output of the deep learning model, training the deep learning model to obtain the semantic model.
In one embodiment, the processor, when executing the computer program, also performs the following steps:
if the trainee user's text information includes a training knowledge point, determining, according to the correspondence between knowledge points and score values, the target score corresponding to each training knowledge point included in the trainee user's text information;
totalling the target scores to obtain the quantized value of the training effect.
In one embodiment, the processor, when executing the computer program, also performs the following step:
if the text information does not include the training knowledge point, displaying the training knowledge point.
In one embodiment, the processor, when executing the computer program, also performs the following steps:
displaying the user information of the virtual user;
generating the voice information of the virtual user according to the preset text information of the virtual user;
playing voice according to the voice information of the virtual user, and collecting the voice information replied by the trainee user.
In one embodiment, the processor, when executing the computer program, also performs the following steps:
receiving input scene information; wherein the scene information includes at least a scene title, the user information of the virtual user, the text information of the virtual user, the training knowledge points, and the score values corresponding to the training knowledge points;
generating the preset training scenes according to the scene information.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; the computer program, when executed by a processor, performs the following steps:
obtaining the target training scene chosen by the trainee user from multiple preset training scenes;
creating a virtual user according to the target training scene, and carrying out voice interaction between the virtual user and the trainee user;
quantifying the training effect according to the trainee user's voice information in the voice interaction.
In one embodiment, the computer program, when executed by the processor, also performs the following steps:
converting the trainee user's voice information into corresponding text information;
judging whether the trainee user's text information includes a training knowledge point using the preset matching modes;
determining the quantized value of the training effect according to the judging result.
In one embodiment, the above preset matching modes include keyword matching, and the training knowledge point includes a preset keyword; the computer program, when executed by the processor, also performs the following steps:
judging whether the trainee user's text information includes the preset keyword;
if the preset keyword is included, determining that the trainee user's text information includes the training knowledge point.
In one embodiment, the above preset matching modes include semantic matching, and the training knowledge point includes the semantic label of a standard sentence;
the computer program, when executed by the processor, also performs the following steps:
inputting the trainee user's text information into a semantic model trained in advance, and obtaining the semantic label of the trainee user's text information;
when the semantic label of the trainee user's text information is consistent with the semantic label of the standard sentence, determining that the trainee user's text information includes the training knowledge point.
In one embodiment, the computer program, when executed by the processor, also performs the following steps:
obtaining multiple sample standard sentences;
adding a sample semantic label to each sample standard sentence;
taking the multiple sample standard sentences as the input of the deep learning model and the sample semantic labels of the sample standard sentences as the output of the deep learning model, training the deep learning model to obtain the semantic model.
In one embodiment, the computer program, when executed by the processor, also performs the following steps:
if the trainee user's text information includes a training knowledge point, determining, according to the correspondence between knowledge points and score values, the target score corresponding to each training knowledge point included in the trainee user's text information;
totalling the target scores to obtain the quantized value of the training effect.
In one embodiment, the computer program, when executed by the processor, also performs the following step:
if the text information does not include the training knowledge point, displaying the training knowledge point.
In one embodiment, the computer program, when executed by the processor, also performs the following steps:
displaying the user information of the virtual user;
generating the voice information of the virtual user according to the preset text information of the virtual user;
playing voice according to the voice information of the virtual user, and collecting the voice information replied by the trainee user.
In one embodiment, the computer program, when executed by the processor, also performs the following steps:
receiving input scene information; wherein the scene information includes at least a scene title, the user information of the virtual user, the text information of the virtual user, the training knowledge points, and the score values corresponding to the training knowledge points;
generating the preset training scenes according to the scene information.
Those of ordinary skill in the art will appreciate that all or part of the processes in the above embodiment methods can be completed by instructing relevant hardware through a computer program; the computer program can be stored in a non-volatile computer-readable storage medium, and the computer program, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
The technical features of the above embodiments can be combined arbitrarily. For conciseness of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combination of these technical features, they should all be considered within the scope of this specification.
The above embodiments only express several implementations of the application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the application, and these all belong to the protection scope of the application. Therefore, the protection scope of the application patent shall be subject to the appended claims.
Claims (12)
1. A training method, characterized in that the method comprises:
obtaining a target training scene chosen by a trainee user from multiple preset training scenes;
creating a virtual user according to the target training scene, and carrying out voice interaction between the virtual user and the trainee user;
quantifying a training effect according to the trainee user's voice information in the voice interaction.
2. The method according to claim 1, characterized in that the quantifying the training effect according to the trainee user's voice information in the voice interaction comprises:
converting the trainee user's voice information into corresponding text information;
judging whether the trainee user's text information includes a training knowledge point using preset matching modes;
determining a quantized value of the training effect according to a judging result.
3. The method according to claim 2, characterized in that the preset matching modes include keyword matching, and the training knowledge point includes a preset keyword;
the judging whether the trainee user's text information includes a training knowledge point using preset matching modes comprises:
judging whether the trainee user's text information includes the preset keyword;
if the preset keyword is included, determining that the trainee user's text information includes the training knowledge point.
4. The method according to claim 2, characterized in that the preset matching modes include semantic matching, and the training knowledge point includes a semantic label of a standard sentence;
the judging whether the trainee user's text information includes a training knowledge point using preset matching modes comprises:
inputting the trainee user's text information into a semantic model trained in advance, and obtaining a semantic label of the trainee user's text information;
when the semantic label of the trainee user's text information is consistent with the semantic label of the standard sentence, determining that the trainee user's text information includes the training knowledge point.
5. The method according to claim 4, characterized in that the method further comprises:
obtaining a plurality of sample standard sentences;
adding a sample semantic label to each sample standard sentence;
taking the plurality of sample standard sentences as the input of a deep learning model and the sample semantic labels of the sample standard sentences as the output of the deep learning model, and training the deep learning model to obtain the semantic model.
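Claims 4 and 5 describe training a model on (standard sentence, semantic label) pairs and then labeling the trained user's text. The patent specifies a deep learning model; the stand-in below uses simple token overlap purely to illustrate the training-data shape and the predict step, and every name in it is an assumption:

```python
# Minimal stand-in for the semantic model of claims 4-5. The patent calls
# for a deep learning model; this token-overlap classifier only illustrates
# the data flow: sample standard sentences in, semantic labels out.
class SemanticModel:
    def __init__(self):
        self.samples = []  # list of (token_set, semantic_label)

    def train(self, sample_sentences, sample_labels):
        # Claim 5: each sample standard sentence is a model input,
        # its sample semantic label the expected output.
        for sent, label in zip(sample_sentences, sample_labels):
            self.samples.append((set(sent.lower().split()), label))

    def predict(self, text):
        # Claim 4: return the semantic label of the closest standard
        # sentence, here "closest" meaning largest token overlap.
        tokens = set(text.lower().split())
        best = max(self.samples, key=lambda s: len(tokens & s[0]))
        return best[1]

model = SemanticModel()
model.train(["i would like to check my account balance",
             "please cancel my subscription"],
            ["query_balance", "cancel_service"])
label = model.predict("can you check the balance on my account")
# label == "query_balance"
```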
6. The method according to claim 2, characterized in that determining the quantized value of the training effect according to the judgment result comprises:
if the text information of the trained user contains the training knowledge point, determining, according to a correspondence between knowledge points and score values, a target score corresponding to each training knowledge point contained in the text information of the trained user;
counting each target score to obtain the quantized value of the training effect.
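The quantization step of claim 6 maps each matched knowledge point to a score and aggregates the scores. The sketch below assumes a dictionary score table and simple summation as the counting rule; both are illustrative choices, not details given by the patent:

```python
# Sketch of the score-quantization step (claim 6): each training knowledge
# point found in the trained user's text maps to a target score, and the
# quantized training effect aggregates them. The score table and the
# summing rule are illustrative assumptions.
def quantify_effect(matched_points: list[str], score_table: dict[str, int]) -> int:
    """Sum the target scores of every knowledge point matched in the user's text."""
    return sum(score_table.get(point, 0) for point in matched_points)

score = quantify_effect(["greeting", "policy_terms"],
                        {"greeting": 10, "policy_terms": 30})
# score == 40
```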
7. The method according to claim 6, characterized in that the method further comprises:
if the text information does not contain the training knowledge point, displaying the training knowledge point.
8. The method according to claim 1, characterized in that creating the virtual user according to the target training scene and conducting voice interaction between the virtual user and the trained user comprises:
displaying user information of the virtual user;
generating voice information of the virtual user according to preset text information of the virtual user;
playing voice according to the voice information of the virtual user, and collecting voice information replied by the trained user.
9. The method according to claim 1, characterized in that before obtaining the target training scene selected by the trained user from the plurality of preset training scenes, the method further comprises:
receiving input scene information, wherein the scene information comprises at least a scene name, the user information of the virtual user, the text information of the virtual user, training knowledge points, and score values corresponding to the training knowledge points;
generating the preset training scenes according to the scene information.
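Claim 9's scene information can be pictured as a structured record assembled into a preset scene. The field names and record shape below are assumptions made for illustration; the patent does not prescribe a storage format:

```python
# Hypothetical shape of the scene information received in claim 9; every
# field name here is an assumption, not taken from the patent.
def build_preset_scene(scene_name, virtual_user_info, virtual_user_texts,
                       knowledge_scores):
    """Assemble a preset training scene from the received scene information."""
    return {
        "scene_name": scene_name,
        "virtual_user": virtual_user_info,         # shown to the trained user
        "virtual_user_texts": virtual_user_texts,  # source text for the virtual user's voice
        "knowledge_points": knowledge_scores,      # training knowledge point -> score value
    }

scene = build_preset_scene("complaint handling",
                           {"name": "Ms. Li", "role": "dissatisfied customer"},
                           ["My claim was rejected. Why?"],
                           {"apologize": 20, "explain_policy": 30})
```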
10. A training apparatus, characterized in that the apparatus comprises:
a target training scene obtaining module, configured to obtain a target training scene selected by a trained user from a plurality of preset training scenes;
a voice interaction module, configured to create a virtual user according to the target training scene and conduct voice interaction between the virtual user and the trained user;
a training effect quantization module, configured to quantify a training effect according to voice information of the trained user in the voice interaction.
11. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 9.
12. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910520923.2A CN110458732A (en) | 2019-06-17 | 2019-06-17 | Training Methodology, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110458732A true CN110458732A (en) | 2019-11-15 |
Family
ID=68481025
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910520923.2A Pending CN110458732A (en) | 2019-06-17 | 2019-06-17 | Training Methodology, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110458732A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102034475A (en) * | 2010-12-08 | 2011-04-27 | 中国科学院自动化研究所 | Method for interactively scoring open short conversation by using computer |
US20170053546A1 (en) * | 2015-08-19 | 2017-02-23 | Boe Technology Group Co., Ltd. | Teaching system and working method thereof |
CN109298779A (en) * | 2018-08-10 | 2019-02-01 | 济南奥维信息科技有限公司济宁分公司 | Virtual training System and method for based on virtual protocol interaction |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110910694A (en) * | 2019-11-28 | 2020-03-24 | 大唐融合通信股份有限公司 | Intelligent customer service training system |
CN111553555A (en) * | 2020-03-27 | 2020-08-18 | 深圳追一科技有限公司 | Training method, training device, computer equipment and storage medium |
CN112053597A (en) * | 2020-10-13 | 2020-12-08 | 北京灵伴即时智能科技有限公司 | Artificial seat training and checking method and system |
CN112053597B (en) * | 2020-10-13 | 2023-02-21 | 北京灵伴即时智能科技有限公司 | Artificial seat training and checking method and system |
CN112328742A (en) * | 2020-11-03 | 2021-02-05 | 平安科技(深圳)有限公司 | Training method and device based on artificial intelligence, computer equipment and storage medium |
WO2022095378A1 (en) * | 2020-11-03 | 2022-05-12 | 平安科技(深圳)有限公司 | Artificial-intelligence-based training method and apparatus, and computer device and storage medium |
CN112328742B (en) * | 2020-11-03 | 2023-08-18 | 平安科技(深圳)有限公司 | Training method and device based on artificial intelligence, computer equipment and storage medium |
CN113377200A (en) * | 2021-06-22 | 2021-09-10 | 平安科技(深圳)有限公司 | Interactive training method and device based on VR technology and storage medium |
CN113377200B (en) * | 2021-06-22 | 2023-02-24 | 平安科技(深圳)有限公司 | Interactive training method and device based on VR technology and storage medium |
CN113821619A (en) * | 2021-08-31 | 2021-12-21 | 前海人寿保险股份有限公司 | Training method, device, system and computer readable storage medium |
CN114117755A (en) * | 2021-11-11 | 2022-03-01 | 泰康保险集团股份有限公司 | Simulation drilling method and device, computing equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110458732A (en) | Training Methodology, device, computer equipment and storage medium | |
CN112346567B (en) | Virtual interaction model generation method and device based on AI (Artificial Intelligence) and computer equipment | |
AU2023200468A1 (en) | Intelligent systems based training of customer service agents | |
CN110472060B (en) | Topic pushing method and device, computer equipment and storage medium | |
US20180124437A1 (en) | System and method for video data collection | |
CN110232183A (en) | Keyword extraction model training method, keyword extracting method, device and storage medium | |
US11140360B1 (en) | System and method for an interactive digitally rendered avatar of a subject person | |
CN107423851A (en) | Adaptive learning method based on learning style context aware | |
CN109033418A (en) | A kind of the intelligent recommendation method and facility for study of learning Content | |
WO2006125347A1 (en) | A homework assignment and assessment system for spoken language education and testing | |
CN109389427A (en) | Questionnaire method for pushing, device, computer equipment and storage medium | |
CN109582796A (en) | Generation method, device, equipment and the storage medium of enterprise's public sentiment event network | |
CN109784639A (en) | Recruitment methods, device, equipment and medium on line based on intelligent scoring | |
CN109543011A (en) | Question and answer data processing method, device, computer equipment and storage medium | |
CN110321409A (en) | Secondary surface method for testing, device, equipment and storage medium based on artificial intelligence | |
CN113377200A (en) | Interactive training method and device based on VR technology and storage medium | |
US20170358234A1 (en) | Method and Apparatus for Inquiry Driven Learning | |
Fauzia et al. | Implementation of chatbot on university website using RASA framework | |
Díaz et al. | Are requirements elicitation sessions influenced by participants' gender? An empirical experiment | |
Harahap et al. | Teacher-students discourse in English teaching at high school (Classroom discourse analysis) | |
KR102534275B1 (en) | Teminal for learning language, system and method for learning language using the same | |
CN111553555A (en) | Training method, training device, computer equipment and storage medium | |
CN116645251A (en) | Panoramic knowledge training method, device, equipment and medium based on artificial intelligence | |
US20230015312A1 (en) | System and Method for an Interactive Digitally Rendered Avatar of a Subject Person | |
CN116153152A (en) | Cloud teaching platform and method for online course learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20191115 |