CN110619588B - Evaluation method and device for scene exercise, storage medium and intelligent device - Google Patents

Evaluation method and device for scene exercise, storage medium and intelligent device Download PDF

Info

Publication number
CN110619588B
CN110619588B CN201910752171.2A CN201910752171A CN110619588B CN 110619588 B CN110619588 B CN 110619588B CN 201910752171 A CN201910752171 A CN 201910752171A CN 110619588 B CN110619588 B CN 110619588B
Authority
CN
China
Prior art keywords
voice information
word
exercise
word segmentation
sentence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910752171.2A
Other languages
Chinese (zh)
Other versions
CN110619588A (en
Inventor
姚雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN201910752171.2A priority Critical patent/CN110619588B/en
Publication of CN110619588A publication Critical patent/CN110619588A/en
Application granted granted Critical
Publication of CN110619588B publication Critical patent/CN110619588B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • G06Q50/2057Career enhancement or continuing education service
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/08Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Human Resources & Organizations (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention provides a method, a device, a storage medium and intelligent equipment for evaluating scene exercises, which comprise the following steps: acquiring a drilling scene selected by a student; acquiring first voice information input by the learner in the drilling scene; determining an exercise interaction script corresponding to the first voice information; acquiring second voice information of the learner; playing feedback information corresponding to the second voice information, and acquiring a preset score of the feedback information; converting a plurality of pieces of second voice information obtained during execution of the exercise interaction script into sentence texts, and performing word segmentation processing on the sentence texts to obtain each word segmentation forming the sentence texts; and evaluating the exercise of the learner according to the preset scores of the feedback information corresponding to each word segmentation of the sentence text and the second voice information. The invention can objectively evaluate the exercise of the students, and enables the students to know own exercise effect so as to improve the exercise efficiency.

Description

Evaluation method and device for scene exercise, storage medium and intelligent device
Technical Field
The invention relates to the technical field of intelligent interaction, in particular to a scene drilling evaluation method, a device, a storage medium and intelligent equipment.
Background
In conventional training of the products of the salesmen, the training teacher provides a scenario for the salesmen to practice, the scenario includes product introduction and related questions and answers of the products, and the salesmen perform simulation practice according to the scenario. However, under the condition of limited resources and time, the salesmen generally perform exercise with teachers or other salesmen according to the script, and the exercise effect is judged by the teacher or the salesmen who perform exercise together, that is, the evaluation of the exercise effect is subjective, and lacks objective basis, and the salesmen cannot improve own dialogue according to objective exercise result, so that the training efficiency is not high enough.
Disclosure of Invention
The embodiment of the invention provides a scene exercise evaluation method, a device, a storage medium and intelligent equipment, which are used for solving the problems that evaluation of exercise effects is subjective, objective basis is lacking, a salesman cannot improve own dialogue according to objective exercise results, and training efficiency is not high enough in the prior art.
A first aspect of an embodiment of the present invention provides a method for evaluating a scene exercise, including:
Acquiring a drilling scene selected by a student;
Acquiring first voice information input by the learner in the drilling scene;
determining a drilling interaction script corresponding to the first voice information from a plurality of drilling interaction scripts corresponding to the drilling scene;
acquiring second voice information of the learner based on the exercise interaction script corresponding to the first voice information;
playing feedback information corresponding to the second voice information according to the exercise interaction script corresponding to the first voice information, and obtaining a preset score of the feedback information;
After the exercise interaction script is executed, converting a plurality of pieces of second voice information acquired during execution of the exercise interaction script into sentence texts, and performing word segmentation processing on the sentence texts to obtain each word segmentation forming the sentence texts;
And evaluating the exercise of the learner according to the preset scores of the feedback information corresponding to each word segmentation of the sentence text and the second voice information.
A second aspect of an embodiment of the present invention provides an evaluation apparatus for scene exercises, including:
the training scene acquisition unit is used for acquiring training scenes selected by students;
The first voice information acquisition unit is used for acquiring first voice information input by the learner in the drilling scene;
a drilling interaction script determining unit, configured to determine a drilling interaction script corresponding to the first voice information from a plurality of drilling interaction scripts corresponding to the drilling scene;
the second voice information acquisition unit is used for acquiring second voice information of the student based on the exercise interaction script corresponding to the first voice information;
The feedback information acquisition unit is used for playing the feedback information corresponding to the second voice information according to the exercise interaction script corresponding to the first voice information and acquiring the preset score of the feedback information;
the word segmentation processing unit is used for converting a plurality of pieces of second voice information acquired during execution of the exercise interaction script into sentence texts after the execution of the exercise interaction script is completed, and performing word segmentation processing on the sentence texts to obtain each word segmentation forming the sentence texts;
And the exercise evaluation unit is used for evaluating the exercise of the learner according to the preset scores of the feedback information corresponding to each word of the sentence text and the plurality of pieces of second voice information.
A third aspect of an embodiment of the present invention provides an intelligent device, including a memory and a processor, where the memory stores a computer program that can run on the processor, and when the processor executes the computer program, the processor implements the following steps:
Acquiring a drilling scene selected by a student;
Acquiring first voice information input by the learner in the drilling scene;
determining a drilling interaction script corresponding to the first voice information from a plurality of drilling interaction scripts corresponding to the drilling scene;
acquiring second voice information of the learner based on the exercise interaction script corresponding to the first voice information;
playing feedback information corresponding to the second voice information according to the exercise interaction script corresponding to the first voice information, and obtaining a preset score of the feedback information;
After the exercise interaction script is executed, converting a plurality of pieces of second voice information acquired during execution of the exercise interaction script into sentence texts, and performing word segmentation processing on the sentence texts to obtain each word segmentation forming the sentence texts;
And evaluating the exercise of the learner according to the preset scores of the feedback information corresponding to each word segmentation of the sentence text and the second voice information.
A fourth aspect of the embodiments of the present invention provides a computer readable storage medium storing a computer program which when executed by a processor performs the steps of:
Acquiring a drilling scene selected by a student;
Acquiring first voice information input by the learner in the drilling scene;
determining a drilling interaction script corresponding to the first voice information from a plurality of drilling interaction scripts corresponding to the drilling scene;
acquiring second voice information of the learner based on the exercise interaction script corresponding to the first voice information;
The method comprises the steps of obtaining feedback information corresponding to second voice information in a drilling interaction script corresponding to the first voice information, and obtaining preset scores of the feedback information;
After the exercise interaction script is executed, converting a plurality of pieces of second voice information acquired during execution of the exercise interaction script into sentence texts, and performing word segmentation processing on the sentence texts to obtain each word segmentation forming the sentence texts;
And evaluating the exercise of the learner according to the preset scores of the feedback information corresponding to each word segmentation of the sentence text and the second voice information.
According to the embodiment of the invention, the first voice information input by a student is acquired through acquiring the exercise scene selected by the student, the first voice information input by the student is acquired in the exercise scene, the exercise interaction script corresponding to the first voice information is determined from a plurality of exercise interaction scripts corresponding to the exercise scene, then the second voice information of the student is acquired based on the exercise interaction script corresponding to the first voice information, namely, the exercise state is entered, the feedback information corresponding to the second voice information is played according to the exercise interaction script corresponding to the first voice information, the exercise interaction with the student is realized, the effect of the exercise simulation is more realistic, the preset score of the feedback information is acquired, after the exercise interaction script is executed, a plurality of pieces of second voice information acquired in the exercise interaction script are converted into sentence texts, the sentence texts are subjected to word segmentation processing, the exercise interaction script is obtained, the exercise interaction of each word forming the sentence texts is carried out according to each word segmentation of the sentence texts and the exercise interaction script corresponding to the plurality of pieces of the exercise interaction script, the exercise interaction script is further known by the student according to the preset word information, and the corresponding to the exercise interaction script is further improved, the result of the student can be evaluated by the student, and the evaluation result can be further improved, and the evaluation is further carried out by the student.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an implementation of a method for evaluating a scene exercise according to an embodiment of the present invention;
Fig. 2 is a flowchart of a specific implementation of a training step of a tf_idf matrix in the evaluation method for scene drilling according to the embodiment of the present invention;
Fig. 3 is a flowchart of a specific implementation of an evaluation method S105 for scene drilling according to an embodiment of the present invention;
Fig. 4 is a flowchart of an implementation of a scenario exercise evaluation method B3 provided in an embodiment of the present invention;
fig. 5 is a block diagram of an evaluation apparatus for scene exercises provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of an intelligent device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more comprehensible, the technical solutions in the embodiments of the present invention are described in detail below with reference to the accompanying drawings, and it is apparent that the embodiments described below are only some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 shows an implementation flow of the scenario exercise evaluation method provided by the embodiment of the present invention, and the method flow includes steps S101 to S107. The specific implementation principle of each step is as follows:
s101: and acquiring the exercise scene selected by the learner.
In the embodiment of the invention, a plurality of drilling scenes are provided for students to select at their own, and different drilling scenes can simulate different real dialogue environments. The drill scene includes a telephone scene, a face-to-face scene, and a multi-person scene. Specifically, in the telephone scene, the intelligent device provides a scene of telephone simulation, namely, simulation interaction is carried out with a learner through voice, and the scene is suitable for the learner to drill and communicate with a client telephone; in a face-to-face scene, the intelligent equipment provides video images and voices of virtual clients to simulate interaction with students, and the scene is suitable for the students to exercise and communicate with the clients in a video mode or in a face-to-face mode; in a multi-person scenario, where the smart device provides video images and voices including multiple virtual clients for simulated interaction with a learner, the scenario is suitable for the learner to drill while communicating with multiple clients.
Further, in the embodiment of the invention, a scene identifier is set for identifying the exercise scene, and meanwhile, the mapping relation between the scene identifier and the script library can be established. And the script library is used for storing the exercise interaction script. And determining the exercise interaction script corresponding to the exercise scene selected by the learner by inquiring a database for storing the corresponding relation between the exercise scene and the exercise interaction script. The exercise interaction script comprises interaction content of the intelligent equipment and the trainee. And the intelligent equipment executes the searched exercise interaction script corresponding to the exercise scene.
Optionally, the exercise scenes to which different products to be promoted are applicable may be different, and in the embodiment of the present invention, a mapping relationship between the products and the exercise scenes is established. The product scene comparison table is preset, the product scene comparison table comprises the corresponding relation between the product identifiers and the scene identifiers, different products correspond to different exercise scenes, a learner determines the product to be exercised first, and the exercise scene is determined according to the product identifiers of the product to be exercised determined by the learner and the preset product scene comparison table. Further, if more than one exercise scene corresponding to the product is needed, after the trainee determines the product identification of the product to be exercised, the exercise scene is randomly selected from the exercise scenes corresponding to the product identification to interact with the trainee, so that the strain capacity of the trainee can be trained.
S102: and under the exercise scene, acquiring the first voice information input by the learner.
S103: determining a drilling interaction script corresponding to the first voice information from a plurality of drilling interaction scripts corresponding to the drilling scene;
In the embodiment of the invention, a plurality of exercise interaction scripts corresponding to the exercise scene are provided, after the learner selects the exercise scene, voice detection is performed, first voice information input by the learner is obtained, specifically, the first voice information is recognized, voice characteristics of the first voice information are obtained, the voice characteristics comprise speech speed and volume, and the exercise interaction script corresponding to the voice characteristics of the first voice information is determined from the plurality of exercise interaction scripts corresponding to the exercise scene.
Optionally, the first voice information is identified, the first voice information may include a product identifier of a product to be exercised, the product identifier includes at least one of a product name or a product number, and according to the identified product identifier in the first voice information, a database storing mapping relations among the product identifier, a scene identifier of an exercise scene and a scene exercise interaction script is queried, and an exercise interaction script corresponding to the product identifier is selected from a script library corresponding to the scene identifier. And more than one exercise interaction script corresponding to the product identifier, and further, searching the exercise interaction script corresponding to the voice characteristics of the first voice information from the exercise interaction scripts corresponding to the product identifier according to the voice characteristics of the first voice information.
S104: and acquiring second voice information of the learner based on the exercise interaction script corresponding to the first voice information.
Specifically, after determining the drill interaction script, the intelligent device starts to collect the second voice information of the learner in real time. The second voice information includes a time when the voice starts and a time when the voice ends, and in the embodiment of the invention, the microphone array can be used for picking up the voice in all directions in real time, and the first voice information and the second voice information can be received. Optionally, when the exercise scene selected by the learner is a phone scene, picking up the voice in real time by using the unidirectional microphone, and receiving the first voice information or the second voice information; when the exercise scene selected by the learner is a face-to-face scene or a multi-person scene, the microphone array is used for picking up the voice in all directions in real time, and the first voice information or the second voice information is received.
Optionally, in the phone scenario and the face-to-face scenario, a student is generally performing exercise, before the second voice information input by the student when the exercise interaction script is executed is obtained, the identity of the student is obtained, after the second voice information input by the student when the exercise interaction script is executed is obtained, the obtained second voice information is stored in a second voice information set with the identity of the student as a tag, and the second voice information of the exercise of the student is stored according to the identity of the student. In the embodiment of the invention, the identification of the learner is also included in the identification result of the first voice information.
In the embodiment of the invention, if the exercise scene is a multi-person scene, a plurality of students perform exercises simultaneously, and when the exercise interaction script is executed, the second voice information input by the students comes from a plurality of students, so that the second voice information is required to be classified according to the students in order to distinguish from which student the second voice information comes. Specifically, before second voice information input by a student is acquired, acquiring a drilling role selected by the student, acquiring the second voice information input by the student when executing the drilling interaction script based on the drilling role, and classifying the second voice information of the student according to the drilling role. Further, in order to improve accuracy of classification of the second voice information, the acquired second voice information is labeled so as to distinguish a student corresponding to the second voice information. Specifically, voiceprint recognition is performed on the acquired second voice information, voiceprint features of the second voice information are acquired, and the second voice information is marked according to the voiceprint features. The second speech information marked with the same voiceprint features is classified as second speech information of the same learner.
S105: and playing the feedback information corresponding to the second voice information according to the exercise interaction script corresponding to the first voice information, and obtaining the preset score of the feedback information.
In the embodiment of the invention, the played feedback information corresponds to the second voice information one by one. Specifically, the exercise interaction script includes an exercise preset standard statement, feedback information corresponding to the preset standard statement, and a preset score of the feedback information, where the preset score of the feedback information is initially preset. Carrying out semantic recognition on the second voice information, searching a preset standard statement which is the same as the semantic recognition result from the exercise interaction script according to the semantic recognition result, playing feedback information corresponding to the preset standard statement, and obtaining a preset score of the feedback information; and if the preset standard statement which is the same as the semantic recognition result of the second voice information is found in the exercise interaction script, playing default feedback information (namely designated feedback information), and acquiring a preset score of the default feedback information. In the embodiment of the invention, the feedback information is a response to the second voice information of the student, and the feedback information corresponding to the second voice information is played to give feedback to the student to cooperate with the exercise of the student.
S106: after the exercise interaction script is executed, converting a plurality of pieces of second voice information acquired during execution of the exercise interaction script into sentence texts, and performing word segmentation processing on the sentence texts to obtain each word segmentation forming the sentence texts.
In the embodiment of the invention, during the execution of the exercise interaction script, a learner inputs one or more pieces of second voice information, acquires the second voice information of the learner in real time, converts the second voice information acquired from the beginning of the execution of the exercise interaction script to the end of the execution into sentence text, and performs word segmentation processing on the sentence text to obtain each word segment forming the sentence text. The word segmentation processing refers to the step of segmenting a sentence text into individual words, namely, each word segmentation, and in the step, the sentence text can be segmented according to a general dictionary, so that the words which are segmented are normal words, and the words which are not in the dictionary are segmented into individual words. When words can be formed in the front-back direction, for example, "good books" are classified according to the size of the word frequency, if the word frequency of the "good" is high, "good books" are classified, and if the word frequency of the "good books" is high, "good books" are classified.
Optionally, the text information may be subjected to word segmentation processing by using barker word segmentation. Before word segmentation is carried out on the text information, the text information is preprocessed to remove stop words, wherein the stop words comprise periods, commas, semicolons and the like, namely punctuation marks such as periods, commas and semicolons in the text information are filtered out, so that storage space is saved and word segmentation efficiency is improved. And storing the removed stop word mark sequence into a stop word temporary library. And periodically cleaning the stop word temporary library to save storage space and improve word segmentation efficiency.
Optionally, a word list is constructed, the word list comprises each word of the sentence text, and keywords are extracted from the word list based on a trained TF_IDF (Term Frequency-inverse document Frequency) matrix. Training is performed on the corpus in the exercise interaction script corresponding to the tf_idf matrix exercise scene, specifically, as shown in fig. 2, the training steps of the tf_idf matrix are as follows:
a1: and acquiring the corpus in the exercise interaction script corresponding to the designated exercise scene. Specifically, the corpus includes product introductions.
A2: and performing stuffiness word segmentation on the corpus according to the text, and removing stop words to obtain a training vocabulary.
A3: and generating a TF_IDF matrix based on the training vocabulary, wherein the horizontal axis is text, the vertical axis is keywords, and the key coefficients of the keywords are determined according to the word frequency of the keywords in the sentence text.
A4: and arranging the segmented words in the training vocabulary from high to low according to the word frequency, and selecting a specified number of segmented words according to the arrangement result to determine the segmented words as key words.
In the embodiment of the invention, after the exercise interaction script is executed, a plurality of pieces of second voice information acquired during the execution of the exercise interaction script are converted into sentence texts, and word segmentation processing is carried out on the sentence texts, so that each word segmentation forming the sentence texts is accurately acquired, and the intelligent equipment can score the exercises of students according to the word segmentation in the voice information, thereby improving the accuracy of the exercise scoring.
S107: and evaluating the exercise of the learner according to the preset scores of the feedback information corresponding to each word segmentation of the sentence text and the second voice information.
According to the embodiment of the invention, the training of the learner is objectively evaluated according to the preset scores of the feedback information corresponding to each word segment of the sentence text and the corresponding pieces of the second voice information, so that the training efficiency of the learner is improved.
As an embodiment of the present invention, as shown in fig. 3, the step S107 specifically includes:
B1: and sequencing each word segmentation obtained by word segmentation processing according to the sequence of the word segmentation in the sentence text, and constructing a word segmentation sequence.
B2: according to a preset keyword weight list, searching the word weights and the ordering weights of keywords in the word segmentation sequence.
B3: and evaluating the exercise of the learner according to the word weight and the ordering weight of the keywords in the word segmentation sequence and the preset scores of the feedback information corresponding to the second voice information. The word weights are used for identifying the importance degree of the keywords, and the ranking weights are used for identifying the influence degree of the order of the keywords in the word segmentation sequence.
Specifically, the number of pieces of the second voice information acquired during execution of the exercise interaction script and the number of pieces of feedback information corresponding to the second voice information are determined, and an exercise score of the learner is determined according to the following formula (1):
(1);
wherein, Representing the drill score,/>Representing a preset constant, for example 100, A t is a word weight matrix constructed by the t second voice information according to the word weight of the keywords in the word segmentation sequence, B t is a sorting weight matrix constructed by the t second voice information according to the sorting weight of the keywords in the word segmentation sequence, and I >For any one implementationTo/>A monotonically increasing function of the mapping. For example/>Any one of the following functions may be taken: /(I)、/>、/>E l represents the preset score of the first feedback information, T is more than or equal to 1 and less than or equal to T, T is the total number of the second voice information, L is more than or equal to 1 and less than or equal to L, and L is the total number of the feedback information obtained after the exercise script is executed.
Optionally, the preset constantThe value of (2) can be determined according to the exercise interaction script corresponding to the first voice information. Specifically, the closer the voice feature of the first voice information is to the voice feature of the corresponding voice information in the exercise interaction script, the preset constant/>The higher the value of (2).
In the embodiment of the invention, the training of the students is objectively scored according to the word weight and the ordering weight of the keywords in the word segmentation sequence, so that the accuracy of scoring can be improved.
Optionally, if the word segmentation processing obtains that the word segmentation includes stop words, that is, the word segmentation sequence includes stop words, a specific implementation flow of the step B3 includes:
B31: determining the number of pieces of second voice information acquired during execution of the exercise interaction script and the number of pieces of feedback information corresponding to the second voice information;
B32: acquiring word frequency of stop words in the word segmentation sequence;
b33: determining the stop word weight of the stop word in the word segmentation sequence according to the word frequency of the stop word and a preset stop word weight table;
B34: determining the drill score of the trainee according to the following formula (2):
(2);
wherein, Representing the drill score,/>Representing a preset constant, wherein A t is a word weight matrix constructed by the t second voice information according to the word weight of the keywords in the word segmentation sequence, B t is a sorting weight matrix constructed by the t second voice information according to the sorting weight of the keywords in the word segmentation sequence, C is a stop word weight matrix constructed according to the stop word weight of the stop words in the word segmentation sequence, and I/OFor any one implementation of slave/>To/>And E l represents a preset score of the first feedback information, T is more than or equal to 1 and less than or equal to T, T is the total number of the second voice information, L is more than or equal to 1 and less than or equal to L, and L is the total number of the feedback information.
In the embodiment of the invention, considering the influence of the stop words on the exercise, for example, the score of the exercise is influenced by too many stop words or too many stop words used by a learner in the exercise process, and proper stop is necessary, the exercise of the user is scored according to the word weight matrix, the sequencing weight matrix and the stop word weight matrix of the keywords by adding the stop words and the weights thereof, so that the score of the exercise of the learner is more objective and effective, and the efficiency of the exercise is further improved by the learner.
Alternatively, as an embodiment of the present invention, as shown in fig. 4, the specific steps of B3 include:
B31': and acquiring a preset standard sentence corresponding to the sentence text in the drilling interaction script.
B32': and calculating the sentence similarity between the sentence text and the preset standard sentence.
B33': and evaluating the training of the learner according to the sentence similarity, the word weight and the ordering weight of the keywords in the word segmentation sequence and the preset scores of the feedback information corresponding to the second voice information.
Specifically, the step B33' specifically includes: determining the number of pieces of second voice information acquired during execution of the exercise interaction script and the number of pieces of feedback information corresponding to the second voice information, and determining an exercise score of the student according to the following formula (3):
(3);
wherein, Representing the drill score,/>Representing a preset constant, wherein A t is a word weight matrix constructed by the t second voice information according to the word weight of the keywords in the word segmentation sequence, B t is a sorting weight matrix constructed by the t second voice information according to the sorting weight of the keywords in the word segmentation sequence, n is the number of voice information converted into sentence text, n is a positive integer, s_grogram i represents the sentence similarity between the sentence text converted by the i voice information and the corresponding preset standard sentence, and the meaning of the i voice information is expressed by the nSimilarity parameter representing sentence text converted from ith voice information,/>For any one implementation of slave/>To/>And E l represents a preset score of the first feedback information, T is more than or equal to 1 and less than or equal to T, T is the total number of the second voice information, L is more than or equal to 1 and less than or equal to L, and L is the total number of the feedback information.
In the embodiment of the invention, the exercise interaction script comprises preset standard sentences, namely preset standard sentences used for the dialogue between a learner and a client, the exercise of the learner is evaluated according to the sentence similarity and the preset scores of the keyword weights, the ordering weights and the feedback information corresponding to the plurality of pieces of second voice information in the word segmentation sequence by calculating the sentence similarity of the sentence text and the preset standard sentences, and the exercise of the learner is evaluated by considering the similarity of the sentence text and the preset standard sentences in the voice information of the learner and the keyword weights and the ordering weights of the keywords in the word segmentation sequence of the sentence text, so that the score is further more objective and accurate, the accuracy of the score is improved, and the exercise efficiency of the learner is effectively improved.
According to the embodiment of the invention, the first voice information input by a student is acquired through acquiring the exercise scene selected by the student, the first voice information input by the student is acquired in the exercise scene, the exercise interaction script corresponding to the first voice information is determined from a plurality of exercise interaction scripts corresponding to the exercise scene, then the second voice information of the student is acquired based on the exercise interaction script corresponding to the first voice information, namely, the exercise is started, the feedback information corresponding to the second voice information is played according to the exercise interaction script corresponding to the first voice information, the exercise interaction of the student is realized, the effect of the exercise simulation is enabled to be more vivid, the preset score of the feedback information is acquired, after the exercise interaction script is executed, the plurality of pieces of second voice information acquired during the exercise interaction script are converted into sentence texts, the sentence texts are subjected to word segmentation processing, the second voice information corresponding to the sentence texts is acquired, the exercise interaction script is accurately estimated according to the corresponding to the exercise interaction script of the students, and the corresponding to the exercise interaction script is better estimated by the students, and the corresponding to the students can be better estimated according to the exercise interaction score, and the result is better known by the students.
It should be understood that the sequence number of each step in the foregoing embodiment does not mean that the execution sequence of each process should be determined by the function and the internal logic, and should not limit the implementation process of the embodiment of the present invention.
Corresponding to the method for evaluating a scene exercise described in the above embodiments, fig. 5 shows a block diagram of a device for evaluating a scene exercise according to an embodiment of the present application, and for convenience of explanation, only a portion related to the embodiment of the present application is shown.
Referring to fig. 5, the evaluation apparatus for scene exercises includes: the exercise scene acquisition unit 51, the first voice information acquisition unit 52, the exercise interaction script determination unit 53, the second voice information acquisition unit 54, the feedback information acquisition unit 55, the word segmentation processing unit 56, the exercise evaluation unit 57, wherein:
A training scene acquisition unit 51 for acquiring a training scene selected by a learner;
a first voice information acquiring unit 52, configured to acquire first voice information input by the learner in the exercise scene;
A drilling interaction script determining unit 53, configured to determine a drilling interaction script corresponding to the first voice information from a plurality of drilling interaction scripts corresponding to the drilling scene;
A second voice information obtaining unit 54, configured to obtain second voice information of the learner based on an exercise interaction script corresponding to the first voice information;
The feedback information obtaining unit 55 is configured to play feedback information corresponding to the second voice information according to the exercise interaction script corresponding to the first voice information, and obtain a preset score of the feedback information;
A word segmentation processing unit 56, configured to convert, after the execution of the exercise interaction script, a plurality of pieces of the second voice information acquired during the execution of the exercise interaction script into sentence text, and perform word segmentation processing on the sentence text, so as to obtain each word segment that constitutes the sentence text;
And an exercise evaluation unit 57, configured to evaluate the exercise of the learner according to preset scores of feedback information corresponding to each word segment of the sentence text and the plurality of pieces of second speech information.
Optionally, the exercise evaluation unit 57 includes:
The word segmentation sequence construction module is used for sequentially sequencing each word segmentation obtained by word segmentation processing according to the sequence of the word segmentation in the sentence text to construct a word segmentation sequence;
The weight searching module is used for searching the word weights and the ordering weights of the keywords in the word segmentation sequence according to a preset keyword weight list;
And the first exercise scoring module is used for evaluating exercises of the students according to the word weights and the ordering weights of the keywords in the word segmentation sequence and preset scores of the feedback information corresponding to the second voice information.
Optionally, the first drilling scoring module specifically includes:
The voice information statistics module is used for determining the number of the second voice information acquired during the execution of the exercise interaction script and the number of feedback information corresponding to the second voice information;
the first scoring sub-module is used for determining the drill score of the student according to the following formula:
wherein, Representing the drill score,/>Representing a preset constant, wherein A t is a word weight matrix constructed by the t second voice information according to the word weight of the keywords in the word segmentation sequence, B t is a sorting weight matrix constructed by the t second voice information according to the sorting weight of the keywords in the word segmentation sequence, and the method comprises the steps ofFor any one implementation of slave/>To the point ofAnd E l represents a preset score of the first feedback information, T is more than or equal to 1 and less than or equal to T, T is the total number of the second voice information, L is more than or equal to 1 and less than or equal to L, and L is the total number of the feedback information.
Optionally, the word segmentation sequence includes stop words, and the exercise evaluation unit 57 includes:
The voice information statistics module is used for determining the number of the second voice information acquired during the execution of the exercise interaction script and the number of feedback information corresponding to the second voice information;
The word frequency acquisition module is used for acquiring word frequencies of stop words in the word segmentation sequence;
the stop word weight determining module is used for determining the stop word weight of the stop word in the word segmentation sequence according to the word frequency of the stop word and a preset stop word weight table;
And the second drilling scoring module is used for determining drilling scoring of the trainee according to the following formula:
wherein, Representing the drill score,/>Representing a preset constant, wherein A t is a word weight matrix constructed by the t second voice information according to the word weight of the keywords in the word segmentation sequence, B t is a sorting weight matrix constructed by the t second voice information according to the sorting weight of the keywords in the word segmentation sequence, C is a stop word weight matrix constructed according to the stop word weight of the stop words in the word segmentation sequence, and I/OFor any one implementation of slave/>To/>And E l represents a preset score of the first feedback information, T is more than or equal to 1 and less than or equal to T, T is the total number of the second voice information, L is more than or equal to 1 and less than or equal to L, and L is the total number of the feedback information.
Optionally, the exercise evaluation unit 57 includes:
the standard sentence acquisition module is used for acquiring a preset standard sentence corresponding to the sentence text in the drilling interaction script;
the sentence similarity calculation module is used for calculating the sentence similarity between the sentence text and the preset standard sentence;
and the third exercise scoring module is used for evaluating the exercises of the students according to the sentence similarity, the word weight and the ordering weight of the keywords in the word segmentation sequence and the preset scores of the feedback information respectively corresponding to the plurality of pieces of second voice information.
Optionally, the third drilling scoring module specifically includes:
The voice information statistics module is used for determining the number of the second voice information acquired during the execution of the exercise interaction script and the number of feedback information corresponding to the second voice information;
and the second scoring sub-module is used for determining the drill score of the student according to the following formula:
wherein, Representing the drill score,/>Representing a preset constant, wherein A t is a word weight matrix constructed by the t second voice information according to the word weight of the keywords in the word segmentation sequence, B t is a sorting weight matrix constructed by the t second voice information according to the sorting weight of the keywords in the word segmentation sequence, n is the number of voice information converted into sentence text, n is a positive integer, s_grogram i represents the sentence similarity between the sentence text converted by the i voice information and the corresponding preset standard sentence, and the meaning of the i voice information is expressed by the nSimilarity parameter representing sentence text converted from ith voice information,/>For any one implementation of slave/>To/>And E l represents a preset score of the first feedback information, T is more than or equal to 1 and less than or equal to T, T is the total number of the second voice information, L is more than or equal to 1 and less than or equal to L, and L is the total number of the feedback information.
According to the embodiment of the invention, the first voice information input by a student is acquired through acquiring the exercise scene selected by the student, the first voice information input by the student is acquired in the exercise scene, the exercise interaction script corresponding to the first voice information is determined from a plurality of exercise interaction scripts corresponding to the exercise scene, then the second voice information of the student is acquired based on the exercise interaction script corresponding to the first voice information, namely, the exercise is started, the feedback information corresponding to the second voice information is played according to the exercise interaction script corresponding to the first voice information, the exercise interaction of the student is realized, the effect of the exercise simulation is enabled to be more vivid, the preset score of the feedback information is acquired, after the exercise interaction script is executed, the plurality of pieces of second voice information acquired during the exercise interaction script are converted into sentence texts, the sentence texts are subjected to word segmentation processing, the second voice information corresponding to the sentence texts is acquired, the exercise interaction script is accurately estimated according to the corresponding to the exercise interaction script of the students, and the corresponding to the exercise interaction script is better estimated by the students, and the corresponding to the students can be better estimated according to the exercise interaction score, and the result is better known by the students.
Fig. 6 is a schematic diagram of an intelligent device according to an embodiment of the present invention. As shown in fig. 6, the smart device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62 stored in said memory 61 and executable on said processor 60, such as an evaluation program for a scene exercise. The processor 60, when executing the computer program 62, implements the steps of the above-described embodiments of the evaluation method for the respective scene exercises, such as steps 101 to 107 shown in fig. 1. Or the processor 60, when executing the computer program 62, performs the functions of the modules/units of the device embodiments described above, such as the functions of the units 51 to 57 shown in fig. 5.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions describing the execution of the computer program 62 in the smart device 6.
The smart device 6 may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud smart device, etc. The smart device may include, but is not limited to, a processor 60, a memory 61. It will be appreciated by those skilled in the art that fig. 6 is merely an example of a smart device 6 and is not meant to be limiting as smart device 6, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the smart device may also include input-output devices, network access devices, buses, etc.
The Processor 60 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (DIGITAL SIGNAL Processor, DSP), application SPECIFIC INTEGRATED Circuit (ASIC), off-the-shelf Programmable gate array (Field-Programmable GATE ARRAY, FPGA) or other Programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the smart device 6, such as a hard disk or a memory of the smart device 6. The memory 61 may also be an external storage device of the smart device 6, such as a plug-in hard disk, a smart memory card (SMART MEDIA CARD, SMC), a Secure Digital (SD) card, a flash memory card (FLASH CARD) or the like, which are provided on the smart device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the smart device 6. The memory 61 is used for storing the computer program as well as other programs and data required by the smart device. The memory 61 may also be used for temporarily storing data that has been output or is to be output.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the computer readable medium contains content that can be appropriately scaled according to the requirements of jurisdictions in which such content is subject to legislation and patent practice, such as in certain jurisdictions in which such content is subject to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (4)

1. A method of evaluating a scene maneuver, comprising:
Acquiring a drilling scene selected by a student;
Acquiring first voice information input by the learner in the drilling scene;
determining a drilling interaction script corresponding to the first voice information from a plurality of drilling interaction scripts corresponding to the drilling scene;
acquiring second voice information of the learner based on the exercise interaction script corresponding to the first voice information;
playing feedback information corresponding to the second voice information according to the exercise interaction script corresponding to the first voice information, and obtaining a preset score of the feedback information;
After the exercise interaction script is executed, converting a plurality of pieces of second voice information acquired during execution of the exercise interaction script into sentence texts, and performing word segmentation processing on the sentence texts to obtain each word segmentation forming the sentence texts;
sequentially sequencing each word segmentation obtained by word segmentation processing according to the sequence of the word segmentation in the sentence text, and constructing a word segmentation sequence;
according to a preset keyword weight list, searching the word weights and the ordering weights of keywords in the word segmentation sequence;
According to the word weight and the ordering weight of the keywords in the word segmentation sequence and the preset scores of the feedback information corresponding to the second voice information, the training of the learner is evaluated, and the method comprises the following steps: determining the number of pieces of second voice information acquired during execution of the exercise interaction script and the number of pieces of feedback information corresponding to the second voice information;
Determining the drill score of the trainee according to the following formula:
wherein, Representing the drill score,/>Representing a preset constant, wherein A t is a word weight matrix constructed by the t second voice information according to the word weight of the keywords in the word segmentation sequence, B t is a sorting weight matrix constructed by the t second voice information according to the sorting weight of the keywords in the word segmentation sequence, and the method comprises the steps ofFor any one implementation of slave/>To/>E l represents a preset score of the first feedback information, T is more than or equal to 1 and less than or equal to T, T is the total number of the second voice information, L is more than or equal to 1 and less than or equal to L, and L is the total number of the feedback information;
Or the word segmentation sequence comprises stop words, and the word frequency of the stop words in the word segmentation sequence is obtained; determining the stop word weight of the stop word in the word segmentation sequence according to the word frequency of the stop word and a preset stop word weight table; determining the drill score of the trainee according to the following formula:
wherein, Representing the drill score,/>Representing a preset constant, wherein A t is a word weight matrix constructed by the t second voice information according to the word weight of the keywords in the word segmentation sequence, B t is a sorting weight matrix constructed by the t second voice information according to the sorting weight of the keywords in the word segmentation sequence, C is a stop word weight matrix constructed according to the stop word weight of the stop words in the word segmentation sequence, and I/OFor any one implementation of slave/>To/>E l represents a preset score of the first feedback information, T is more than or equal to 1 and less than or equal to T, T is the total number of the second voice information, L is more than or equal to 1 and less than or equal to L, and L is the total number of the feedback information;
or acquiring a preset standard sentence corresponding to the sentence text in the drilling interaction script; calculating the sentence similarity between the sentence text and the preset standard sentence; determining the drill score of the trainee according to the following formula:
wherein, Representing the drill score,/>Representing a preset constant, wherein A t is a word weight matrix constructed by the t second voice information according to the word weight of the keywords in the word segmentation sequence, B t is a sorting weight matrix constructed by the t second voice information according to the sorting weight of the keywords in the word segmentation sequence, n is the number of voice information converted into sentence text, n is a positive integer, s_grogram i represents the sentence similarity between the sentence text converted by the i voice information and the corresponding preset standard sentence, and the meaning of the i voice information is expressed by the nSimilarity parameter representing sentence text converted from ith voice information,/>For any one implementation of slave/>To/>And E l represents a preset score of the first feedback information, T is more than or equal to 1 and less than or equal to T, T is the total number of the second voice information, L is more than or equal to 1 and less than or equal to L, and L is the total number of the feedback information.
2. An evaluation device for a scene exercise, characterized in that the evaluation device for a scene exercise comprises:
the training scene acquisition unit is used for acquiring training scenes selected by students;
The first voice information acquisition unit is used for acquiring first voice information input by the learner in the drilling scene;
a drilling interaction script determining unit, configured to determine a drilling interaction script corresponding to the first voice information from a plurality of drilling interaction scripts corresponding to the drilling scene;
the second voice information acquisition unit is used for acquiring second voice information of the student based on the exercise interaction script corresponding to the first voice information;
The feedback information acquisition unit is used for playing the feedback information corresponding to the second voice information according to the exercise interaction script corresponding to the first voice information and acquiring the preset score of the feedback information;
the word segmentation processing unit is used for converting a plurality of pieces of second voice information acquired during execution of the exercise interaction script into sentence texts after the execution of the exercise interaction script is completed, and performing word segmentation processing on the sentence texts to obtain each word segmentation forming the sentence texts;
The exercise evaluation unit includes:
The word segmentation sequence construction module is used for sequentially sequencing each word segmentation obtained by word segmentation processing according to the sequence of the word segmentation in the sentence text to construct a word segmentation sequence;
The weight searching module is used for searching the word weights and the ordering weights of the keywords in the word segmentation sequence according to a preset keyword weight list;
The first exercise scoring module is used for evaluating exercises of the students according to the word weights and the ordering weights of the keywords in the word segmentation sequence and preset scores of feedback information corresponding to the second voice information;
the first exercise scoring module includes: the voice information statistics module is used for determining the number of the second voice information acquired during the execution of the exercise interaction script and the number of feedback information corresponding to the second voice information;
the first scoring sub-module is used for determining the drill score of the student according to the following formula:
wherein, Representing the drill score,/>Representing a preset constant, wherein A t is a word weight matrix constructed by the t second voice information according to the word weight of the keywords in the word segmentation sequence, B t is a sorting weight matrix constructed by the t second voice information according to the sorting weight of the keywords in the word segmentation sequence, and the method comprises the steps ofFor any one implementation of slave/>To/>E l represents a preset score of the first feedback information, T is more than or equal to 1 and less than or equal to T, T is the total number of the second voice information, L is more than or equal to 1 and less than or equal to L, and L is the total number of the feedback information;
or the word segmentation sequence comprises stop words, and a word frequency acquisition module is used for acquiring word frequencies of the stop words in the word segmentation sequence;
the stop word weight determining module is used for determining the stop word weight of the stop word in the word segmentation sequence according to the word frequency of the stop word and a preset stop word weight table;
And the second drilling scoring module is used for determining drilling scoring of the trainee according to the following formula:
wherein, Representing the drill score,/>Representing a preset constant, wherein A t is a word weight matrix constructed by the t second voice information according to the word weight of the keywords in the word segmentation sequence, B t is a sorting weight matrix constructed by the t second voice information according to the sorting weight of the keywords in the word segmentation sequence, C is a stop word weight matrix constructed according to the stop word weight of the stop words in the word segmentation sequence, and I/OFor any one implementation of slave/>To/>E l represents a preset score of the first feedback information, T is more than or equal to 1 and less than or equal to T, T is the total number of the second voice information, L is more than or equal to 1 and less than or equal to L, and L is the total number of the feedback information;
Or a standard sentence acquisition module, configured to acquire a preset standard sentence corresponding to the sentence text in the exercise interaction script;
the sentence similarity calculation module is used for calculating the sentence similarity between the sentence text and the preset standard sentence;
and the second scoring sub-module is used for determining the drill score of the student according to the following formula:
wherein, Representing the drill score,/>Representing a preset constant, wherein A t is a word weight matrix constructed by the t second voice information according to the word weight of the keywords in the word segmentation sequence, B t is a sorting weight matrix constructed by the t second voice information according to the sorting weight of the keywords in the word segmentation sequence, n is the number of voice information converted into sentence text, n is a positive integer, s_grogram i represents the sentence similarity between the sentence text converted by the i voice information and the corresponding preset standard sentence, and the meaning of the i voice information is expressed by the nSimilarity parameter representing sentence text converted from ith voice information,/>For any one implementation of slave/>To/>And E l represents a preset score of the first feedback information, T is more than or equal to 1 and less than or equal to T, T is the total number of the second voice information, L is more than or equal to 1 and less than or equal to L, and L is the total number of the feedback information.
3. A computer-readable storage medium storing a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the evaluation method of a scene exercise according to claim 1.
4. A smart device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the evaluation method of a scene maneuver as claimed in claim 1 when the computer program is executed by the processor.
CN201910752171.2A 2019-08-15 2019-08-15 Evaluation method and device for scene exercise, storage medium and intelligent device Active CN110619588B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910752171.2A CN110619588B (en) 2019-08-15 2019-08-15 Evaluation method and device for scene exercise, storage medium and intelligent device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910752171.2A CN110619588B (en) 2019-08-15 2019-08-15 Evaluation method and device for scene exercise, storage medium and intelligent device

Publications (2)

Publication Number Publication Date
CN110619588A CN110619588A (en) 2019-12-27
CN110619588B true CN110619588B (en) 2024-04-26

Family

ID=68921794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910752171.2A Active CN110619588B (en) 2019-08-15 2019-08-15 Evaluation method and device for scene exercise, storage medium and intelligent device

Country Status (1)

Country Link
CN (1) CN110619588B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102034475A (en) * 2010-12-08 2011-04-27 中国科学院自动化研究所 Method for interactively scoring open short conversation by using computer
CN109448458A (en) * 2018-11-29 2019-03-08 郑昕匀 A kind of Oral English Training device, data processing method and storage medium
CN109800663A (en) * 2018-12-28 2019-05-24 华中科技大学鄂州工业技术研究院 Teachers ' teaching appraisal procedure and equipment based on voice and video feature

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9218339B2 (en) * 2011-11-29 2015-12-22 Educational Testing Service Computer-implemented systems and methods for content scoring of spoken responses
US9449522B2 (en) * 2012-11-16 2016-09-20 Educational Testing Service Systems and methods for evaluating difficulty of spoken text

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102034475A (en) * 2010-12-08 2011-04-27 中国科学院自动化研究所 Method for interactively scoring open short conversation by using computer
CN109448458A (en) * 2018-11-29 2019-03-08 郑昕匀 A kind of Oral English Training device, data processing method and storage medium
CN109800663A (en) * 2018-12-28 2019-05-24 华中科技大学鄂州工业技术研究院 Teachers ' teaching appraisal procedure and equipment based on voice and video feature

Also Published As

Publication number Publication date
CN110619588A (en) 2019-12-27

Similar Documents

Publication Publication Date Title
CN107329949B (en) Semantic matching method and system
CN110427463B (en) Search statement response method and device, server and storage medium
US10157619B2 (en) Method and device for searching according to speech based on artificial intelligence
CN103425635B (en) Method and apparatus are recommended in a kind of answer
WO2022095380A1 (en) Ai-based virtual interaction model generation method and apparatus, computer device and storage medium
CN109815491B (en) Answer scoring method, device, computer equipment and storage medium
CN112487139A (en) Text-based automatic question setting method and device and computer equipment
CN111242816A (en) Multimedia teaching plan making method and system based on artificial intelligence
CN111192170B (en) Question pushing method, device, equipment and computer readable storage medium
CN116049557A (en) Educational resource recommendation method based on multi-mode pre-training model
CN111062209A (en) Natural language processing model training method and natural language processing model
Arai et al. Predicting quality of answer in collaborative Q/A community
CN113761887A (en) Matching method and device based on text processing, computer equipment and storage medium
CN110708619B (en) Word vector training method and device for intelligent equipment
CN110619588B (en) Evaluation method and device for scene exercise, storage medium and intelligent device
CN113742461A (en) Dialogue system test method and device and statement rewriting method
CN116561271A (en) Question and answer processing method and device
CN114822557A (en) Method, device, equipment and storage medium for distinguishing different sounds in classroom
CN114297372A (en) Personalized note generation method and system
CN114186048A (en) Question-answer replying method and device based on artificial intelligence, computer equipment and medium
CN114333787A (en) Scoring method, device, equipment, storage medium and program product for spoken language examination
CN113591004A (en) Game tag generation method and device, storage medium and electronic equipment
CN114647717A (en) Intelligent question and answer method and device
CN111488448A (en) Method and device for generating machine reading marking data
CN115878849B (en) Video tag association method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant