CN113255843A - Speech manuscript evaluation method and device - Google Patents


Info

Publication number
CN113255843A
CN113255843A (application CN202110759496.0A)
Authority
CN
China
Prior art keywords
lecture
preset
data
ranking information
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110759496.0A
Other languages
Chinese (zh)
Other versions
CN113255843B (en)
Inventor
张�林
王晔
李东朔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youmu Technology Co ltd
Original Assignee
Beijing Youmu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youmu Technology Co ltd filed Critical Beijing Youmu Technology Co ltd
Priority to CN202110759496.0A priority Critical patent/CN113255843B/en
Publication of CN113255843A publication Critical patent/CN113255843A/en
Application granted granted Critical
Publication of CN113255843B publication Critical patent/CN113255843B/en
Priority to PCT/CN2021/133041 priority patent/WO2023279631A1/en
Priority to JP2023577794A priority patent/JP2024527185A/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/08 — Learning methods
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 — Handling natural language data
    • G06F40/20 — Natural language analysis
    • G06F40/205 — Parsing
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 — Handling natural language data
    • G06F40/30 — Semantic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a speech manuscript evaluation method and device, relating to the technical field of information processing. Its main aim is to solve the problems that conventional evaluation methods require large amounts of sample data, cover few sample types, and produce evaluation results with poor effectiveness and fairness. The technical scheme comprises the following steps: acquiring a plurality of lecture manuscripts; dividing each lecture manuscript into a plurality of sections; identifying all the sections and a plurality of different preset questions with a neural network model, wherein each preset question together with all the sections is in turn used as input data, the neural network model extracts feature data from the input data, and ranking information of all the sections for the preset question is output according to the feature data, the ranking information representing the degree of recognition each section receives as an answer to the preset question; and determining the evaluation result of each lecture manuscript according to the ranking information of all the sections for each preset question.

Description

Speech manuscript evaluation method and device
Technical Field
The embodiment of the invention relates to the technical field of information processing, in particular to a speech manuscript evaluation method and device.
Background
A speech is a form of spoken communication delivered in a particular setting, and professional speech training requires repeated practice. During training, timely scoring and evaluation that identify weaknesses can accelerate progress. The scoring of a speech covers aspects such as gestures, speaking rate, and speech content.
The most common method for scoring speech content is to build a regression model: speech content in different score bands is collected to build a massive data set; features are designed manually or extracted automatically by a machine; the contribution of each feature to the score is computed; effective features are selected; and the relationship between features and scores is established. Training the regression model means extracting features from the lecture manuscript data set, establishing the relationship between the features and the scores, and storing it in the form of a weight matrix. However, this method depends on a large amount of data, and the samples must cover every score band, topic, and so on; otherwise the scoring results are effectively randomly distributed, which undermines the effectiveness and fairness of the scoring as a whole. In practice, at the start of speech training there are only a few excellent samples, and low- and medium-score samples are extremely scarce. Other speech data sets among open data resources have the same problem: only the best speech cases survive, so learning cannot be performed directly through transfer learning.
Disclosure of Invention
In view of this, embodiments of the invention provide a speech manuscript evaluation method and device, mainly aiming to solve the problems that conventional evaluation methods require large amounts of sample data, cover few sample types, and produce evaluation results with poor effectiveness and fairness.
In order to solve the above problems, embodiments of the present invention mainly provide the following technical solutions:
in a first aspect, an embodiment of the present invention provides a lecture manuscript evaluation method, where the method includes:
acquiring a plurality of lecture manuscripts;
respectively dividing each lecture manuscript into a plurality of sections;
and identifying all the sections and a plurality of different preset questions with a neural network model, wherein each preset question together with all the sections is in turn used as input data, the neural network model extracts feature data from the input data, and ranking information of all the sections for the preset question is output according to the feature data, the ranking information representing the degree of recognition each section receives as an answer to the preset question;
and determining the evaluation result of each lecture manuscript according to the ranking information of all the sections for each preset question.
Optionally, before identifying all the sections and the plurality of different preset questions with the neural network model, the method further includes:
acquiring a plurality of training data, each training data comprising a plurality of sample answers, a preset question, and ranking information of each sample answer for the preset question;
and training the neural network model with the plurality of training data, wherein the neural network outputs ranking information from the plurality of sample answers and the preset question, and the model parameters are optimized according to the difference between the output ranking information and the ranking information in the training data.
Optionally, acquiring the plurality of training data specifically includes:
crawling, in a plurality of specified web pages, contexts relevant to the preset question and the corresponding answer contents;
and obtaining the ranking information from the order in which the answer contents appear in the web pages.
Optionally, determining the evaluation result of each lecture manuscript according to the ranking information of all the sections for each preset question specifically includes:
obtaining, from the ranking information of all the sections for each preset question, the highest ranking achieved for each preset question by the sections belonging to the same lecture manuscript;
and obtaining the evaluation result for that lecture manuscript according to each piece of highest ranking information of the same lecture manuscript.
Optionally, each piece of ranking information corresponds to a preset score, and the evaluation result is a score obtained from the preset scores.
Optionally, the lecture manuscript is text data obtained by performing speech recognition on a recording of the lecture, and the pause durations in the speech are recorded during speech recognition; in the step of dividing each lecture manuscript into a plurality of sections, the lecture manuscript is divided according to both its semantics and the lengths of the pauses in the lecture speech.
Optionally, the ranking information includes ranking information that is empty and/or ranking information that is tied.
Optionally, while extracting feature data from the input data, the neural network model processes the text data from the sections and the text data from the preset question with an attention mechanism, and outputs the ranking information based on the processed feature data.
In a second aspect, an embodiment of the present invention provides a speech manuscript evaluation device, where the device includes: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the above lecture manuscript evaluation method.
In a third aspect, an embodiment of the present invention provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the above lecture manuscript evaluation method.
With the lecture manuscript evaluation method and device provided by the invention, the task of evaluating a whole lecture manuscript is converted into evaluating the degree of recognition of answers to questions. There is no need to provide a large number of lecture manuscripts of varying quality as learning samples for the neural network; it suffices to preset questions related to the lecture manuscripts and prepare answers with different degrees of recognition in order to train the neural network model and then complete the evaluation of a plurality of lecture manuscripts. This solves the prior-art problem that lecture manuscripts are difficult to evaluate for lack of samples, and the scheme achieves high accuracy.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the embodiments of the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart illustrating a lecture assessment method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating another lecture manuscript evaluation method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the evaluation results obtained for paragraphs in an embodiment of the present invention;
fig. 4 is a schematic diagram showing an operation process of the neural network model in the embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The invention provides a speech manuscript evaluation method, which can be executed by electronic equipment such as a computer or a server. As shown in fig. 1, the method includes the following steps:
101. A plurality of lecture manuscripts are acquired.
102. Each lecture manuscript is divided into a plurality of sections.
In the embodiment of the present invention, a section is a passage that expresses a particular point; it may be one natural paragraph or several. For example, several natural paragraphs may together form one section that explains the advantages of a product. In the following, C11 … C1n denote the sections of the first lecture manuscript, C21 … C2n denote the sections of the second lecture manuscript, and Cn1 … Cnn denote the n sections of the nth lecture manuscript. Methods for segmenting a lecture manuscript include, but are not limited to, the following: dividing the whole document into several sections according to the semantics of the text content using existing semantic recognition technology; or, when the manuscript comes from a speech recognition result, forming sections by combining pauses and semantics. The specific method is not limited.
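The pause-based part of this segmentation can be illustrated with a minimal sketch. The shape of the speech recognition output (sentence text paired with the pause that follows it) and the 1.5-second threshold are assumptions for illustration, not part of the disclosure:

```python
def split_into_sections(sentences, pause_threshold=1.5):
    """Group recognised sentences into sections, starting a new section
    whenever the pause after a sentence exceeds the threshold (seconds)."""
    sections, current = [], []
    for text, pause_after in sentences:
        current.append(text)
        if pause_after >= pause_threshold:
            sections.append(" ".join(current))
            current = []
    if current:
        sections.append(" ".join(current))
    return sections

# A long pause after the second sentence splits the recording into two sections.
asr_output = [
    ("Our product reduces cost.", 0.4),
    ("It also saves time.", 2.0),
    ("Now, about the market.", 0.3),
    ("Demand is growing fast.", 0.0),
]
sections = split_into_sections(asr_output)  # -> two sections
```

A production system would combine this pause cue with semantic segmentation of the text, as described above; the sketch covers only the pause cue.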
103. All the sections and a plurality of different preset questions are identified with a neural network model: each preset question together with all the sections is in turn used as input data, the neural network model extracts feature data from the input data, and ranking information of all the sections for the preset question is output according to the feature data to represent the degree of recognition each section receives as an answer to the preset question.
The degree of recognition can also be interpreted as popularity, and it is learned by the neural network from the training data. For example, when training the neural network, answers and questions can be prepared manually and the ranking of the answers to the corresponding question given by hand, that is, the popularity/recognition of each answer is assigned manually; alternatively, data can be migrated from other question-answering databases as sample data for training the neural network. There are many ways to train the neural network and to construct and acquire the sample data, which are described in the following embodiments. The neural network in this scheme is therefore not used to identify shallow semantic associations between answers and questions; rather, it ranks the answers by simulating the learned judgment of the annotators. An answer ranked higher has higher recognition/popularity, meaning people are considered to prefer that answer. From a purely semantic point of view, however, a higher-ranked answer is not necessarily more relevant to the preset question than a lower-ranked one.
Fig. 4 is a schematic diagram of the working process of the neural network: the scheme feeds the divided sections into the trained neural network and outputs ranking information. Specifically, a preset question and the divided sections serve as the input of the network. For example, question 1 + C11 … Cnn is used as input, and the ranking information of C11 … Cnn for preset question 1 is output; likewise, question 2 + C11 … Cnn is used as input, and the ranking information of C11 … Cnn for preset question 2 is output. The preset questions are set according to the content of the lecture manuscripts; specifically, questions related to the theme of the lecture manuscripts can be set as the preset questions. A higher position in the ranking indicates higher recognition/popularity as an answer to the preset question.
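The per-question loop just described can be sketched as follows. Here `model_score` is a placeholder assumption standing in for the trained neural network; the toy word-overlap scorer exists only to make the sketch runnable:

```python
def rank_sections(model_score, question, sections):
    """Rank all sections (across all manuscripts) against one preset question.
    Returns a dict: section index -> 1-based rank (1 = most recognised)."""
    scored = sorted(((model_score(question, s), i) for i, s in enumerate(sections)),
                    reverse=True)
    return {idx: rank + 1 for rank, (_, idx) in enumerate(scored)}

# Toy stand-in for the neural network: count shared words.
def overlap_score(question, section):
    return len(set(question.split()) & set(section.split()))

sections = ["basic facts about the product", "our company history", "more facts"]
ranking = rank_sections(overlap_score, "which basic facts", sections)
```

In the actual scheme, the same loop would be repeated once per preset question, producing one ranking per question as in fig. 4.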
104. The evaluation result of each lecture manuscript is determined according to the ranking information of all the sections for each preset question.
In one embodiment, the evaluation result is a score. Before scores are assigned according to the rankings, the correspondence between rank and score must be set, for example 10 points for rank one, 8 points for rank two, and so on. Suppose there are two preset questions and the sections C11 … C1n have rankings for both questions; the highest ranking is taken for each question. Say C11 ranks second for question 1 (the highest among these sections), giving a score of 8, and C14 ranks third for question 2 (the highest), giving a score of 6. The total score of the lecture manuscript over the two questions can then be computed by methods including, but not limited to, direct addition or a weighted average.
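Assuming the example correspondence above (rank one → 10 points, rank two → 8 points, decreasing by two per rank), this scoring step amounts to:

```python
def rank_to_score(rank, top_score=10, step=2):
    """Map a 1-based rank to a preset score; ranks beyond the table get 0."""
    return max(top_score - step * (rank - 1), 0)

def manuscript_score(best_ranks):
    """Total a manuscript's score by direct addition, given, for each preset
    question, the best (smallest) rank any of its sections achieved."""
    return sum(rank_to_score(r) for r in best_ranks)

# The example from the text: rank 2 for question 1 and rank 3 for question 2.
total = manuscript_score([2, 3])  # 8 + 6 = 14
```

The rank-to-score table is set according to actual requirements; the linear decrease here is only one possible choice.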
In other embodiments, the evaluation result may also be a classification result; for example, categories such as "excellent", "good", "medium", and "poor" are preset, and the ranking information output by the neural network is mapped to the category to which each lecture manuscript belongs.
With the lecture manuscript evaluation method and device provided by the invention, the task of evaluating a whole lecture manuscript can be converted into evaluating the degree of recognition of answers to questions. There is no need to provide a large number of lecture manuscripts of varying quality as learning samples for the neural network; it suffices to preset questions related to the lecture manuscripts and prepare answers with different degrees of recognition in order to train the neural network model and then complete the evaluation of a plurality of lecture manuscripts. This solves the prior-art problem that lecture manuscripts are difficult to evaluate for lack of samples, and the scheme achieves high accuracy.
An embodiment of the present invention further provides a lecture manuscript evaluation method. As shown in fig. 2, the method includes:
201. A plurality of training data are acquired, each training data comprising a plurality of sample answers, a preset question, and ranking information of each sample answer for the preset question.
In the embodiment of the present invention, the neural network model must first be trained. A set of training data comprises a preset question and a plurality of corresponding candidate answers; for example, the preset question is "which basic facts are described below", there are 40 corresponding candidate answers, and the label is a ranking of the 40 answers given according to human subjective judgment. The higher an answer is ranked, the higher its quality, which can be interpreted as higher recognition and popularity: sample answers ranked nearer the top are better liked.
In the embodiment of the present invention, the plurality of training data can be obtained by crawling, in a plurality of specified web pages, contexts relevant to the preset question and the corresponding answer contents; note that duplicate answer contents are merged before the sample answers are formed. The ranking information is then obtained from the order in which the answer contents appear in the web pages. For example, for the preset questions of a lecture manuscript on a certain theme, matching questions can be found on the internet (such as in a question-answering system) together with their answers, where one answer has been selected by the asker as the best answer and the other answers are ranked after it; the answers can also be ranked by the interaction between askers and answerers, such as popularity, and this ranking can be used directly as the label of the training data.
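A hedged sketch of turning crawled question-and-answer data into one training example; the dict fields (`text`, `is_best`, `heat`) are assumed names for the crawled answer content, the asker-selected best answer, and the interaction heat:

```python
def build_training_example(question, crawled_answers):
    """Merge duplicate answers, then order them: the asker-selected best
    answer first, the rest by descending interaction heat. The resulting
    order serves directly as the ranking label."""
    seen, answers = set(), []
    for a in crawled_answers:
        if a["text"] not in seen:          # repeated answer contents are merged
            seen.add(a["text"])
            answers.append(a)
    answers.sort(key=lambda a: (not a["is_best"], -a["heat"]))
    return {"question": question,
            "answers": [a["text"] for a in answers],
            "ranking": {a["text"]: rank + 1 for rank, a in enumerate(answers)}}
```

Whatever ordering signal the source site exposes (best-answer flag, votes, interaction heat) would take the place of these assumed fields.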
It should be noted that the number of ranks need not equal the number of candidate answers; for example, there may be 40 candidate answers but only 10 ranks. Ties and empty ranks may occur: for example, the first answer is ranked 0, the second answer is also ranked 0, the third answer is ranked 2 or 3, and so on. The ranking information therefore includes ranking information that is empty and/or tied.
The preset questions are set according to the content of the lecture manuscripts, each evaluation corresponding to lecture manuscripts on the same theme. When presetting the questions, several can be set according to the theme of the lecture manuscripts, for example: "which basic facts are described below", "what are the advantages over other products", "what benefits can the user obtain", "what is the owner doing", "in what way is it advanced", and so on. Following the common practice of answer ranking in question-answering systems, k candidate answers are selected and put together with the lecture manuscripts. A top-k procedure first finds the top-n (for example, n = 10) relevant documents for a given question using the tf-idf or BM25 algorithm. Next, the n documents are split into paragraphs, giving a candidate answer pool far larger than n, from which the top-k candidate answers (for example, k = 40) are selected. It should be noted that the numbers of relevant documents and candidate answers above are only examples and are not intended as limits.
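The top-n retrieval step can be sketched with a minimal BM25 scorer; this is a simplified stand-in (standard BM25 with common default k1 and b), not necessarily the exact variant contemplated by the disclosure:

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each tokenised document against the query with plain BM25."""
    n_docs = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / n_docs
    df = Counter(t for d in docs_tokens for t in set(d))  # document frequency
    scores = []
    for d in docs_tokens:
        tf, s = Counter(d), 0.0
        for t in query_tokens:
            if t in tf:
                idf = math.log(1 + (n_docs - df[t] + 0.5) / (df[t] + 0.5))
                s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [["basic", "facts", "about", "product"],
        ["company", "history"],
        ["facts", "and", "figures"]]
scores = bm25_scores(["basic", "facts"], docs)
top_n = sorted(range(len(docs)), key=lambda i: -scores[i])[:2]  # indices of top-2 docs
```

The retrieved top-n documents would then be split into paragraphs and re-ranked to pick the top-k candidate answers, as described above.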
202. The neural network model is trained with the plurality of training data: the neural network outputs ranking information from the plurality of sample answers and the preset question, and the model parameters are optimized according to the difference between the output ranking information and the ranking information in the training data.
In the embodiment of the invention, a two-layer feedforward neural network is selected; the input is a preset question and a plurality of candidate answers, and the output is the labels of the candidate answers. The network is trained with the plurality of training data, and the loss is determined from the difference between the ordering output by the network and the labels, so as to optimize the network parameters.
The network is expressed as f(xi) = ReLU(xi·Aᵀ + b1)·Bᵀ + b2,
where xi denotes the feature vector after attention, A ∈ R^(m×d) and B ∈ R^(1×m) are the optimized weight matrix parameters, and b1 ∈ R^m and b2 ∈ R are linear bias terms.
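A NumPy sketch of this two-layer feedforward scorer; the dimensions d and m and the random parameters are illustrative assumptions:

```python
import numpy as np

def feedforward_score(x, A, b1, B, b2):
    """f(x) = ReLU(x @ A.T + b1) @ B.T + b2, returning a scalar score.
    x: (d,) attention-processed feature; A: (m, d); b1: (m,); B: (1, m); b2: scalar."""
    hidden = np.maximum(x @ A.T + b1, 0.0)  # ReLU layer
    return (hidden @ B.T).item() + b2

rng = np.random.default_rng(0)
d, m = 8, 4                                 # illustrative sizes
x = rng.normal(size=d)
A, b1 = rng.normal(size=(m, d)), rng.normal(size=m)
B, b2 = rng.normal(size=(1, m)), 0.1
score = feedforward_score(x, A, b1, B, b2)
```

During training, this scalar score per candidate answer would feed a ranking loss comparing the induced ordering against the label ordering.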
203. A plurality of lecture manuscripts are acquired.
204. Each lecture manuscript is divided into a plurality of sections.
In a specific implementation, the lecture manuscript is text data obtained by performing speech recognition on a recording of the lecture, and the pause durations are recorded during speech recognition; in the step of dividing each lecture manuscript into a plurality of sections, the lecture manuscript is divided according to both its semantics and the lengths of the pauses in the recording.
205. All the sections and a plurality of different preset questions are identified with a neural network model: each preset question together with all the sections is in turn used as input data, the neural network model extracts feature data from the input data, and ranking information of all the sections for the preset question is output according to the feature data, the ranking information representing the recognition/popularity each section receives as an answer to the preset question.
The evaluation process of the neural network model is shown in fig. 3. In the preferred embodiment, while extracting feature data from the input data, the neural network model processes the text data from the sections and the text data from the preset question with an attention mechanism, and outputs the ranking information based on the processed feature data.
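The disclosure does not specify the attention variant; one common choice, scaled dot-product cross-attention in which question token vectors attend over section token vectors, can be sketched as:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(question_vecs, section_vecs):
    """question_vecs: (nq, d); section_vecs: (ns, d).
    Returns (nq, d) question-conditioned section features."""
    d = question_vecs.shape[-1]
    weights = softmax(question_vecs @ section_vecs.T / np.sqrt(d), axis=-1)  # (nq, ns)
    return weights @ section_vecs

attended = cross_attention(np.ones((3, 4)), np.ones((5, 4)))  # shape (3, 4)
```

The attended features would then be pooled into the vector xi consumed by the feedforward scorer described earlier in the embodiment.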
206. From the ranking information of all the sections for each preset question, the highest ranking achieved for each preset question by the sections belonging to the same lecture manuscript is obtained.
In the embodiment of the present invention, each piece of ranking information corresponds to a preset score, for example rank one corresponds to 10 points, rank two to 8 points, and so on; the correspondence between rank and score is set according to actual requirements.
207. The evaluation result for the lecture manuscript is obtained according to each piece of highest ranking information of the same lecture manuscript.
In the embodiment of the present invention, for each question, the most relevant answer should be found within the content of the same lecture manuscript to determine a score. For example, suppose there are two preset questions and the sections C11 … C1n have rankings for both; the highest ranking is taken for each. Say C11 ranks second for question 1 (the highest), giving a score of 8, and C14 ranks third for question 2 (the highest), giving a score of 6.
It should be noted that the composite score may be computed by directly adding the scores for each question, or weights may be set according to the importance of the questions. For example, as in fig. 1, the answer scores may be summed to obtain a final score of 35 + 30 = 65; alternatively, if the weight of question 1 is set to 1.2 and the weight of question 2 to 0.8, the final score is 1.2 × 35 + 0.8 × 30 = 66. The specific method of computing the composite score and setting the weights is not limited.
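The two aggregation options just described (direct addition and importance weighting) amount to:

```python
def composite_score(scores, weights=None):
    """Direct addition when no weights are given; otherwise a weighted sum."""
    if weights is None:
        return sum(scores)
    return sum(w * s for w, s in zip(weights, scores))

# The worked example from the text: question scores 35 and 30.
plain = composite_score([35, 30])                 # 65
weighted = composite_score([35, 30], [1.2, 0.8])  # 66.0
```

Any other aggregation (e.g. a weighted average) would slot into the same place without changing the rest of the pipeline.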
The embodiment of the invention migrates the techniques and data of knowledge question-answering systems to construct an evaluation method for lecture manuscripts, which lack data and are hard to score. The method achieves high accuracy, the evaluation model is highly interpretable, and it can output scores together with similar cases having the corresponding scores.
An embodiment of the present invention further provides a speech manuscript evaluation device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of the above embodiments.
Embodiments of the present invention also provide a computer program product containing instructions, which when run on a computer, cause the computer to perform the method of the above embodiments.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. And obvious variations or modifications therefrom are within the scope of the invention.

Claims (9)

1. A lecture manuscript evaluation method, characterized by comprising the following steps:
acquiring a plurality of lecture manuscripts;
dividing each lecture manuscript into a plurality of sections;
processing all of the sections and a plurality of different preset questions using a neural network model, wherein each preset question, together with all of the sections, is used in turn as input data, the neural network model extracts feature data from the input data and outputs, according to the feature data, ranking information of all of the sections for the preset question, the ranking information representing the degree to which each section is recognized as answering the preset question;
and determining an evaluation result for each lecture manuscript according to the ranking information of all of the sections for each preset question.
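The flow of claim 1 can be sketched as follows. This is an illustrative sketch only: the claimed neural network model is replaced here by a trivial word-overlap scorer, and every function and variable name is an assumption, not the patented implementation.

```python
# Illustrative sketch of the evaluation flow of claim 1.
# The neural network model is stood in for by a simple word-overlap
# scorer; all names here are assumptions, not the patented design.

def split_into_sections(manuscript):
    """Divide one lecture manuscript into sections (here: by blank lines)."""
    return [s.strip() for s in manuscript.split("\n\n") if s.strip()]

def rank_sections(sections, question):
    """Stand-in for the neural network model: rank every section by how
    well it appears to answer the preset question (rank 0 = best)."""
    q_words = set(question.lower().split())
    scores = [len(q_words & set(s.lower().split())) for s in sections]
    order = sorted(range(len(sections)), key=lambda i: -scores[i])
    ranks = [0] * len(sections)
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    return ranks

def evaluate(manuscripts, questions):
    """Score each manuscript from the ranking of its sections per question."""
    all_sections, owner = [], []
    for m_idx, m in enumerate(manuscripts):
        for s in split_into_sections(m):
            all_sections.append(s)
            owner.append(m_idx)
    results = [0] * len(manuscripts)
    for q in questions:
        ranks = rank_sections(all_sections, q)
        best = {}  # best (lowest) rank achieved by each manuscript
        for idx, r in enumerate(ranks):
            best[owner[idx]] = min(best.get(owner[idx], r), r)
        for m, r in best.items():
            results[m] += len(all_sections) - r  # better rank -> more points
    return results
```

The aggregation step (keeping only each manuscript's best-ranked section per question) anticipates claim 4 below.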
2. The method of claim 1, further comprising, before processing all of the sections and the plurality of different preset questions using the neural network model:
acquiring a plurality of training data, wherein each training data comprises a plurality of sample answers, a preset question, and ranking information of the sample answers for the preset question;
and training the neural network model with the plurality of training data, wherein the neural network model outputs ranking information according to the plurality of sample answers and the preset question, and model parameters are optimized according to the difference between the output ranking information and the ranking information in the training data.
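The training signal of claim 2 can be illustrated with a pairwise margin ranking loss: the model scores each sample answer, and any pair whose predicted order disagrees with the training ranking is penalised. The patent does not disclose the concrete loss; this particular form is an assumption.

```python
# Sketch of the training objective implied by claim 2 (assumed form:
# pairwise margin ranking loss; the patent does not specify the loss).

def pairwise_ranking_loss(scores, ranks, margin=1.0):
    """scores: model outputs, one per sample answer.
    ranks: ranking information from the training data (0 = best).
    For every pair (i, j) where i is ranked above j, require
    scores[i] >= scores[j] + margin; otherwise accumulate the gap."""
    loss = 0.0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if ranks[i] < ranks[j]:  # i should outscore j
                loss += max(0.0, margin - (scores[i] - scores[j]))
    return loss
```

Gradient-based optimization of the model parameters against this loss is the "optimizing according to the difference" step of the claim.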
3. The method of claim 2, wherein acquiring the plurality of training data specifically comprises:
crawling, from a plurality of specified web pages, content relevant to the preset questions and the corresponding answer contents;
and obtaining the ranking information according to the order in which the answer contents appear on the web pages.
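Claim 3 turns the order in which answers already appear on a crawled page (for example, sorted by community votes on a Q&A site) directly into training rankings. A minimal sketch of that order-to-ranking step, with tied positions for equal vote counts (cf. claim 7); the crawling itself is omitted and the vote-count field is an assumption:

```python
# Derive ranking information from page order (claim 3), with tied
# (parallel) ranks for equal vote counts. Input format is assumed.

def ranking_from_page(answers_with_votes):
    """answers_with_votes: list of (answer_text, votes) in page order.
    Returns ranks where equal vote counts share the same (tied) rank."""
    ranks, last_votes, last_rank = [], None, -1
    for pos, (_, votes) in enumerate(answers_with_votes):
        if votes == last_votes:
            ranks.append(last_rank)  # parallel (tied) ranking
        else:
            ranks.append(pos)
            last_rank, last_votes = pos, votes
    return ranks
```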
4. The method according to any one of claims 1 to 3, wherein determining the evaluation result for each lecture manuscript according to the ranking information of all of the sections for each preset question specifically comprises:
obtaining, from the ranking information of all of the sections for each preset question, the highest ranking information achieved for each preset question by the sections belonging to the same lecture manuscript;
and obtaining an evaluation result for that lecture manuscript according to each piece of highest ranking information of the same lecture manuscript.
5. The method of claim 4, wherein each piece of ranking information corresponds to a preset score, and the evaluation result is a score obtained from the corresponding preset scores.
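Claims 4 and 5 together reduce to: for each preset question, keep only the best rank achieved by any section of a manuscript, map that rank to a preset score, and combine over questions. A sketch, where the score table and the summation are illustrative assumptions (the patent only says the result is "obtained according to each preset score"):

```python
# Claims 4-5 sketch: best rank per question -> preset score -> total.
# The score table and the use of a sum are assumptions for illustration.

PRESET_SCORES = {0: 10, 1: 7, 2: 4}  # rank -> preset score (example values)

def manuscript_score(best_ranks_per_question):
    """best_ranks_per_question: for each preset question, the best
    (lowest) rank achieved by any section of this manuscript.
    Ranks outside the table contribute nothing."""
    return sum(PRESET_SCORES.get(r, 0) for r in best_ranks_per_question)
```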
6. The method of claim 1, wherein the lecture manuscripts are based on text data obtained by speech recognition of a recording of the lecture, and pause durations in the speech are recorded during speech recognition;
and in the step of dividing each lecture manuscript into a plurality of sections, the lecture manuscript is divided according to both its semantics and the pause durations in the lecture speech.
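The pause-based part of the segmentation in claim 6 can be sketched as follows. The recogniser is assumed to emit (phrase, pause-after) pairs and the 1.5-second threshold is invented for illustration; the claim's additional semantic criterion is not modelled here.

```python
# Sketch of pause-based segmentation (claim 6). The input format and
# the threshold are assumptions; semantic splitting is not modelled.

def split_by_pauses(phrases_with_pauses, pause_threshold=1.5):
    """phrases_with_pauses: list of (text, pause_seconds_after) pairs
    from speech recognition. A pause at or above the threshold closes
    the current section."""
    sections, current = [], []
    for text, pause in phrases_with_pauses:
        current.append(text)
        if pause >= pause_threshold:  # long pause ends the section
            sections.append(" ".join(current))
            current = []
    if current:
        sections.append(" ".join(current))
    return sections
```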
7. The method according to claim 1 or 2, wherein the ranking information comprises vacant (empty) rankings and/or tied (parallel) rankings.
8. The method according to claim 1 or 2, wherein, in extracting the feature data from the input data, the neural network model processes the text data from the sections and the text data from the preset questions using an attention mechanism, and outputs the ranking information based on the processed feature data.
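Claim 8 only states that an attention mechanism relates the section text to the question text; the exact architecture is not disclosed. A generic scaled dot-product cross-attention in NumPy, with dimensions and names chosen for illustration:

```python
# Generic cross-attention between question and section token embeddings
# (claim 8 names the mechanism but not the architecture; this is a
# standard scaled dot-product attention, given as an assumed example).
import numpy as np

def cross_attention(question_vecs, section_vecs):
    """question_vecs: (m, d) token embeddings of the preset question.
    section_vecs: (n, d) token embeddings of one section.
    Each question token attends over the section tokens and returns a
    (m, d) matrix of section features weighted by relevance."""
    d = question_vecs.shape[1]
    scores = question_vecs @ section_vecs.T / np.sqrt(d)   # (m, n)
    scores -= scores.max(axis=1, keepdims=True)            # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)          # row-wise softmax
    return weights @ section_vecs                          # (m, d)
```

In a full model, the returned features would feed the ranking head that produces the ranking information.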
9. A lecture manuscript evaluation device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of any one of claims 1 to 8.
CN202110759496.0A 2021-07-06 2021-07-06 Speech manuscript evaluation method and device Active CN113255843B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110759496.0A CN113255843B (en) 2021-07-06 2021-07-06 Speech manuscript evaluation method and device
PCT/CN2021/133041 WO2023279631A1 (en) 2021-07-06 2021-11-25 Speech manuscript evaluation method and device
JP2023577794A JP2024527185A (en) 2021-07-06 2021-11-25 Method and equipment for evaluating speech manuscripts


Publications (2)

Publication Number Publication Date
CN113255843A true CN113255843A (en) 2021-08-13
CN113255843B CN113255843B (en) 2021-09-21

Family

ID=77190758



Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115545042A (en) * 2022-11-25 2022-12-30 北京优幕科技有限责任公司 Speech manuscript quality evaluation method and device
WO2023279631A1 (en) * 2021-07-06 2023-01-12 北京优幕科技有限责任公司 Speech manuscript evaluation method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304587A (en) * 2018-03-07 2018-07-20 中国科学技术大学 A kind of community's answer platform answer sort method
CN108604240A (en) * 2016-03-17 2018-09-28 谷歌有限责任公司 The problem of based on contextual information and answer interface
US20180365220A1 (en) * 2017-06-15 2018-12-20 Microsoft Technology Licensing, Llc Method and system for ranking and summarizing natural language passages
CN110210301A (en) * 2019-04-26 2019-09-06 平安科技(深圳)有限公司 Method, apparatus, equipment and storage medium based on micro- expression evaluation interviewee
CN110874716A (en) * 2019-09-23 2020-03-10 平安科技(深圳)有限公司 Interview evaluation method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
JP2024527185A (en) 2024-07-22
WO2023279631A1 (en) 2023-01-12
CN113255843B (en) 2021-09-21

Similar Documents

Publication Publication Date Title
CN110175227B (en) Dialogue auxiliary system based on team learning and hierarchical reasoning
CN109871439B (en) Question-answer community question routing method based on deep learning
CN110188272B (en) Community question-answering website label recommendation method based on user background
CN117009490A (en) Training method and device for generating large language model based on knowledge base feedback
CN110569356B (en) Interviewing method and device based on intelligent interviewing interaction system and computer equipment
CN113255843B (en) Speech manuscript evaluation method and device
CN110321421B (en) Expert recommendation method for website knowledge community system and computer storage medium
CN113656687B (en) Teacher portrait construction method based on teaching and research data
CN111552773A (en) Method and system for searching key sentence of question or not in reading and understanding task
CN108509588B (en) Lawyer evaluation method and recommendation method based on big data
US10380490B1 (en) Systems and methods for scoring story narrations
Tóth et al. Towards an accurate prediction of the question quality on Stack Overflow using a deep-learning-based NLP approach.
CN113065757A (en) Method and device for evaluating on-line course teaching quality
CN112579666A (en) Intelligent question-answering system and method and related equipment
CN115617960A (en) Post recommendation method and device
CN116796802A (en) Learning recommendation method, device, equipment and storage medium based on error question analysis
CN112100464A (en) Question-answering community expert recommendation method and system combining dynamic interest and professional knowledge
CN114416969A (en) LSTM-CNN online comment sentiment classification method and system based on background enhancement
CN113569112A (en) Tutoring strategy providing method, system, device and medium based on question
Simon Using Artificial Intelligence in the Law Review Submissions Process
CN111583363A (en) Visual automatic generation method and system for image-text news
CN114416914B (en) Processing method based on picture question and answer
CN109254993B (en) Text-based character data analysis method and system
CN115292489A (en) Enterprise public opinion analysis method, device, equipment and storage medium
CN110334204A (en) A kind of exercise similarity calculation recommended method based on user record

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant