CN109189895B - Question correcting method and device for oral calculation questions - Google Patents

Question correcting method and device for oral calculation questions

Info

Publication number
CN109189895B
CN109189895B (application CN201811125659.4A)
Authority
CN
China
Prior art keywords
searched
question
calculation
oral
test paper
Prior art date
Legal status
Active
Application number
CN201811125659.4A
Other languages
Chinese (zh)
Other versions
CN109189895A (en
Inventor
石凡
何涛
罗欢
陈明权
Current Assignee
Hangzhou Dana Technology Inc
Original Assignee
Hangzhou Dana Technology Inc
Priority date
Filing date
Publication date
Application filed by Hangzhou Dana Technology Inc filed Critical Hangzhou Dana Technology Inc
Priority to CN201811125659.4A priority Critical patent/CN109189895B/en
Publication of CN109189895A publication Critical patent/CN109189895A/en
Priority to US16/756,468 priority patent/US11721229B2/en
Priority to EP19865656.3A priority patent/EP3859558A4/en
Priority to JP2021517407A priority patent/JP7077483B2/en
Priority to PCT/CN2019/105321 priority patent/WO2020063347A1/en
Application granted granted Critical
Publication of CN109189895B publication Critical patent/CN109189895B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40Document-oriented image-based pattern recognition
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

The invention provides a question correcting method and device for oral calculation questions. First, a feature vector is obtained for each question to be searched from the text content of its question stem. A target test paper matching the test paper to be searched is then retrieved from a question bank using these feature vectors. For each question to be searched whose question type is an oral calculation question, a secondary search is performed within the target test paper based on the question's feature vector, with shortest edit distance as the matching criterion. If the question type of the matched target question is also an oral calculation question, the question to be searched is confirmed as an oral calculation question to be corrected; a preset oral calculation engine then calculates it, and the calculation result is output as its answer. The scheme provided by the invention can improve the accuracy of correcting oral calculation questions.

Description

Question correcting method and device for oral calculation questions
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a question correcting method and device for oral calculation questions, an electronic device, and a computer readable storage medium.
Background
With the continuous advance of computer technology and the informatization of education, computer technology has gradually been applied to many activities of daily education and teaching, for example in teaching evaluation scenarios. In China, the main forms for assessing basic education and students' learning are still various kinds of examinations and tests, so teachers bear great pressure from correcting test papers.
At present, many question-searching apps on intelligent terminals assist with correcting homework and test papers: an image containing the test paper to be corrected is input into such an app, and the app searches a question bank for the questions corresponding to each question in the test paper image according to the image content.
Existing question searching methods generate a feature vector for a question from the text content of its question stem and search the question bank with that vector. When generating the feature vector, different words (tokens) are weighted by word frequency: the more frequently a token occurs in the stem text, the less distinctive it is considered, and the lower the weight it is assigned.
However, for oral calculation questions, the stem text consists mostly of digits and operator symbols, whose word frequency is very high. The stem text of an oral calculation question therefore lacks distinctive high-weight tokens, so the feature vectors of different oral calculation questions differ only slightly. Once the recognition engine makes even a small recognition error, the question is matched to a different oral calculation question, causing a correction error. Correction of oral calculation questions is therefore error-prone, and its accuracy is low.
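The weighting problem described here can be illustrated with a small inverse-document-frequency computation. This is a toy sketch with an invented four-stem corpus, not the patent's actual weighting scheme; it only shows why operator tokens end up with near-zero weights:

```python
import math
from collections import Counter

def idf_weights(stems):
    """Compute inverse-document-frequency weights: tokens that appear
    in many stems (digits, operator symbols) get low weights."""
    n = len(stems)
    doc_freq = Counter()
    for stem in stems:
        doc_freq.update(set(stem.split()))
    return {tok: math.log(n / df) for tok, df in doc_freq.items()}

stems = [
    "3 + 5 =",          # oral calculation stems: only digits and operators
    "7 + 2 =",
    "9 - 4 =",
    "A train travels 120 km in 2 hours , find its speed",
]
w = idf_weights(stems)
# '=' occurs in three of four stems, so its weight is near zero,
# while a distinctive word like 'train' keeps a high weight.
assert w["="] < w["+"] < w["train"]
```

With only digits and operators available, all oral-calculation stems collapse onto nearly the same low-weight token set, which is exactly the low-discriminability problem described above.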
Disclosure of Invention
The invention aims to provide a question correcting method and device for oral calculation questions, an electronic device, and a computer readable storage medium, so as to solve the problem that existing question correcting approaches are error-prone and inaccurate when correcting oral calculation questions.
To solve this technical problem, the invention provides a question correcting method for oral calculation questions, comprising the following steps:
step S11: detecting an image of a test paper to be searched, detecting the area of each topic to be searched on the test paper to be searched, determining the topic type of each topic to be searched, and identifying the text content of the topic stem in the area of each topic to be searched;
step S12: obtaining a feature vector of each topic to be searched according to the text content of the topic stem of each topic to be searched, searching in a topic library according to the feature vector of the topic to be searched, and searching for the closest topic of the topic to be searched;
step S13: summarizing the searched test paper with the nearest question of all the questions to be searched, and determining the test paper meeting the preset conditions as the target test paper matched with the test paper to be searched;
step S14: when the test paper to be searched contains questions whose question type is an oral calculation question, for each such question, performing shortest edit distance matching between the feature vector of the question to be searched and the feature vector of each question in the target test paper, thereby determining the target question in the target test paper that matches the question to be searched; if the question type of the target question is also an oral calculation question, determining the question to be searched as an oral calculation question to be corrected;
step S15: for each oral calculation question to be corrected, calculating the question with a preset oral calculation engine, and outputting the engine's calculation result as the answer of the question, thereby completing the correction of the oral calculation questions to be corrected on the test paper to be searched.
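The shortest edit distance matching named in step S14 can be sketched with a standard Levenshtein dynamic program. The patent does not disclose its exact implementation, so this is a minimal illustration, assuming the matching is done over token sequences:

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences (e.g. token lists),
    computed by dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def match_in_paper(query, paper_questions):
    """Return the question in the target paper with the smallest
    edit distance to the query, as in step S14."""
    return min(paper_questions, key=lambda q: edit_distance(query, q))

print(edit_distance("3+5=8", "3+6=8"))  # 1
print(match_in_paper("3+5=", ["3+6=", "3+5=", "7-2="]))  # 3+5=
```

Because even a single mis-recognized character yields a distance of only 1 to the true question, the minimum-distance criterion tolerates small recognition errors that would derail a pure feature-vector match.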
Optionally, in step S14, when the question type of the target question is an oral calculation question and the position of the target question in the target test paper is the same as the position of the question to be searched in the test paper to be searched, the question to be searched is determined to be an oral calculation question to be corrected.
Optionally, when no target test paper meeting the preset condition exists in step S13 and the test paper to be searched contains questions whose question type is an oral calculation question, each such question is determined to be an oral calculation question to be corrected; for each of them, a preset oral calculation engine calculates the question and outputs the calculation result as its answer, completing the correction of the oral calculation questions on the test paper to be searched.
Optionally, step S15 further includes: checking whether the calculation result of the oral calculation engine is consistent with the corresponding reference answer of the oral calculation question to be corrected on the target test paper, and if so, outputting the calculation result as the answer of the question.
Optionally, when the calculation result of the oral calculation engine is inconsistent with the reference answer of the oral calculation question to be corrected on the target test paper, prompt information indicating the inconsistency is output, so as to draw the test paper corrector's attention to the question.
Optionally, the preset oral calculation engine includes a pre-trained first recognition model, and the first recognition model is a model based on a neural network;
in step S15, the calculation of the to-be-corrected oral calculation topic by using a preset oral calculation engine includes:
identifying the numbers, letters, characters and calculation type in the oral calculation question to be corrected through the pre-trained first recognition model, where the calculation types include: mixed operations, estimation, division with remainder, fraction calculation, unit conversion, vertical calculation and separate calculation;
and calculating according to the recognized numbers, letters, characters and calculation type to obtain the calculation result of the oral calculation question to be corrected.
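The evaluation step of the oral calculation engine can be sketched for the simplest calculation types. This is a hypothetical simplification (the patent's engine is a trained neural recognition model feeding a calculator over many calculation types); it covers only single binary operations, including division with remainder:

```python
import re

def evaluate_oral(expr):
    """Evaluate a simple oral-calculation expression of the form
    'a op b'. Inexact division is reported with a remainder
    ('q...r'), matching the division-with-remainder calculation
    type named above."""
    m = re.fullmatch(r"\s*(\d+)\s*([+\-×÷])\s*(\d+)\s*", expr)
    if not m:
        raise ValueError(f"unsupported expression: {expr!r}")
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    if op == "+":
        return str(a + b)
    if op == "-":
        return str(a - b)
    if op == "×":
        return str(a * b)
    q, r = divmod(a, b)  # division, possibly with remainder
    return f"{q}...{r}" if r else str(q)

print(evaluate_oral("17 ÷ 5"))  # 3...2
print(evaluate_oral("3 + 5"))   # 8
```

The engine's output would then be compared against the student's handwritten answer, or against the target paper's reference answer as in the optional consistency check above.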
Optionally, the step S12 further includes:
step S121, inputting the text content of the question stem of each question to be searched into a pre-trained question stem vectorization model to obtain a feature vector of the question stem of each question to be searched as a feature vector of each question to be searched, wherein the question stem vectorization model is a model based on a neural network;
and S122, searching in the question bank aiming at each question to be searched, searching for a feature vector matched with the feature vector of the question to be searched, and determining the question corresponding to the matched feature vector in the question bank as the question closest to the question to be searched.
Optionally, the topic stem vectorization model is obtained by training through the following steps:
labeling each topic sample in the first topic sample training set to label the text content of the topic stem in each topic sample;
and performing two-dimensional feature vector extraction on the text content of the question stem in each question sample by using a neural network model, thereby training to obtain the question stem vectorization model.
Optionally, an index information table is established in advance for the feature vectors of all questions on the test paper in the question bank;
step S122 further includes:
for each topic to be searched, searching a characteristic vector matched with the characteristic vector of the topic to be searched in the index information table;
and determining the corresponding topic of the matched feature vector in the index information table as the topic closest to the topic to be searched.
Optionally, before the index information table is established, the feature vectors with different lengths are grouped according to the length;
for each topic to be searched, searching a feature vector matched with the feature vector of the topic to be searched in the index information table, including:
and aiming at each topic to be searched, searching a characteristic vector matched with the characteristic vector of the topic to be searched in a group with the same or similar length to the characteristic vector of the topic to be searched in the index information table.
Optionally, in step S13, determining a test paper meeting a preset condition as a target test paper matching the test paper to be searched, where the step includes:
and determining the test paper with the maximum occurrence frequency and larger than a first preset threshold value as the target test paper matched with the test paper to be searched.
Optionally, step S11, detecting an image of a test paper to be searched, and detecting an area of each topic to be searched on the test paper to be searched, includes:
and detecting the image of the test paper to be searched by using a pre-trained detection model, and detecting the area of each question to be searched on the test paper to be searched, wherein the detection model is a model based on a neural network.
Optionally, the step S11 identifies the text content of the question stem in the area of each question to be searched, including:
and recognizing the text content of the question stem in the area of each question to be searched by using a pre-trained second recognition model, wherein the second recognition model is a model based on a neural network.
In order to achieve the above object, the present invention further provides a question correcting device for oral calculation questions, the device comprising:
the detection and identification module is used for detecting the image of the test paper to be searched, detecting the area of each question to be searched on the test paper to be searched, determining the question type of each question to be searched, and identifying the text content of the question stem in the area of each question to be searched;
the question searching module is used for obtaining the characteristic vector of each question to be searched according to the text content of the question stem of each question to be searched, searching in the question bank according to the characteristic vector of the question to be searched, and searching for the question which is closest to the question to be searched;
the test paper determining module is used for summarizing the test paper where the nearest question of all the searched questions to be searched is located, and determining the test paper meeting the preset conditions as the target test paper matched with the test paper to be searched;
the oral calculation question determining module is used for matching the feature vector of the to-be-searched question with the feature vector of each question in the target test paper by the shortest editing distance aiming at the to-be-searched question with each question type as the oral calculation question in the test paper to be searched, determining the target question matched with the to-be-searched question in the target test paper, and determining the to-be-searched question as the oral calculation question to be corrected if the question type of the target question is the oral calculation question;
and the oral calculation question correcting module is used for calculating the oral calculation questions to be corrected by using a preset oral calculation engine aiming at each oral calculation question to be corrected, outputting the calculation result of the oral calculation engine as the answer of the oral calculation questions to be corrected, and finishing correcting the oral calculation questions to be corrected on the test paper to be searched.
Optionally, the oral calculation question determining module is further configured to determine that the question to be searched is an oral calculation question to be corrected when the question type of the target question is an oral calculation question and the position of the target question in the target test paper is the same as the position of the question to be searched in the test paper to be searched.
Optionally, the test paper determining module is further configured to, when the test paper to be searched includes questions whose question type is an oral calculation question, determine each such question as an oral calculation question to be corrected, and, for each of them, calculate the question with a preset oral calculation engine and output the calculation result as its answer, so as to complete the correction of the oral calculation questions on the test paper to be searched.
Optionally, the oral calculation topic correction module is further configured to check whether a calculation result of the oral calculation engine is consistent with a reference answer of the oral calculation topic to be corrected, which corresponds to the target test paper, and if so, output the calculation result of the oral calculation engine as an answer of the oral calculation topic to be corrected.
Optionally, the oral computation question correction module is further configured to, when a computation result of the oral computation engine is inconsistent with the reference answer of the oral computation question to be corrected on the target test paper, output prompt information indicating that the reference answer of the oral computation question to be corrected is inconsistent, so as to prompt a test paper corrector to pay attention to the oral computation question to be corrected.
Optionally, the preset oral calculation engine includes a pre-trained first recognition model, and the first recognition model is a model based on a neural network;
The oral calculation question correcting module is specifically configured to recognize the numbers, letters, characters and calculation type in the oral calculation question to be corrected through the pre-trained first recognition model, where the calculation types include: mixed operations, estimation, division with remainder, fraction calculation, unit conversion, vertical calculation and separate calculation; and to calculate according to the recognized numbers, letters, characters and calculation type to obtain the calculation result of the oral calculation question to be corrected.
Optionally, the topic searching module includes:
the characteristic vector obtaining unit is used for inputting the text content of the question stem of each question to be searched into a pre-trained question stem vectorization model to obtain the characteristic vector of the question stem of each question to be searched as the characteristic vector of each question to be searched, wherein the question stem vectorization model is a model based on a neural network;
and the question searching unit is used for searching in the question bank aiming at each question to be searched, searching the characteristic vector matched with the characteristic vector of the question to be searched, and determining the question corresponding to the matched characteristic vector in the question bank as the question closest to the question to be searched.
Optionally, the topic stem vectorization model is obtained by training through the following steps:
labeling each topic sample in the first topic sample training set to label the text content of the topic stem in each topic sample;
and performing two-dimensional feature vector extraction on the text content of the question stem in each question sample by using a neural network model, thereby training to obtain the question stem vectorization model.
Optionally, the apparatus further comprises:
the preprocessing module is used for establishing an index information table for the characteristic vector of each question on the test paper in the question bank in advance;
the title searching unit is specifically configured to search, for each title to be searched, a feature vector matched with the feature vector of the title to be searched in the index information table; and determining the corresponding topic of the matched feature vector in the index information table as the topic closest to the topic to be searched.
Optionally, the preprocessing module is further configured to group the feature vectors with different lengths according to length before establishing the index information table;
the topic searching unit is specifically configured to search, for each topic to be searched, a feature vector matched with the feature vector of the topic to be searched in a group of the index information table, where the length of the group of the feature vector is the same as or similar to the length of the feature vector of the topic to be searched.
Optionally, the test paper determining module is specifically configured to determine the test paper with the largest frequency of occurrence and larger than a first preset threshold as the target test paper matched with the test paper to be searched.
Optionally, the detection and identification module is specifically configured to detect the image of the test paper to be searched by using a pre-trained detection model, and detect the area of each topic to be searched on the test paper to be searched, where the detection model is a model based on a neural network.
Optionally, the detection and recognition module is specifically configured to recognize the text content of the question stem in the region of each question to be searched by using a second recognition model trained in advance, where the second recognition model is a model based on a neural network.
In order to achieve the above object, the present invention further provides an electronic device, which includes a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for implementing the method steps of the question correcting method for oral calculation questions described above when executing the program stored in the memory.
To achieve the above object, the present invention further provides a computer-readable storage medium in which a computer program is stored; when executed by a processor, the computer program implements the method steps of any of the above question correcting methods for oral calculation questions.
Compared with the prior art, when processing the test paper to be searched, a feature vector is first obtained for each question to be searched from the text content of its question stem, and the target test paper matching the test paper to be searched is then retrieved from the question bank using these feature vectors. For each question to be searched whose question type is an oral calculation question, a secondary search is performed within the target test paper based on the question's feature vector, with shortest edit distance as the matching criterion; if the question type of the matched target question is also an oral calculation question, the question to be searched is confirmed as an oral calculation question to be corrected, a preset oral calculation engine then calculates it, and the calculation result is output as its answer. Because the feature vectors obtained from the stem text of oral calculation questions have low discriminability, the reference answer in the target test paper retrieved from the question bank is quite likely not to match the oral calculation question to be corrected. By determining the oral calculation questions through a secondary search and calculating them with the oral calculation engine, the accuracy of correcting oral calculation questions can be improved.
Drawings
FIG. 1 is a schematic flow chart of a question correcting method for oral calculation questions according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a question correcting device for oral calculation questions according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The question correcting method and device for oral calculation questions provided by the invention are further described in detail below with reference to the accompanying drawings and specific embodiments. The advantages and features of the present invention will become more fully apparent from the appended claims and the following description.
In order to solve the problems in the prior art, embodiments of the present invention provide a question correcting method and device for oral calculation questions, an electronic device, and a computer-readable storage medium.
It should be noted that the question correcting method for oral calculation questions according to the embodiment of the present invention can be applied to the question correcting device for oral calculation questions according to the embodiment, and this device can be configured on an electronic device. The electronic device may be a personal computer, a mobile terminal, and the like; the mobile terminal may be a hardware device with an operating system, such as a mobile phone or a tablet computer.
Fig. 1 is a schematic flow chart of a question correcting method for oral calculation questions according to an embodiment of the present invention. Referring to fig. 1, the method may include the following steps:
step S11: detecting the image of the test paper to be searched, detecting the area of each topic to be searched on the test paper to be searched, determining the topic type of each topic to be searched, and identifying the character content of the topic stem in the area of each topic to be searched.
The image of the test paper to be searched may be any image containing the test paper to be searched. Specifically, a detection model based on a neural network can detect the image and find the region of each question to be searched on the test paper. The detection model may, for example, be obtained by training on samples from a test paper sample training set with a deep Convolutional Neural Network (CNN). The trained detection model extracts a two-dimensional feature vector from the image, generates anchors of different shapes in each grid cell of the feature map, labels the detected region of each question with a bounding box (ground truth), and applies regression between the bounding boxes and the generated anchors to bring the boxes closer to the actual positions of the questions. After the question regions are identified, each question to be searched may be cropped into a separate image, or, without actual cropping, each region may simply be treated as a separate region image during processing; the questions can also be sorted according to their position information.
After the region of each question to be searched is detected, the question type of each question can be determined with a classification recognition model, which is a neural-network-based model. The classification recognition model may, for example, be trained on samples from a test paper sample training set with a deep convolutional neural network, where the questions in each sample are labeled with their question type. Question types may include calculation questions, oral calculation questions, fill-in-the-blank questions, multiple-choice questions, word problems, and so on.
Meanwhile, the text content of the question stem in each question region can be recognized with a second recognition model, also based on a neural network. First, the components of the question to be searched are labeled; the components may include the question stem, the answer and/or a picture. The text content of the stem is then recognized by the second recognition model. The second recognition model can be built from dilated (atrous) convolutions and an attention model: dilated convolutions extract features from the bounding boxes corresponding to the stem, answer and/or picture, and the attention model decodes the extracted features into characters.
Step S12: and obtaining the characteristic vector of each topic to be searched according to the text content of the topic stem of each topic to be searched, searching in a topic library according to the characteristic vector of the topic to be searched, and searching for the topic closest to the topic to be searched.
Specifically, the step S12 may further include:
step S121, inputting the text content of the question stem of each question to be searched into a pre-trained question stem vectorization model to obtain a feature vector of the question stem of each question to be searched as the feature vector of each question to be searched, wherein the question stem vectorization model is a model based on a neural network.
For example, suppose the text content of the stem of a question to be searched is "4. Xiao Ming has walked half of the whole distance in 3 minutes; how many meters is he from the school? (6 points)". Inputting this text into the pre-trained stem vectorization model (a sent2vec model) yields the feature vector of the stem, which can be expressed as [x0, x1, x2 … xn].
The topic stem vectorization model may be a neural network-based model, such as a CNN model, and may be obtained through the following training steps: labeling each topic sample in the first topic sample training set to label the text content of the topic stem in each topic sample; and performing two-dimensional feature vector extraction on the text content of the question stem in each question sample by using a neural network model, thereby training to obtain the question stem vectorization model. The specific training process belongs to the prior art, and is not described herein.
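The interface of the vectorization step — stem text in, fixed-length normalized vector out — can be sketched without a trained model. The toy below hashes character bigrams into a vector; it is a stand-in for illustration only and is not the patent's trained sent2vec/CNN model:

```python
# Toy stand-in for the stem vectorization step: hash character bigrams
# of a question stem into a fixed-length feature vector [x0, x1, ... xn].
# The patent uses a trained neural model; this sketch only shows the
# text -> vector interface that the later search steps rely on.

def stem_to_vector(stem, dim=16):
    vec = [0.0] * dim
    for a, b in zip(stem, stem[1:]):            # character bigrams
        vec[(ord(a) * 31 + ord(b)) % dim] += 1.0  # deterministic bucket
    norm = sum(x * x for x in vec) ** 0.5 or 1.0
    return [x / norm for x in vec]              # L2-normalised

v = stem_to_vector("385×8-265=( )")
print(len(v))  # 16
```

Because the output is L2-normalized, downstream cosine comparisons reduce to dot products.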
And S122, searching in the question bank aiming at each question to be searched, searching for a feature vector matched with the feature vector of the question to be searched, and determining the question corresponding to the matched feature vector in the question bank as the question closest to the question to be searched.
The feature vector matched with the feature vector of the question to be searched can be found in the question bank by approximate vector search, specifically by searching the question bank for the feature vector closest to that of the question to be searched. It can be understood that the similarity measure between vectors is usually computed as a "distance" between them; common measures include the Euclidean distance, the Manhattan distance, and the cosine of the included angle (cosine similarity). The measure adopted in this embodiment is the cosine of the included angle.
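The cosine-of-included-angle measure named above is straightforward to state in code:

```python
# Cosine of the included angle, the similarity measure this embodiment
# uses to compare a question's feature vector against the question bank.

import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

print(cosine_similarity([1, 0, 1], [1, 0, 1]))  # 1.0 — identical direction
print(cosine_similarity([1, 0], [0, 1]))        # 0.0 — orthogonal
```

A value of 1 means the vectors point the same way regardless of magnitude, which is why cosine is preferred when stem lengths (and hence raw vector magnitudes) vary.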
Preferably, in order to facilitate the search of the feature vector, an index information table may be established in advance for the feature vector of each question on the test paper in the question bank. The index information table can store the feature vector of each topic in the topic library, the specific content of the topic, the ID of the test paper where the topic is located, and the like.
Accordingly, step S122 may further include: for each topic to be searched, searching a characteristic vector matched with the characteristic vector of the topic to be searched in the index information table; and determining the corresponding topic of the matched feature vector in the index information table as the topic closest to the topic to be searched.
It can be understood that after finding the matched feature vector in the index information table, finding the closest topic in the index information table, the specific content (including the stem, answer and/or picture of the topic) of the closest topic and the ID information of the test paper where the closest topic is located can be obtained.
Preferably, before the index information table is established, feature vectors of different lengths may be grouped by length. Then, when searching the index information table for a feature vector matching that of the question to be searched, the group whose length is the same as (or close to) the length of the query vector is located first, and the matching vector is searched for only within that group. When grouping, feature vectors of identical length may form one group, or feature vectors whose lengths fall within a certain range may form one group; the present invention does not limit this. Grouping by length in this way lets later searches query only the group corresponding to the query vector's length, which improves search speed. It can be understood that feature vectors differ in length because stems differ in word count.
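The length-grouped index described above can be sketched as a dictionary keyed by vector length. Field names such as `topic_id` and `paper_id` are illustrative stand-ins for the "specific content of the topic" and "ID of the test paper" the table stores:

```python
# Sketch of the length-grouped index information table: vectors are
# bucketed by length so a query scans only the matching-length bucket.
# Entry fields (topic_id, paper_id) are illustrative, not from the patent.

from collections import defaultdict

def build_index(entries):
    """entries: list of (vector, topic_id, paper_id) tuples."""
    index = defaultdict(list)
    for vec, topic_id, paper_id in entries:
        index[len(vec)].append((vec, topic_id, paper_id))
    return index

index = build_index([
    ([0.1, 0.9], "t1", "paper-A"),
    ([0.8, 0.2], "t2", "paper-A"),
    ([0.3, 0.3, 0.4], "t3", "paper-B"),
])
print(sorted(index))  # bucket lengths present: [2, 3]
print(len(index[2]))  # 2 vectors in the length-2 bucket
```

A length-2 query then compares against two candidates instead of three; on a real question bank the pruning is proportionally much larger.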
Step S13: summarizing the searched test paper with the nearest question of all the questions to be searched, and determining the test paper meeting the preset conditions as the target test paper matched with the test paper to be searched.
Determining the test paper meeting the preset condition as the target test paper matched with the test paper to be searched specifically includes: determining the test paper that occurs most frequently, with a frequency greater than a first preset threshold, as the target test paper. In practice, each question in the question bank carries the ID of its test paper and its position within that paper, so the paper to which each closest question belongs can be determined from that paper ID, and the paper ID that occurs most often with a frequency above the first preset threshold is taken as the matched target test paper. The occurrence frequency of a given test paper can be calculated as the ratio of the number of questions to be searched whose closest question lies on that paper — that is, the number of questions matched with that paper — to the total number of questions to be searched on the test paper to be searched. It can be understood that if even the highest occurrence frequency is below the first preset threshold, too few questions match between that paper and the test paper to be searched, and it may be considered that no target test paper matching the test paper to be searched exists in the question bank.
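The tally-and-threshold logic of step S13 can be sketched in a few lines; the 0.5 threshold is an assumed value, since the patent leaves the first preset threshold unspecified:

```python
# Sketch of step S13: tally which paper each closest question belongs to
# and keep the paper whose share of matches exceeds a first preset
# threshold.  The 0.5 threshold is an assumed value for illustration.

from collections import Counter

def match_target_paper(closest_paper_ids, total_to_search, threshold=0.5):
    counts = Counter(closest_paper_ids)
    paper_id, hits = counts.most_common(1)[0]
    frequency = hits / total_to_search  # share of questions matched
    return paper_id if frequency > threshold else None

print(match_target_paper(["A", "A", "A", "B"], 4))  # "A" (3/4 > 0.5)
print(match_target_paper(["A", "B", "C", "D"], 4))  # None (1/4 <= 0.5)
```

Returning `None` models the "no target test paper exists in the question bank" branch that the next paragraph handles.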
Further, if no target test paper meeting the preset condition exists in step S13 and the test paper to be searched contains questions to be searched whose question type is oral calculation question, each such question may be determined directly as an oral calculation question to be corrected. For each oral calculation question to be corrected, a preset oral calculation engine is then used to compute the question, and the computed result is output as its answer, completing the correction of the oral calculation questions to be corrected on the test paper to be searched.
Step S14: under the condition that the test paper to be searched contains the to-be-searched questions with the topic types of the oral questions, aiming at the to-be-searched questions with each topic type of the oral questions, carrying out shortest editing distance matching on the feature vector of the to-be-searched questions and the feature vector of each topic in the target test paper, determining the target questions matched with the to-be-searched questions in the target test paper, and if the topic types of the target questions are the oral questions, determining the to-be-searched questions as the to-be-corrected oral questions.
Specifically, for questions to be searched whose question type is oral calculation question, the shortest-edit-distance matching process may be referred to as a secondary search, through which the oral calculation questions on the test paper to be searched are further confirmed. During the secondary search, for each question to be searched whose question type is oral calculation question, the question in the target test paper whose shortest edit distance to the question to be searched is both the smallest and below a second preset threshold is taken as the search result — that is, the target question matched with the question to be searched in the target test paper. If the question type of the target question is also oral calculation question, the question to be searched can be confirmed as genuinely an oral calculation question and is therefore determined to be an oral calculation question to be corrected. The algorithm for shortest-edit-distance matching of feature vectors is a conventional computation in the field and is not described here again.
For example, for oral topic a: "385 × 8-265 ═ ()" and oral title B: since the two topics are very similar to the feature vector obtained by the topic vectorization, "375 × 8-265 ()", if a certain topic in the test paper to be searched is "385 × 8-265 ()", the spoken topic B in the topic library can be easily determined as the closest topic of the topic in step S12, i.e., the search result for the topic is inaccurate. In order to improve the accuracy, the subject is searched for in the target test paper for the second time, the searched standard is that the shortest editing distance of characters is the smallest, because the shortest editing distance does not calculate the weight, the target subject corresponding to the subject in the target test paper, namely the oral calculation subject A, can be easily found, and because the subject type of the oral calculation subject A is marked as the oral calculation subject, the subject is determined to be the oral calculation subject indeed.
Further, in step S14, when the question type of the target question is oral calculation question and the position of the target question in the target test paper is the same as the position of the question to be searched in the test paper to be searched, the question to be searched can be determined as an oral calculation question to be corrected. Confirming the positions of the two — that is, comparing the position, within the test paper to be searched, of the question identified as an oral calculation question with the position of the target question within the target test paper — ensures, when the positions coincide, that the target question really is the correct search result for the question to be searched. This avoids the case where, because of vector differences during recognition, the question to be searched is erroneously matched to another similar question in the target test paper. For example, if the area of the oral calculation question to be corrected in the test paper to be searched coincides with the area of the target question in the target test paper, the two positions are the same.
Step S15: aiming at each to-be-corrected oral calculation question, a preset oral calculation engine is used for calculating the to-be-corrected oral calculation question, a calculation result of the oral calculation engine is output to serve as an answer of the to-be-corrected oral calculation question, and correction of the to-be-corrected oral calculation question on a to-be-searched test paper is completed.
The preset oral calculation engine may include a pre-trained first recognition model, which is a neural-network-based model. Like the second recognition model, the first recognition model can be built on dilated convolutions and an attention model: dilated convolutions extract features from the oral calculation question to be corrected, and the attention model then decodes the extracted features into characters.
In step S15, calculating the oral calculation question to be corrected with a preset oral calculation engine may include: first, recognizing the numbers, letters, words, characters and calculation type in the oral calculation question to be corrected through the pre-trained first recognition model, where the calculation types may include mixed operation, estimation, division with remainder, fraction calculation, unit conversion, vertical-form calculation and off-form calculation; then computing the result of the oral calculation question to be corrected from the recognized numbers, letters, characters and calculation type. For example, if the oral calculation question to be corrected is "385×8-265=( )", the oral calculation engine recognizes "3", "8", "5", "×", "8", "-", "2", "6", "5", "=", "(", ")" through the first recognition model, determines the calculation type to be four-operation mixed arithmetic, and then computes the result automatically.
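The compute step for a four-operation question like "385×8-265=( )" can be sketched as follows; recognition of the character sequence (the first recognition model's job) is assumed to have already happened, and the helper name is hypothetical:

```python
# Minimal sketch of the oral calculation engine's compute step for a
# mixed-operation question such as "385×8-265=( )".  Recognition is
# assumed done; only evaluation of the recognised expression is shown.

def solve_oral_question(question):
    lhs = question.split("=")[0]                    # drop the "=( )" blank
    expr = lhs.replace("×", "*").replace("÷", "/")  # normalise operators
    # eval over an empty namespace is fine for this sketch; a production
    # engine would use a proper arithmetic parser instead.
    return eval(expr, {"__builtins__": {}})

print(solve_oral_question("385×8-265=( )"))  # 2815
```

The result (385 × 8 = 3080, minus 265 = 2815) is what the engine would output as the answer of the question to be corrected.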
Further, to ensure that the correction result of the oral subjects is accurate, step S15 may further include: and checking whether the calculation result of the calculation engine is consistent with the corresponding reference answer of the to-be-corrected calculation subject on the target test paper, and if so, outputting the calculation result of the calculation engine as the answer of the to-be-corrected calculation subject.
Further, when the calculation result of the calculation engine is inconsistent with the reference answer of the to-be-corrected calculation subject on the target test paper, outputting prompt information for indicating that the reference answer of the to-be-corrected calculation subject is inconsistent so as to prompt a test paper corrector to pay attention to the to-be-corrected calculation subject.
For example, if the calculation result of the oral calculation engine is consistent with the corresponding reference answer of the oral calculation question to be corrected on the target test paper, the calculation result is displayed in the area of that question; if they are inconsistent, prompt information is displayed in that area instead, which may be the words "Answer to be confirmed; please correct manually".
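The consistency check just described amounts to a small branch; the prompt string is the example wording from the paragraph above, not a fixed message mandated by the patent:

```python
# Sketch of the verification in step S15: the engine's result is shown
# as the answer only when it agrees with the target paper's reference
# answer; otherwise the example prompt text is emitted instead.

def correction_output(engine_result, reference_answer):
    if str(engine_result) == str(reference_answer):
        return str(engine_result)
    return "Answer to be confirmed; please correct manually"

print(correction_output(2815, 2815))  # "2815"
print(correction_output(2815, 2825))  # prompt for manual correction
```

This double-check guards against both a misrecognized question and a wrong reference answer in the bank, since either causes a mismatch that is surfaced to the human corrector.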
In summary, compared with the prior art, the present embodiment obtains the feature vector of each question to be searched from the text content of its stem, and uses those feature vectors to retrieve, from the question bank, the target test paper matched with the test paper to be searched. For each question to be searched whose question type is oral calculation question, a secondary search is performed against the feature vectors of the questions in the target test paper, with the smallest shortest edit distance as the criterion; if the matched target question's type is also oral calculation question, the question to be searched is confirmed as an oral calculation question to be corrected, which a preset oral calculation engine then computes, outputting the result as its answer. Because the feature vectors obtained from the stem text of oral calculation questions have low discriminability, the reference answers in the target test paper retrieved from the question bank are relatively likely not to match the oral calculation question to be corrected; determining such questions by secondary search and computing them with the oral calculation engine therefore improves the accuracy of correcting oral calculation questions.
Corresponding to the embodiments of the question correcting method for oral calculation questions above, the present invention provides a question correcting device for oral calculation questions. Referring to fig. 2, the device may include:
the detection and identification module 21 may be configured to detect an image of a test paper to be searched, detect an area of each topic to be searched on the test paper to be searched, determine a topic type of each topic to be searched, and identify text content of a topic stem in the area of each topic to be searched;
the question searching module 22 is configured to obtain a feature vector of each question to be searched according to the text content of the question stem of each question to be searched, search in the question bank according to the feature vector of the question to be searched, and search for a question that is closest to the question to be searched;
the test paper determining module 23 may be configured to summarize the test paper where the closest question of all the to-be-searched questions is located, and determine the test paper meeting the preset condition as the target test paper matched with the to-be-searched test paper;
the oral calculation question determining module 24 is configured to, under the condition that the test paper to be searched contains questions to be searched whose question type is oral calculation question, perform, for each such question, shortest-edit-distance matching between the feature vector of the question to be searched and the feature vector of each question in the target test paper, determine the target question matched with the question to be searched in the target test paper, and, if the question type of the target question is oral calculation question, determine the question to be searched as an oral calculation question to be corrected;
the oral calculation question correcting module 25 is configured to calculate, for each oral calculation question to be corrected, the oral calculation question to be corrected by using a preset oral calculation engine, output a calculation result of the oral calculation engine as an answer of the oral calculation question to be corrected, and complete correction of the oral calculation question to be corrected on the test paper to be searched.
Optionally, the oral calculation question determining module 24 may be further configured to determine the question to be searched as an oral calculation question to be corrected under the condition that the question type of the target question is oral calculation question and the position of the target question in the target test paper is the same as the position of the question to be searched in the test paper to be searched.
Optionally, the test paper determining module 23 may be further configured to, in the absence of a target test paper meeting a preset condition, determine, as a to-be-corrected mouth calculation question, a to-be-searched question with a question type of mouth calculation question when the to-be-searched test paper includes the to-be-searched question with the question type of mouth calculation question, calculate, by using a preset mouth calculation engine, the to-be-corrected mouth calculation question for each to-be-corrected mouth calculation question, output a calculation result of the to-be-corrected mouth calculation question as an answer of the to-be-corrected mouth calculation question, and complete correction of the to-be-corrected mouth calculation question on the to-be-searched test paper.
Optionally, the oral calculation topic correction module 25 may be further configured to check whether a calculation result of the oral calculation engine is consistent with a reference answer of the oral calculation topic to be corrected, which corresponds to the target test paper, and if so, output the calculation result of the oral calculation engine as an answer of the oral calculation topic to be corrected.
Optionally, the oral computation question correcting module 25 may be further configured to output a prompt message indicating that the reference answer of the oral computation question to be corrected is inconsistent when the computation result of the oral computation engine is inconsistent with the reference answer of the oral computation question to be corrected on the target test paper, so as to prompt the test paper corrector to pay attention to the oral computation question to be corrected.
Optionally, the preset oral calculation engine may include a pre-trained first recognition model, where the first recognition model is a neural network-based model;
the oral calculation topic correction module 25 can be specifically configured to recognize, through the pre-trained first recognition model, numbers, letters, characters, and calculation types in the oral calculation topic to be corrected, where the calculation types include: mixing operation, estimation, division with remainder, fraction calculation, unit conversion, vertical calculation and separate calculation; and calculating according to the recognized numbers, letters, characters and calculation types to obtain a calculation result of the to-be-corrected oral calculation subject.
Optionally, the topic searching module 22 may include:
the feature vector obtaining unit can be used for inputting the text content of the question stem of each question to be searched into a pre-trained question stem vectorization model to obtain the feature vector of the question stem of each question to be searched as the feature vector of each question to be searched, wherein the question stem vectorization model is a model based on a neural network;
the topic searching unit can be used for searching in the topic library aiming at each topic to be searched, searching for a feature vector matched with the feature vector of the topic to be searched, and determining the topic corresponding to the matched feature vector in the topic library as the topic closest to the topic to be searched.
Optionally, the topic stem vectorization model may be obtained by training through the following steps:
labeling each topic sample in the first topic sample training set to label the text content of the topic stem in each topic sample;
and performing two-dimensional feature vector extraction on the text content of the question stem in each question sample by using a neural network model, thereby training to obtain the question stem vectorization model.
Optionally, the apparatus may further include:
the preprocessing module can be used for establishing an index information table for the characteristic vector of each question on the test paper in the question bank in advance;
the topic searching unit may be specifically configured to search, for each topic to be searched, a feature vector matched with the feature vector of the topic to be searched in the index information table; and determining the corresponding topic of the matched feature vector in the index information table as the topic closest to the topic to be searched.
Optionally, the preprocessing module may be further configured to group the feature vectors with different lengths according to the lengths before establishing the index information table;
the topic searching unit may be specifically configured to search, for each topic to be searched, a feature vector that matches the feature vector of the topic to be searched in a group of the index information table that has the same length as or is close to the feature vector of the topic to be searched.
Optionally, the test paper determining module 23 may be specifically configured to determine the test paper with the largest occurrence frequency and larger than a first preset threshold as the target test paper matched with the test paper to be searched.
Optionally, the detection and identification module 21 may be specifically configured to detect the image of the test paper to be searched by using a pre-trained detection model, and detect the area of each topic to be searched on the test paper to be searched, where the detection model is a model based on a neural network.
Optionally, the detection and recognition module 21 may be specifically configured to recognize the text content of the question stem in the area of each question to be searched by using a second recognition model trained in advance, where the second recognition model is a model based on a neural network.
The embodiment of the present invention further provides an electronic device, as shown in fig. 3, which includes a processor 301, a communication interface 302, a memory 303, and a communication bus 304, where the processor 301, the communication interface 302, and the memory 303 complete mutual communication through the communication bus 304,
a memory 303 for storing a computer program;
the processor 301, when executing the program stored in the memory 303, implements the following steps:
step S11: detecting an image of a test paper to be searched, detecting the area of each topic to be searched on the test paper to be searched, determining the topic type of each topic to be searched, and identifying the text content of the topic stem in the area of each topic to be searched;
step S12: obtaining a feature vector of each topic to be searched according to the text content of the topic stem of each topic to be searched, searching in a topic library according to the feature vector of the topic to be searched, and searching for the closest topic of the topic to be searched;
step S13: summarizing the searched test paper with the nearest question of all the questions to be searched, and determining the test paper meeting the preset conditions as the target test paper matched with the test paper to be searched;
step S14: under the condition that the test paper to be searched contains the to-be-searched questions with the topic types of oral questions, aiming at the to-be-searched questions with each topic type of oral questions, carrying out shortest editing distance matching on the feature vector of the to-be-searched question and the feature vector of each topic in the target test paper, determining the target question matched with the to-be-searched question in the target test paper, and if the topic types of the target questions are oral questions, determining the to-be-searched question as the to-be-corrected oral questions;
step S15: aiming at each to-be-corrected oral calculation question, a preset oral calculation engine is used for calculating the to-be-corrected oral calculation question, a calculation result of the oral calculation engine is output to serve as an answer of the to-be-corrected oral calculation question, and correction of the to-be-corrected oral calculation question on the to-be-searched test paper is completed.
For specific implementation and related explanation of each step of the method, reference may be made to the method embodiment shown in fig. 1, which is not described herein again.
In addition, other implementation manners of the title correcting method for the oral calculation title, which are realized by the processor 301 executing the program stored in the memory 303, are the same as the implementation manners mentioned in the foregoing method embodiment section, and are not described herein again.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
The embodiment of the invention also provides a computer readable storage medium, a computer program is stored in the computer readable storage medium, and when the computer program is executed by a processor, the steps of the title correction method for the oral calculation questions are realized.
It should be noted that, in the present specification, all the embodiments are described in a related manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the electronic device, and the computer-readable storage medium, since they are substantially similar to the embodiments of the method, the description is simple, and for the relevant points, reference may be made to the partial description of the embodiments of the method.
The above description is only for the purpose of describing the preferred embodiments of the present invention, and is not intended to limit the scope of the present invention, and any variations and modifications made by those skilled in the art based on the above disclosure are within the scope of the appended claims.

Claims (18)

1. A question correcting method for oral calculation questions, the method comprising:
step S11: detecting an image of a test paper to be searched, detecting the area of each topic to be searched on the test paper to be searched, determining the topic type of each topic to be searched, and identifying the text content of the topic stem in the area of each topic to be searched;
step S12: obtaining a feature vector of each topic to be searched according to the text content of the topic stem of each topic to be searched, searching in a topic library according to the feature vector of the topic to be searched, and searching for the closest topic of the topic to be searched;
step S13: summarizing the searched test paper with the nearest question of all the questions to be searched, and determining the test paper meeting the preset conditions as the target test paper matched with the test paper to be searched;
step S14: under the condition that the test paper to be searched contains questions to be searched whose question type is oral calculation question, for each such question to be searched, performing shortest-edit-distance matching between the feature vector of the question to be searched and the feature vector of each question in the target test paper, and determining the target question matched with the question to be searched in the target test paper; if the question type of the target question is oral calculation question and the position of the target question in the target test paper is the same as the position of the question to be searched in the test paper to be searched, determining the question to be searched as an oral calculation question to be corrected, wherein if the area of the oral calculation question to be corrected in the test paper to be searched coincides with the area of the target question in the target test paper, the positions of the two are the same;
step S15: aiming at each to-be-corrected oral calculation question, a preset oral calculation engine is used for calculating the to-be-corrected oral calculation question, a calculation result of the oral calculation engine is output to serve as an answer of the to-be-corrected oral calculation question, and correction of the to-be-corrected oral calculation question on the to-be-searched test paper is completed.
2. The question correcting method for oral calculation questions according to claim 1, wherein, when no target test paper satisfying the preset condition exists in step S13 and the test paper to be searched contains questions to be searched whose question type is oral calculation question, each such question is determined as an oral calculation question to be corrected; for each oral calculation question to be corrected, the preset oral calculation engine calculates the question and outputs its calculation result as the answer of the question, thereby completing the correction of the oral calculation questions to be corrected on the test paper to be searched.
3. The question correcting method for oral calculation questions according to claim 1, wherein step S15 further comprises: checking whether the calculation result of the oral calculation engine is consistent with the reference answer of the corresponding oral calculation question to be corrected on the target test paper, and if so, outputting the calculation result of the oral calculation engine as the answer of the oral calculation question to be corrected.
4. The question correcting method for oral calculation questions according to claim 3, wherein, when the calculation result of the oral calculation engine is inconsistent with the reference answer of the oral calculation question to be corrected on the target test paper, prompt information indicating the inconsistency is output, so as to draw the corrector's attention to that oral calculation question.
5. The question correcting method for oral calculation questions according to claim 1, wherein the preset oral calculation engine comprises a pre-trained first recognition model, the first recognition model being a neural-network-based model;
in step S15, calculating the oral calculation question to be corrected with the preset oral calculation engine comprises:
recognizing, through the pre-trained first recognition model, the numbers, letters, characters and calculation type in the oral calculation question to be corrected, the calculation types comprising: mixed operations, estimation, division with a remainder, fraction calculation, unit conversion, vertical-form calculation and step-by-step calculation;
and calculating, according to the recognized numbers, letters, characters and calculation type, the calculation result of the oral calculation question to be corrected.
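Once recognition has produced the printed stem and the calculation type, grading a mixed-operations or fraction question reduces to evaluating the expression exactly and comparing it with the pupil's answer. A minimal sketch of that final step follows; it is our illustration, not the patent's engine, and it handles only the mixed-operations and fraction cases (estimation, unit conversion, etc. would need their own handlers).

```python
from fractions import Fraction
import re

def evaluate(expr):
    """Evaluate a recognized arithmetic stem such as '3+4×2' exactly.
    Fractions avoid floating-point error on stems like '1/2+1/4'."""
    expr = expr.replace('×', '*').replace('÷', '/')
    if not re.fullmatch(r'[0-9+\-*/(). ]+', expr):
        raise ValueError('unrecognized symbol in ' + expr)
    # Wrap every integer literal in Fraction(...) before evaluating,
    # so the whole computation stays exact rational arithmetic.
    return eval(re.sub(r'(\d+)', r'Fraction(\1)', expr),
                {'Fraction': Fraction})

def grade(stem, pupil_answer):
    """Compare the engine's result with the handwritten answer."""
    return evaluate(stem) == Fraction(pupil_answer)
```

For example, `grade("3+4×2", "11")` is true while `grade("3+4×2", "14")` is false, catching the classic precedence mistake.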
6. The question correcting method for oral calculation questions according to claim 1, wherein step S12 further comprises:
step S121: inputting the text content of the question stem of each question to be searched into a pre-trained question-stem vectorization model to obtain the feature vector of the question stem as the feature vector of the question to be searched, the question-stem vectorization model being a neural-network-based model;
step S122: for each question to be searched, searching the question bank for a feature vector matching its feature vector, and determining the question in the question bank corresponding to the matching feature vector as the question closest to the question to be searched.
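The vectorize-then-search flow of steps S121-S122 can be sketched like this. The real stem-vectorization model is a trained neural network; here we stub it with a deterministic bag-of-characters embedding purely to show the search flow, and all names are ours.

```python
import math

def stem_to_vector(stem, dim=16):
    """Stand-in for the trained stem-vectorization model: a normalized
    bag-of-characters embedding (the real model is a neural network)."""
    v = [0.0] * dim
    for ch in stem:
        v[ord(ch) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def nearest_question(stem, question_bank):
    """question_bank: list of (question_id, vector) pairs. Returns the
    id of the bank question whose vector is closest to the stem's
    vector (maximum cosine similarity on unit vectors)."""
    q = stem_to_vector(stem)
    return max(question_bank,
               key=lambda item: sum(a * b for a, b in zip(q, item[1])))[0]
```

Querying with a stem that already exists in the bank returns that question's id, since identical unit vectors have cosine similarity 1.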
7. The question correcting method for oral calculation questions according to claim 6, wherein the question-stem vectorization model is trained by:
annotating each question sample in a first question-sample training set so as to label the text content of the question stem in each question sample;
and extracting two-dimensional feature vectors from the text content of the question stems with a neural network model, thereby training the question-stem vectorization model.
8. The question correcting method for oral calculation questions according to claim 6, wherein an index information table is established in advance for the feature vector of each question on each test paper in the question bank;
and step S122 further comprises:
for each question to be searched, searching the index information table for a feature vector matching its feature vector;
and determining the question corresponding to the matching feature vector in the index information table as the question closest to the question to be searched.
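An index information table of this kind can be as simple as a hash map from feature vector to question id, built once over the whole bank so that each query is a constant-time lookup instead of a linear scan. A minimal sketch under that assumption (the patent does not specify the table's data structure):

```python
def build_index(question_bank):
    """Index information table: feature vector (as a hashable tuple)
    -> question id, built in advance for every question in the bank."""
    return {tuple(vec): qid for qid, vec in question_bank}

def lookup(index, query_vec):
    """Exact-match lookup of a query feature vector in the index
    table; returns the matching question id, or None if absent."""
    return index.get(tuple(query_vec))
```

The design choice here is the usual precompute-once trade-off: building the table costs one pass over the bank, after which each question to be searched is resolved without touching the other entries.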
9. The question correcting method for oral calculation questions according to claim 8, wherein feature vectors of different lengths are grouped by length before the index information table is established;
and searching the index information table for a feature vector matching the feature vector of each question to be searched comprises:
for each question to be searched, searching for a matching feature vector within the group of the index information table whose length is the same as or similar to the length of the feature vector of the question to be searched.
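The length-grouping of claim 9 prunes the search space: only vectors whose length is the same as, or close to, the query's length are ever compared. A small sketch, with the "similar length" tolerance exposed as a `slack` parameter of our own choosing:

```python
from collections import defaultdict

def group_by_length(vectors):
    """Bucket (question_id, vector) pairs by vector length."""
    groups = defaultdict(list)
    for qid, vec in vectors:
        groups[len(vec)].append((qid, vec))
    return groups

def candidates(groups, query_vec, slack=1):
    """Return only the entries whose length is within `slack` of the
    query's length; all other groups are skipped entirely."""
    n = len(query_vec)
    out = []
    for length in range(n - slack, n + slack + 1):
        out.extend(groups.get(length, []))
    return out
```

This pairs naturally with edit-distance matching: two sequences whose lengths differ by more than `slack` already have distance greater than `slack`, so skipping those groups cannot discard a close match.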
10. The question correcting method for oral calculation questions according to claim 1, wherein determining, in step S13, the test paper satisfying the preset condition as the target test paper matching the test paper to be searched comprises:
determining the test paper that occurs most frequently, and whose occurrence count is greater than a first preset threshold, as the target test paper matching the test paper to be searched.
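The voting rule of claim 10 can be sketched in a few lines. Each matched question contributes one "hit" for the paper its closest bank question came from; the paper with the most hits wins only if it clears the threshold, otherwise there is no target paper (the situation claim 2 handles). Names are ours.

```python
from collections import Counter

def pick_target_paper(paper_hits, threshold):
    """paper_hits: one paper id per matched question. Returns the most
    frequent paper id if its count exceeds `threshold`, else None."""
    if not paper_hits:
        return None
    paper, count = Counter(paper_hits).most_common(1)[0]
    return paper if count > threshold else None
```

So a sheet whose questions mostly match paper "A" selects "A", while a sheet with only scattered matches selects nothing and falls through to engine-only correction.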
11. The question correcting method for oral calculation questions according to claim 1, wherein, in step S11, detecting the image of the test paper to be searched and detecting the region of each question to be searched on the test paper to be searched comprises:
detecting the image of the test paper to be searched with a pre-trained detection model to detect the region of each question to be searched on the test paper to be searched, the detection model being a neural-network-based model.
12. The question correcting method for oral calculation questions according to claim 1, wherein, in step S11, recognizing the text content of the question stem in the region of each question to be searched comprises:
recognizing the text content of the question stem in the region of each question to be searched with a pre-trained second recognition model, the second recognition model being a neural-network-based model.
13. A question correcting device for oral calculation questions, the device comprising:
a detection and recognition module, configured to detect an image of a test paper to be searched, detect the region of each question to be searched on the test paper to be searched, determine the question type of each question to be searched, and recognize the text content of the question stem in the region of each question to be searched;
a question searching module, configured to obtain a feature vector of each question to be searched from the text content of its question stem, and search a question bank with that feature vector for the question closest to the question to be searched;
a test paper determining module, configured to collect the test papers on which the closest questions of all the questions to be searched are located, and determine a test paper satisfying a preset condition as the target test paper matching the test paper to be searched;
an oral calculation question determining module, configured to perform, for each question to be searched in the test paper to be searched whose question type is oral calculation question, shortest-edit-distance matching between its feature vector and the feature vectors of the questions in the target test paper so as to determine the target question matching it, and to determine the question to be searched as an oral calculation question to be corrected if the question type of the target question is oral calculation question and the position of the target question in the target test paper is the same as the position of the question to be searched in the test paper to be searched;
and an oral calculation question correcting module, configured to calculate, for each oral calculation question to be corrected, the question with a preset oral calculation engine and output the engine's calculation result as the answer of the question, thereby completing the correction of the oral calculation questions to be corrected on the test paper to be searched.
14. The question correcting device for oral calculation questions according to claim 13, wherein the test paper determining module is further configured to: when no target test paper satisfying the preset condition exists and the test paper to be searched contains questions to be searched whose question type is oral calculation question, determine each such question as an oral calculation question to be corrected, calculate each oral calculation question to be corrected with the preset oral calculation engine, and output its calculation result as the answer of the question, thereby completing the correction of the oral calculation questions to be corrected on the test paper to be searched.
15. The question correcting device for oral calculation questions according to claim 13, wherein the oral calculation question correcting module is further configured to check whether the calculation result of the oral calculation engine is consistent with the reference answer of the corresponding oral calculation question to be corrected on the target test paper, and if so, output the calculation result of the oral calculation engine as the answer of the oral calculation question to be corrected;
and the oral calculation question correcting module is further configured to output, when the calculation result of the oral calculation engine is inconsistent with the reference answer of the oral calculation question to be corrected on the target test paper, prompt information indicating the inconsistency, so as to draw the corrector's attention to that oral calculation question.
16. The question correcting device for oral calculation questions according to claim 13, wherein the preset oral calculation engine comprises a pre-trained first recognition model, the first recognition model being a neural-network-based model;
and the oral calculation question correcting module is specifically configured to recognize, through the pre-trained first recognition model, the numbers, letters, characters and calculation type in the oral calculation question to be corrected, the calculation types comprising: mixed operations, estimation, division with a remainder, fraction calculation, unit conversion, vertical-form calculation and step-by-step calculation; and to calculate, according to the recognized numbers, letters, characters and calculation type, the calculation result of the oral calculation question to be corrected.
17. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
and the processor is configured to implement the method steps of any one of claims 1-12 when executing the program stored in the memory.
18. A computer-readable storage medium, storing a computer program which, when executed by a processor, implements the method steps of any one of claims 1-12.
CN201811125659.4A 2018-09-26 2018-09-26 Question correcting method and device for oral calculation questions Active CN109189895B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201811125659.4A CN109189895B (en) 2018-09-26 2018-09-26 Question correcting method and device for oral calculation questions
US16/756,468 US11721229B2 (en) 2018-09-26 2019-09-11 Question correction method, device, electronic equipment and storage medium for oral calculation questions
EP19865656.3A EP3859558A4 (en) 2018-09-26 2019-09-11 Answer marking method for mental calculation questions, device, electronic apparatus, and storage medium
JP2021517407A JP7077483B2 (en) 2018-09-26 2019-09-11 Problem correction methods, devices, electronic devices and storage media for mental arithmetic problems
PCT/CN2019/105321 WO2020063347A1 (en) 2018-09-26 2019-09-11 Answer marking method for mental calculation questions, device, electronic apparatus, and storage medium


Publications (2)

Publication Number Publication Date
CN109189895A (en) 2019-01-11
CN109189895B (en) 2021-06-04

Family

ID=64906199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811125659.4A Active CN109189895B (en) 2018-09-26 2018-09-26 Question correcting method and device for oral calculation questions

Country Status (1)

Country Link
CN (1) CN109189895B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020063347A1 (en) * 2018-09-26 2020-04-02 杭州大拿科技股份有限公司 Answer marking method for mental calculation questions, device, electronic apparatus, and storage medium
CN109815955B (en) * 2019-03-04 2021-09-28 杭州大拿科技股份有限公司 Question assisting method and system
CN111666799A (en) * 2019-03-08 2020-09-15 小船出海教育科技(北京)有限公司 Method and terminal for checking oral calculation questions
CN112307858A (en) * 2019-08-30 2021-02-02 北京字节跳动网络技术有限公司 Image recognition and processing method, device, equipment and storage medium
CN110929582A (en) * 2019-10-25 2020-03-27 广州视源电子科技股份有限公司 Automatic correction method and device for oral calculation questions, storage medium and electronic equipment
CN112396009A (en) * 2020-11-24 2021-02-23 广东国粒教育技术有限公司 Calculation question correcting method and device based on full convolution neural network model

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164994A (en) * 2013-03-15 2013-06-19 南京信息工程大学 Operation exercise correction and feedback method
CN103646244A (en) * 2013-12-16 2014-03-19 北京天诚盛业科技有限公司 Methods and devices for face characteristic extraction and authentication
CN105955962A (en) * 2016-05-10 2016-09-21 北京新唐思创教育科技有限公司 Method and device for calculating similarity of topics
CN106060172A (en) * 2016-07-21 2016-10-26 北京华云天科技有限公司 Method for judging answer to test question and server
CN106251725A (en) * 2016-07-21 2016-12-21 北京华云天科技有限公司 Examination question corrects method and server
CN107818168A (en) * 2017-11-10 2018-03-20 广东小天才科技有限公司 Topic searching method, device and equipment
CN108537014A (en) * 2018-04-04 2018-09-14 深圳大学 A kind of method for authenticating user identity and system based on mobile device



Similar Documents

Publication Publication Date Title
CN109271401B (en) Topic searching and correcting method and device, electronic equipment and storage medium
CN109284355B (en) Method and device for correcting oral arithmetic questions in test paper
CN109189895B (en) Question correcting method and device for oral calculation questions
CN109583429B (en) Method and device for correcting application questions in test paper in batches
CN109670504B (en) Handwritten answer recognition and correction method and device
US11508251B2 (en) Method and system for intelligent identification and correction of questions
CN109817046B (en) Learning auxiliary method based on family education equipment and family education equipment
CN109712043B (en) Answer correcting method and device
US20220067416A1 (en) Method and device for generating collection of incorrectly-answered questions
US11721229B2 (en) Question correction method, device, electronic equipment and storage medium for oral calculation questions
CN109902285B (en) Corpus classification method, corpus classification device, computer equipment and storage medium
CN113111154B (en) Similarity evaluation method, answer search method, device, equipment and medium
CN106919551B (en) Emotional word polarity analysis method, device and equipment
CN112347997A (en) Test question detection and identification method and device, electronic equipment and medium
CN110852071B (en) Knowledge point detection method, device, equipment and readable storage medium
US11749128B2 (en) Answer correction method and device
CN112116181B (en) Classroom quality model training method, classroom quality evaluation method and classroom quality evaluation device
CN110895924B (en) Method and device for reading document content aloud, electronic equipment and readable storage medium
WO2023024898A1 (en) Problem assistance method, problem assistance apparatus and problem assistance system
CN114691907B (en) Cross-modal retrieval method, device and medium
CN111078921A (en) Subject identification method and electronic equipment
CN111079486A (en) Method for starting dictation detection and electronic equipment
CN113850235B (en) Text processing method, device, equipment and medium
CN108304366B (en) Hypernym detection method and device
CN111079489A (en) Content identification method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant