WO2018094925A1 - Method and system for generating and grading fill-in-the-blank test questions - Google Patents

Method and system for generating and grading fill-in-the-blank test questions (一种填空题试题的生成和判卷的方法及系统)

Info

Publication number
WO2018094925A1
WO2018094925A1 (PCT/CN2017/077790)
Authority
WO
WIPO (PCT)
Prior art keywords
blank
answer
test
fill
question
Prior art date
Application number
PCT/CN2017/077790
Other languages
English (en)
French (fr)
Inventor
刘佳
卢启伟
Original Assignee
深圳市鹰硕技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市鹰硕技术有限公司 filed Critical 深圳市鹰硕技术有限公司
Publication of WO2018094925A1 publication Critical patent/WO2018094925A1/zh

Classifications

    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B — EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 7/00 — Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B 7/02 — Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 — Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 — Services
    • G06Q 50/20 — Education

Definitions

  • The invention relates to the application of computer technology in the field of teaching, and in particular to a method and system for intelligently editing fill-in-the-blank questions, automatically generating fill-in-the-blank test questions, and automatically grading fill-in-the-blank test papers by computer, which can be applied to network teaching, multimedia teaching, question bank management, online assessment, after-class exercises, and the like.
  • CN101587512A and CN101587514A respectively disclose a scoring method and a scoring system for fill-in-the-blank questions in a computer-aided examination system. During scoring, test answers that differ from the preset standard answers are processed in batches: all papers to be scored are judged first and the scores are tallied afterwards, rather than grading each paper one by one in sequence, which reduces the graders' workload, improves their efficiency, and improves the consistency of the grading process. The two applications disclose comparing the blank items with the standard answers and also allow standard answers to be added according to the actual responses, so that one blank item can correspond to multiple standard answers; however, such correspondences are added manually after the fact during grading rather than being preset in advance, and the idea of setting wrong answers is not considered.
  • CN102542068A discloses a cloud-storage-based electronic test paper storage management method. By building an index table for the question bank, only one mapping table from a test paper to the question bank needs to be stored in the database to record a test paper; when a learner's answers are stored, they are compared with the standard answers, only the wrong answers are kept, and only an index from each wrong question to its storage location is stored in the database to record a test, thereby reducing the storage space needed for electronic test papers. That application aims to solve the storage problem of completed test papers: it establishes a mapping among answers, questions, and blank items and saves only the answer information, greatly reducing the amount of stored data; its concern is reducing the volume of stored information.
  • CN102591956A discloses a method for constructing a question bank for power systems. The knowledge covered by the question bank is classified by chapter, section, and subsection, a question table is created for each subsection, and a question number field is created in each table; when a question is entered, it is stored in the corresponding table according to the knowledge point it belongs to and is given a unique question number based on its features. Through these question numbers, questions with specific characteristics can be retrieved conveniently and quickly, which facilitates searching and reduces the time spent selecting questions. That application aims to solve the problem of numbering questions and improves retrieval efficiency through question ID information, but it does not address how the stored content is updated and refined.
  • CN103761101A discloses a method for varying the parameters of calculation questions in a power examination system. By generating different test papers according to preset conditions, the same knowledge point can be examined while different students receive different questions, which prevents cheating and better assesses how flexibly students have mastered the knowledge point. Its concern is generating different test papers through different arrangements of the questions, and its purpose is to prevent cheating.
  • CN105118348A discloses a question selection method based on a knowledge point system. Through systematic assessment of the knowledge points of each subject and an automatic evaluation system, an objective evaluation of the learning state is formed from the test results, so that learners can benefit from focused study of the knowledge point content and achieve targeted training goals.
  • CN102157084A discloses a method and device for setting fill-in-the-blank questions on an electronic whiteboard, comprising the following steps: a creation module creates the question data of a fill-in-the-blank question and stores it in a database; the question data includes a title and blank areas and, in the database, also consists of the following fields: question ID, question type ID, whether a correct answer is given, blank ID, and correct answer. For example, if a user wants to enter the fill-in-the-blank question "This is a fill-in-the-blank test", the database automatically assigns a Question ID to it; the value of the Title is "This is a ___ test", the Blank ID records the location of the blank area and is assigned a unique identifier such as A, the Blank Count is 1, the Correct Answer is "fill-in-the-blank", and Has Correct Answer is 1. If the user chooses not to enter the correct answer for the time being, the Correct Answer remains null and Has Correct Answer is 0. A conversion module converts the question data into electronic whiteboard objects, including a title object and blank objects, and loads them into the whiteboard; when the user clicks finish, the creation module maps the value of each input box to its field name and, combined with the question parameters, generates the question data of this fill-in-the-blank question and stores it in the database, completing the creation of the question. An answering unit includes an answer comparison unit for comparing the answer entered by the user with the correct answer and outputting the result, thereby completing the interactive whiteboard examination system and expanding its range of application.
  • In the prior art, the blank items of a fill-in-the-blank question and their answers are set once and fixed, and the answer setting only concerns the correct answer. This makes editing a fill-in-the-blank question complicated and time-consuming, and multiple answers for one blank cannot be quickly edited and matched; once a question has been formed, it is difficult to adjust and change as needed, so intelligent editing cannot be achieved.
  • A further problem of fill-in-the-blank questions is that the standard correct answer may not have a unique form of expression; setting only a single standard answer is not conducive to improving the efficiency of automatic grading. Each way of expressing a standard answer reflects, to a greater or lesser degree, the learners' problems and preferences with respect to that piece of knowledge, and discovering and understanding them allows teaching activities to be carried out better. In addition, a difficulty of automatic grading is that an answer that differs from the several preset correct answers is not necessarily wrong.
  • The present invention is directed at the above problems in the prior art. Mainly to satisfy the needs of K12 education, online education, online examinations, and online question banks for intelligent question setting and automatic grading of fill-in-the-blank questions, it provides a method and system that quickly and intelligently edits the stem of a fill-in-the-blank question, edits the question attribute information, automatically generates blank items, and automatically generates fill-in-the-blank test questions, so that one blank item corresponds to multiple different answers, thereby realizing intelligent question generation and automatic grading.
  • The technical solution aims to solve the problems of intelligent editing of fill-in-the-blank questions, automatic generation of fill-in-the-blank test questions, and automatic grading of fill-in-the-blank test papers, especially for a fill-in-the-blank question that includes at least two blank items, each of which may have several ways of expressing its answer. The improvements of the present invention include the following:
  • For each stem, the blank items are edited by defining data source rules, and at least one standard correct answer and at least one typical wrong answer are set for each blank item. The purpose of setting wrong answers is that a test answer only needs to be compared with the wrong answers: if they are the same, the test answer can be judged as definitely wrong, which not only facilitates statistics on wrong answers but also improves the efficiency of automatic grading.
  • Each stem is given a fill-in-the-blank question ID identifier containing editable attribute information, and each blank item is given a blank item ID identifier containing editable attribute information. When generating fill-in-the-blank test questions, a test question containing at least one of the blank items can be generated according to the set question generation rules by comparison with the above two kinds of ID identifiers. In other words, if a fill-in-the-blank question has two or three blank items, only one or two of them may be retained according to the generation rules when a test question is generated, so that two consecutively generated test questions can differ. Each time a test question is generated, information such as the generation time and the cumulative number of generations in the relevant ID identifiers is updated, and this identification information is matched against the question generation rules so that the blank items of two consecutively generated test questions can be different.
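As an illustration only, the following Python structures are one hypothetical way (the editor's sketch, not part of the disclosure) to represent the question ID identifier, the blank item ID identifiers, and their editable attribute information:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AttributeInfo:
    """Editable attribute information carried by an ID identifier."""
    sequence_no: int = 0
    knowledge_points: List[str] = field(default_factory=list)
    standard_score: float = 0.0
    difficulty: float = 0.5
    times_used: int = 0
    past_error_rate: float = 0.0
    last_used: str = ""          # ISO timestamp of the most recent use

@dataclass
class BlankItem:
    blank_id: str                # blank item ID identifier
    question_id: str             # association with the parent question ID
    correct_answers: List[str]   # first entry is the preferred answer
    wrong_answers: List[str]     # typical wrong answers
    attrs: AttributeInfo = field(default_factory=AttributeInfo)

@dataclass
class FillInBlankQuestion:
    question_id: str             # fill-in-the-blank question ID identifier
    stem: str                    # stem with blank markers, e.g. "This year is (1) ..."
    blanks: List[BlankItem] = field(default_factory=list)
    attrs: AttributeInfo = field(default_factory=AttributeInfo)
```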
  • After the test of a fill-in-the-blank test paper has been completed, the test answer of each blank item, together with the blank item ID and question ID identifiers, is extracted and compared with the preset correct and wrong answers: if a test answer is the same as at least one wrong answer, or is empty, the answer is judged wrong; if it is the same as at least one correct answer, it is judged correct. Test answers that still cannot be decided can be sorted by their cumulative counts and displayed at the position of the corresponding blank item in the question for the grader to check and judge. The wrong and correct answers confirmed by the grader are added to the existing wrong and correct answers and the corresponding relationship data is updated; the grader does not need to judge whether a particular paper as a whole is right or wrong, only whether these answers are correct or wrong. After such confirmation has been completed, the system automatically re-grades the unfinished test papers according to the updated answer correspondence database.
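A minimal sketch of this comparison rule, assuming the hypothetical BlankItem structure from the previous sketch:

```python
def judge_answer(test_answer: str, blank: "BlankItem") -> str:
    """Return 'wrong', 'correct', or 'undecided' for one blank item."""
    answer = (test_answer or "").strip()
    if answer == "" or answer in blank.wrong_answers:
        return "wrong"            # empty data, or matches a typical wrong answer
    if answer in blank.correct_answers:
        return "correct"          # matches at least one standard correct answer
    return "undecided"            # left for the grader to confirm
```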
  • According to one aspect of the invention, a method for generating fill-in-the-blank test questions is provided, comprising the following steps:
  • a stem editing step: editing the stem content of the fill-in-the-blank question, generating at least one blank item, and adding to the stem and the blank item a fill-in-the-blank question ID identifier and a blank item ID identifier, respectively, each containing attribute information;
  • an answer assignment step: setting at least one answer for the blank item, establishing the correspondence between the answer and the above two ID identifiers, and saving it to the fill-in-the-blank question database;
  • a test question generation step: according to the question generation rules, by comparing with the above two ID identifiers of the questions in the database, extracting the fill-in-the-blank questions and their corresponding answers that match the generation rules, generating the test questions, and saving them to the test question database.
  • the at least one answer includes at least one correct answer and at least one wrong answer.
  • In the stem editing step, there are at least two blank items; each blank item ID identifier is associated with the question ID identifier, and the correspondence between the blank items and the answers is established according to the question ID identifier and the blank item ID identifiers and saved to the fill-in-the-blank question database.
  • The preferred answer among the at least one correct answer is the stem content originally corresponding to the blank item; the other correct answers are answers equivalent or similar to the preferred answer; the at least one wrong answer is a typical wrong answer.
  • In the test question generation step, according to different generation rules, at least one of the at least two blank items of each question may be selected each time to generate a test question; for blank items that are not selected, the corresponding preferred answer is filled back into them, and they no longer form blanks in the generated test question.
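A sketch of this selective generation, reusing the hypothetical structures above; the "(1)", "(2)" blank markers and the random choice of which blanks to keep are illustrative assumptions, not requirements of the disclosure:

```python
import random

def make_test_question(question: "FillInBlankQuestion", keep: int = 1) -> str:
    """Keep `keep` blanks; fill the preferred answer back into the others."""
    chosen = set(random.sample(range(len(question.blanks)),
                               k=min(keep, len(question.blanks))))
    stem = question.stem
    for i, blank in enumerate(question.blanks):
        marker = f"({i + 1})"                       # assumed marker for blank i+1
        if i in chosen:
            stem = stem.replace(marker, "____")     # remains a blank in the test question
        else:
            # not selected: restore the preferred (first) correct answer
            stem = stem.replace(marker, blank.correct_answers[0])
    return stem
```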
  • After a test question has been generated, the attribute information in the question ID identifier in the fill-in-the-blank question database is updated according to the blank items actually generated.
  • The attribute information includes: a sequence number, the knowledge points involved, a standard score, a difficulty coefficient, the number of times used, the past error rate, and the time of the most recent use.
  • The test question generation rules are set according to the needs of test question generation, and the rule content corresponds to at least one item of attribute information of the fill-in-the-blank question and/or the blank items.
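How a generation rule might be checked against that attribute information is sketched below; the rule keys ("knowledge_point", "max_difficulty", "min_days_since_use") are hypothetical examples and not terms used by the disclosure:

```python
from datetime import datetime, timedelta

def matches_rule(question: "FillInBlankQuestion", rule: dict) -> bool:
    """Check a question's attribute information against one generation rule.

    Example rule: {"knowledge_point": "chronology",
                   "max_difficulty": 0.8,
                   "min_days_since_use": 7}
    The same check could be applied per blank item as well.
    """
    a = question.attrs
    if "knowledge_point" in rule and rule["knowledge_point"] not in a.knowledge_points:
        return False
    if "max_difficulty" in rule and a.difficulty > rule["max_difficulty"]:
        return False
    if "min_days_since_use" in rule and a.last_used:
        last = datetime.fromisoformat(a.last_used)
        if datetime.now() - last < timedelta(days=rule["min_days_since_use"]):
            return False            # used too recently, avoid repetition
    return True
```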
  • In the stem editing step, editing the stem content includes defining data source rules to standardize the expression of the stem content data, or editing the stem content with a middleware button integrated into a text editor.
  • The defined data source rules are conventions defined by arbitrary symbols, numbers, letters, or characters, including ( ), ##, ||, <>, 《》, {}, 「」, 〖〗, 『』, 〈〉, AA, and the like.
  • According to another aspect of the invention, a system for generating fill-in-the-blank test questions is provided, comprising: a memory for storing program code for executing the method described above; and a processor for executing the program code.
  • According to a further aspect of the invention, a method is provided for grading a fill-in-the-blank test paper composed of test questions generated by the above generation method, comprising the following steps:
  • a test paper generation step: selecting, from the test question database, at least one fill-in-the-blank test question to form a fill-in-the-blank test paper or the fill-in-the-blank section of a test paper;
  • a test answer extraction step: after the generated test paper has been completed, extracting the question ID identifiers and blank item ID identifiers of the questions in the paper, together with the test answers of the corresponding blank items;
  • a test answer comparison step: comparing the extracted information with the stored answers; if a test answer is the same as the at least one wrong answer, or the test answer is empty, the answer is judged wrong; if the test answer is the same as the at least one correct answer, the answer is judged correct;
  • a test answer confirmation step: for test answers whose correctness cannot be determined by the above steps, displaying them in sequence on the corresponding blank items of the corresponding stems for the grader to verify and confirm one by one, first updating the wrong answers and correct answers according to the confirmation results, and then repeating the test answer comparison step;
  • a test answer statistics step: performing statistical analysis on the test answers and updating the relevant attribute information in the corresponding question ID identifiers and blank item ID identifiers.
  • For test answers whose correctness cannot be determined by the above steps, identical test answers are sorted by how many times they occur and displayed in the corresponding blank items, so that the grader can refer to the stem content while verifying whether they are correct.
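One possible way of grouping and ranking those undecided answers for the grader is sketched below; the data shapes are assumptions for illustration:

```python
from collections import Counter
from typing import Dict, List, Tuple

def rank_undecided(undecided: Dict[str, List[str]]) -> Dict[str, List[Tuple[str, int]]]:
    """Group identical undecided answers per blank and sort by frequency.

    `undecided` maps a blank_id to the raw undecided test answers collected
    from all papers; the result is what would be shown to the grader,
    most frequent answer first.
    """
    return {
        blank_id: Counter(answers).most_common()   # [(answer, count), ...]
        for blank_id, answers in undecided.items()
    }
```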
  • According to yet another aspect of the invention, a grading system for fill-in-the-blank test papers is provided, comprising:
  • a memory for storing program code for executing the grading method described above; and
  • a processor for executing the program code.
  • According to a further aspect of the invention, a computer program is provided, comprising computer program code which, when loaded into a computer system and executed, performs the steps of the method described above.
  • According to a further aspect of the invention, a computer readable storage medium is provided, comprising the aforementioned computer program.
  • Through the above technical solution, the invention realizes intelligent editing of fill-in-the-blank questions, in particular of questions that contain multiple blank items whose standard answers can be expressed in several ways: multiple correct answers and multiple wrong answers are set, and with the help of the editable attribute information identifiers, at least one of the multiple blank items can be selected by comparison with the question generation rules. That is, multiple blank items can be set when editing a question, but a generated test question may contain only one or a few of them rather than all of them, and by setting generation rules two consecutively generated test questions can be made different.
  • Having each blank item correspond to at least one wrong answer and at least one correct answer overturns the prior-art practice of only judging right or wrong and only attending to correct answers while ignoring typical wrong answers, which makes it possible to grasp quickly and effectively which answers were right and which were wrong, while grading automatically in a fast and efficient way.
  • Through further verification and confirmation of the test answers that cannot be judged automatically, the relational database mapping blank items to answers is updated in time, so that the system can gradually and continuously improve the efficiency of automatic grading.
  • FIG. 1 is a schematic diagram of functional modules of the present invention.
  • FIG. 2 is a schematic flow chart showing the operation of the present invention.
  • As shown in FIG. 1, the functional modules of the present invention mainly include an editing module 10, a test question generating module 20, and a grading module 30.
  • The editing module 10 is configured to edit a fill-in-the-blank question into a question consisting of a stem and blank items, and specifically includes an editing start module 101, a stem editing module 102, an answer assignment module 103, and a question storage module 104.
  • The editing start module 101 is configured to start the editing of a fill-in-the-blank question and to retrieve the original stem information to be edited into the question.
  • The stem editing module 102 is configured to edit the original stem information, generate an appropriate number of blank items, and produce a fill-in-the-blank question consisting of the stem (the body of the question) and at least two blank items. An ID identifier containing attribute information is added for each question and each blank item; this attribute information is editable and includes the sequence number, the knowledge points involved, the standard score, the difficulty coefficient, the number of times used, the past error rate, the time of the most recent use, and so on.
  • The answer assignment module 103 is configured to set at least one answer for each blank item and to establish the correspondence between the answers and the above two ID identifiers, forming the correspondence among the question, the blank items, and the answers.
  • the fill-in-the-blank question storage module 104 is configured to store the data of the fill-in-the-blank question in the form of a database, including the fill-in-the-blank question and its answer and correspondence.
  • The test question generating module 20 is configured to generate fill-in-the-blank test questions according to the generation conditions, and specifically includes a rule setting module 201, a comparison module 202, a generating module 203, and a test question storage module 204.
  • The rule setting module 201 is configured to set the generation rules for test questions. The generation rules correspond to the relevant attribute information in the above two ID identifiers, such as the standard score, knowledge points, and difficulty coefficient, and can also express requirements such as not generating the same test question twice or generating different blank items for the same question; by setting rules and then comparing them with the attribute information of the questions, the required test questions are generated. For example, the blank item used most recently can be avoided when the present test question is generated.
  • The comparison module 202 is configured to compare the question generation rules with the attribute information of the questions and blank items and to produce the test questions that satisfy the conditions.
  • The generating module 203 is configured to read the relevant information from the fill-in-the-blank question database and produce the answers corresponding to the currently generated test questions and blank items; it is invoked when a fill-in-the-blank test paper, or the fill-in-the-blank section of a test paper, is generated.
  • the test question storage module 204 is configured to store the information of the current fill-in-the-blank question and its answer in the manner of the test question database.
  • The grading module 30 is configured to generate test papers and, after a test-taker completes a paper, to judge the test answers against the stored answers; it specifically includes a test paper generation module 301, a test answer extraction module 302, a test answer comparison module 303, a test answer confirmation module 304, and a test answer statistics module 305.
  • The test paper generation module 301 is configured to generate a fill-in-the-blank test paper, which may be a paper consisting purely of fill-in-the-blank questions or the fill-in-the-blank section of a comprehensive paper.
  • the test answer extraction module 302 is configured to extract data information of the test questions, the blank-filled items, and the test answers and corresponding relationship information after the tester completes the test.
  • The test answer comparison module 303 is configured to compare the extracted information with the answer information of the current questions: if a test answer is the same as the at least one wrong answer, or is empty, the answer is judged wrong; if it is the same as the at least one correct answer, it is judged correct; if no judgment can be made, the answer information is extracted for confirmation.
  • The test answer confirmation module 304 is configured to display the extracted undecidable answers on the corresponding blank items for the grader to judge as correct or wrong. When displayed, each test answer is followed by two buttons, 'correct' and 'wrong'; if the grader judges an answer correct, clicking the correct button adds it to the existing answers and updates the corresponding answer database, and the same is done for test answers judged to be wrong.
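A sketch of folding those button clicks back into the answer sets; the `confirmed` mapping is a hypothetical representation of the grader's clicks:

```python
def apply_confirmations(blank: "BlankItem", confirmed: dict) -> None:
    """Fold the grader's confirmations back into the answer sets.

    `confirmed` is a hypothetical mapping {answer_text: True/False},
    True meaning the grader clicked the 'correct' button.
    """
    for answer, is_correct in confirmed.items():
        target = blank.correct_answers if is_correct else blank.wrong_answers
        if answer not in target:
            target.append(answer)
    # After updating, previously undecided papers can be re-judged automatically,
    # e.g. by re-running judge_answer() from the earlier sketch over them.
```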
  • The statistics module 305 is configured to perform statistical analysis on the test answers according to the updated answer database and to update the relevant attribute information in the corresponding question ID identifiers and blank item ID identifiers.
  • The statistics module can count how many times each test question has been used and answered: being 'used' means being included in a test paper, which counts once, while the number of answers means how many people took the test with that paper, i.e. how many people answered the question. The usage count, the answer count, and the error rate are computed and written back into the attribute information of each question and each blank item involved, and the difficulty coefficient of a question can be adjusted dynamically according to the error rate derived from historical data.
  • This dynamic management of the attribute information enables fine-grained management of fill-in-the-blank questions and blank items.
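A sketch of such a statistics update; the particular adjustment formula (a moving average toward the observed error rate) is only an illustrative choice, since the disclosure does not fix one:

```python
def update_statistics(blank: "BlankItem", times_answered: int, wrong_count: int) -> None:
    """Accumulate usage/answer counts and adjust the difficulty coefficient."""
    blank.attrs.times_used += 1
    if times_answered:
        error_rate = wrong_count / times_answered
        blank.attrs.past_error_rate = error_rate
        # Nudge the difficulty coefficient toward the historical error rate.
        blank.attrs.difficulty = 0.7 * blank.attrs.difficulty + 0.3 * error_rate
```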
  • The fill-in-the-blank question database and the test question database can be integrated into a single database, marked and distinguished by data fields, with the above functions implemented by the same module.
  • FIG. 2 is a schematic diagram of the operation flow of the present invention. The method for generating fill-in-the-blank test questions and the method for grading fill-in-the-blank test papers enable quick editing of questions, quick insertion and addition of blank items, automatic generation of the blank item sequence numbers in the stem, and the definition and setting of multiple answers for each blank item, including at least one correct answer and at least one wrong answer; the main steps are an editing step 100, a test question generation step 200, and a grading step 300.
  • the editing step 100 includes an editing initiation step 1001, a stem editing step 1002, and an answer assigning step 1003.
  • Editing start step 1001: retrieving the original stem information to be edited into a fill-in-the-blank question. The original stem information is unedited information containing both the stem content and the content of the blank items; a blank item is usually the blank formed by removing part of the original stem information.
  • Generally, a fill-in-the-blank question includes the stem content and the corresponding blank items. In this description the stem is sometimes used to stand for, or is equated with, the fill-in-the-blank question purely for convenience of description; there is no essential difference, and it is intended to denote the state before the stem or question has been turned into a test question for an examination or assembled into a test paper, i.e. while it can still be edited and its blank items can still be selected.
  • Stem editing step 1002: editing the stem content of the fill-in-the-blank question, generating at least one blank item, and adding to the stem and the blank items a question ID identifier and blank item ID identifiers, respectively, each containing attribute information.
  • In the stem editing step, editing the stem content includes defining data source rules to standardize the expression of the stem content data, or editing the stem content with a middleware button integrated into a text editor.
  • Data source rules: 1) the format of each blank can be defined, for example enclosed in 「」 or 〖〗, with the answers inside separated by '|'; 2) attribute markers can be defined, for example the question difficulty marked with N, the category marked with F, answer 1 marked with D1, blank 1 marked with nu1, and so on; 3) the data source format parsing rules for the entire fill-in-the-blank question can be defined.
  • Middleware button: a button can be defined in the editor which, when clicked, inserts a new blank item; for example, 【】 can be defined as the insert button, and clicking it triggers the currently defined data source insertion rule.
  • The data source rules are conventions defined by arbitrary symbols, numbers, letters, or characters, including ( ), ##, ||, <>, 《》, {}, 「」, 〖〗, 『』, 〈〉, AA, and the like.
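A sketch of parsing a stem marked up with one such rule (the 「…」 blank format with '|'-separated answers); the concrete function and return shape are the editor's assumptions:

```python
import re

def parse_stem(marked_stem: str):
    """Parse a stem using the 「…」 blank format, answers separated by '|'.

    Returns the display stem with numbered blanks plus the answer lists.
    """
    answers = []

    def repl(match):
        answers.append(match.group(1).split("|"))   # all accepted answers for this blank
        return f"({len(answers)})"                  # replace the blank with its number

    display_stem = re.sub(r"「(.*?)」", repl, marked_stem)
    return display_stem, answers

stem, ans = parse_stem("This year is 「2016」, the Year of the 「monkey|Monkey」.")
# stem == "This year is (1), the Year of the (2)."
# ans  == [["2016"], ["monkey", "Monkey"]]
```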
  • There are at least two blank items; each blank item ID identifier is associated with the question ID identifier, and the correspondence between the blank items and the answers is established according to the question ID identifier and the blank item ID identifiers and saved to the fill-in-the-blank question database.
  • Answer assignment step 1003: setting at least one answer for each blank item and establishing the correspondence between the answers and the above two ID identifiers, forming the fill-in-the-blank question; the at least one answer includes at least one correct answer and at least one wrong answer.
  • The preferred answer among the at least one correct answer is the stem content originally corresponding to the blank item; the other correct answers are answers equivalent or similar to the preferred answer; the at least one wrong answer is a typical wrong answer.
  • the test question generation step 200 includes a rule setting step 2001, a comparison step 2002, and a generation step 2003.
  • The rule setting step 2001 is used to produce the question generation rules, which are set according to the needs of question generation; the rule content corresponds to the relevant attribute information of the stem and/or the blank items. For example, to avoid generating the same test question twice, or to generate different blank items for the same question, rules can be set and then compared with the attribute information of the questions to generate the required test questions; it can thus be arranged, for instance, that the blank item used most recently is not repeated this time, or that a blank item whose error rate exceeds a certain ratio is not used when the present test question is generated.
  • The comparison step 2002 is used to generate, according to the question generation rules, the test questions that satisfy the generation rules by comparison with the above two kinds of ID identifiers of the questions.
  • According to different generation rules, at least one of the at least two blank items of each question may be selected each time to generate a test question; for blank items that are not selected, the corresponding preferred answer is filled back into them, and they no longer form blanks in the generated test question.
  • The generation step 2003 is used, after a test question has been generated, to update the attribute information in the question ID identifier in the fill-in-the-blank question database according to the blank items actually generated; the attribute information includes the sequence number, the knowledge points involved, the standard score, the difficulty coefficient, the number of times used, the past error rate, and the time of the most recent use.
  • In the test question generation step, each time a test question is generated, the relevant attribute information in the ID identifiers is updated, including the time of the most recent use and the number of times used.
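A sketch of that per-generation bookkeeping, again using the hypothetical structures introduced earlier:

```python
from datetime import datetime

def record_generation(question: "FillInBlankQuestion", used_blank_ids: set) -> None:
    """Update the ID identifiers' attribute information after one generation."""
    now = datetime.now().isoformat(timespec="seconds")
    question.attrs.times_used += 1
    question.attrs.last_used = now
    for blank in question.blanks:
        if blank.blank_id in used_blank_ids:        # only blanks kept in this test question
            blank.attrs.times_used += 1
            blank.attrs.last_used = now
```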
  • The grading step 300 includes a test paper generation step 3001, a test answer extraction step 3002, a test answer comparison step 3003, a test answer confirmation step 3004, and a test answer statistics step 3005.
  • Test paper generation step 3001: selecting at least one fill-in-the-blank test question to form a fill-in-the-blank test paper or the fill-in-the-blank section of a test paper;
  • Test answer extraction step 3002: after the generated test paper has been completed, extracting the question ID identifiers and blank item ID identifiers of the questions in the paper, together with the test answers of the corresponding blank items;
  • Test answer comparison step 3003: comparing the extracted information with the stored answers; if a test answer is the same as the at least one wrong answer, or the test answer is empty, the answer is judged wrong; if the test answer is the same as the at least one correct answer, the answer is judged correct;
  • Test answer confirmation step 3004: for test answers whose correctness cannot be determined by the above steps, displaying them in sequence on the corresponding blank items of the corresponding stems for the grader to verify and confirm one by one, first updating the wrong and correct answers according to the confirmation results and then repeating the test answer comparison step; for such undecidable answers, identical test answers are sorted by how many times they occur and displayed in the corresponding blank items, so that the grader can refer to the stem content while verifying whether they are correct;
  • Test answer statistics step 3005: after the confirmation of the test answers is completed, performing statistical analysis on the test answers and updating the relevant attribute information in the corresponding question ID identifiers and/or blank item ID identifiers in the fill-in-the-blank question database and/or the test question database.
  • The statistics step 3005 can count how many times each question has been used and answered: being 'used' means being included in a test paper, which counts once, while the number of answers means how many people took the test with that paper, i.e. how many people answered. The usage count, the answer count, and the error rate are computed and written back into the attribute information of each question and each blank item involved, and the difficulty coefficient of a question can also be adjusted dynamically according to the error rate statistics of the continuously accumulating historical data. This dynamic management of the attribute information enables fine-grained management of fill-in-the-blank questions and blank items.
  • As an example of editing a question, the retrieved original stem information is "This year is the Year of the Monkey, 2016, and Zhang San and Li Si were born this year."
  • The original stem information is edited by defining data source rules (standardized data expressions); the data source rules are conventions defined by arbitrary symbols, numbers, letters, or characters, such as ( ), ##, ||, <>, 《》, {}, 「」, 〖〗, 『』, 〈〉, AA, and so on, or they can be integrated into the text editor in the form of middleware buttons so that more users can work with them quickly.
  • "2016 monkey" and "Zhang San and Li Si" are selected; in this example a '[]' button has been added to the text editor to trigger the middleware, and clicking '[]' pops up an answer insertion window whose preferred standard correct answer is the corresponding content selected from the original stem information.
  • In the pop-up blank item setting interface, two 'blanks' (blank items) can be generated for the stem. In this interface, several standard correct answers and typical wrong answers can be set for the current blank according to the real situation; after completion, for example, "2016" and "monkey" can be set as standard correct answers and "2015" and "sheep" as typical wrong answers, while "Zhang San" and "Li Si" can be set as standard correct answers and "Wang Wu" as a typical wrong answer.
  • The display form of the stem is then generated automatically according to the data source rules, and the ID identifiers containing the corresponding attribute information are added; through these identifiers the question can be kept in memory in the form of a fill-in-the-blank question database, which is convenient for editing and management. The attribute information may include the question number, the knowledge points involved, the difficulty coefficient, the number of times used, the time of the last use, and so on.
  • When each blank of the stem is edited, each blank is marked with a natural number. When the 'generate stem' button is triggered, the program uses string replacement to first generate the replacement items: for example, the first '[]' (the array containing all of its answers) is replaced with ①, the second '[]' (the array containing all of its answers) is replaced with ②, and so on (the numbering scheme can follow any defined rule).
  • String replacement is then carried out, so that after the question has been edited and is saved, the answer content is automatically hidden according to the defined generation rules and replaced with the replacement items (for example, an underline and the sequence number of each blank). For example: "This year is the year (1), and (2) is the mascot of the year?" The form of the generated blanks is not limited to this; ①, ②, ⒈, ⒉, α, β, or any other representation of a blank item can be used.
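A sketch of that string replacement step; the '[]' insert-button format and the ①②… numbering follow the example above, but any numbering rule could be substituted:

```python
import re

def hide_answers(edited_stem: str) -> str:
    """Replace each '[answer1|answer2]' group with an underline and its serial number."""
    circled = "①②③④⑤⑥⑦⑧⑨⑩"
    counter = {"n": 0}

    def repl(_match):
        i = counter["n"]
        counter["n"] += 1
        return f"____{circled[i]}"                  # underline plus the blank's serial number

    return re.sub(r"\[(.*?)\]", repl, edited_stem)

print(hide_answers("This year is [2016], the Year of the [monkey|Monkey]."))
# -> "This year is ____①, the Year of the ____②."
```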
  • The method and system of the present invention can be integrated into and used with any client, web page, or any system, project, or product that requires online editing and processing of fill-in-the-blank question setting, test paper assembly, and grading.
  • The method and system of the present invention can be implemented by computer program code; the device that executes the code includes a memory and a processor, used respectively for storing and executing the code.

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • Educational Administration (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Technology (AREA)
  • Primary Health Care (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • General Health & Medical Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A method and system for generating and grading fill-in-the-blank test questions, intended to solve the problems of intelligent editing of fill-in-the-blank questions, automatic generation of fill-in-the-blank test questions, and automatic grading of fill-in-the-blank test papers, in particular for the case where one fill-in-the-blank question includes at least two blank items and the correct answer of each blank item can be expressed in several ways. The fill-in-the-blank question includes a stem and blank items embedded in it. For each stem, the blank items are edited by defining data source rules; for each blank item, at least one standard correct answer and at least one typical wrong answer are set; each stem is given an ID identifier containing editable attribute information, and each blank item is given an ID identifier containing editable attribute information. When a fill-in-the-blank test question is generated, a test question containing at least one blank item is produced according to the set question generation rules by comparison with the above two kinds of ID identifiers. At least one such test question forms a fill-in-the-blank test paper or the fill-in-the-blank section of a test paper; the test questions are graded automatically according to the answers, and fast and effective statistics are produced, covering both correct and wrong answers.

Description

一种填空题试题的生成和判卷的方法及系统 技术领域
本发明涉及计算机技术在教学领域的应用,特别是涉及一种利用计算机技术智能编辑填空题、自动生成填空题试题以及对填空题试卷自动判卷的方法及系统,可以应用于网络教学、多媒体教学、题库管理、在线评测、课后练习等。
背景技术
在利用计算机进行出题和判卷的过程中,相比于已经比对智能化的选择题和判断题的编辑、生成和判卷,有关填空题的编辑、试题的生成和试卷的判卷的智能化和自动化一直不能令人完全满意。在计算机技术迅猛发展的今天,教师和其他出题人员对于填空题的处理,还主要停留在人工操作的层面,由出题者手动编辑试题,设置填空项,判卷者对试卷进行逐份判断,计算机主要作为媒介工具使用,智能化和自动化处理方面还有改进的余地。
随着这方面需求的增加,围绕快速高效的处理填空题这个主题,现有技术中已经有一些专利申请进行了有益的尝试,比如:
CN101587512A和CN101587514A分别公开了一种用于计算机辅助考试系统的填空题的判分方法和判分系统,对填空题进行判分的过程中,对于与预置标准答案不相同的测试答案采用批量处理的方法判分,对所有需判分试卷的判分结束后再进行统分,而不是逐一对每份试卷进行顺序判分,这使得判卷者可以减轻劳动强度、提高工作效率,提高判卷过程的一致性。上述两件申请公开了,将填空项与标准答案比对判断,还可以根据实际回答的情况,增加标准答案,使得一个填空项可以对应多个标准答案,但是这样的对应关系是在判卷的过程中,事后人工确认后添加的,不是事先预设的,而也没有关注到设置错误答案的构思。
CN102542068A公开了一种基于云存储的电子试题存储管理方法,通过为试题库建立索引表,仅在数据库中存储一张试卷到题库的映射表便可完成一份试卷的存储,在存储学习者答题时,将学习者答题与标准答案对比,将错误答题存储,仅在数据库中存储一张错误试题到存储位置的索引完成一份试题的记录,从而降低了电子试卷的存储空间。上述申请旨在解决完成测试的试卷内容的存储问题,采用的是建立答案与试题和填空项的映射关系,只保存答案信息,大大降低存储信息的数量,其关心的是降低信息存储数量。
CN102591956A公开了一种电力系统试题库的构建方法,将试题库所涉及的知识按章、节、小节进行分类,并为每一章中的每一节的每一小节创建一个试题表,在每个试题表中创建一个试题编号字段,试题录入时根据试题知识点出处将试题存入相应试题表,并根据试题特征为试题编制一个唯一的试题编号,通过各试题的试题编号,能方便快速的检索出包含特定特征的试题,给试题检索带来方便,能减少出题人员选题所花费的时间。上述申请旨在解决试题标记编号的问题,通过试题ID信息,提高检索效率,方便出题人选题,但未涉及这些内容如何进行更新完善的构思。
CN103761101A公开了一种电力考试系统中计算题参数可变的方法,通过根据预设条件生成不同的试题试卷,实现了对于同一个知识点的考察,不同学员可得到不一样的题目,防止作弊,更利于考察学员对知识点的灵活掌握程度,其关注的是通过试题的编辑顺序不同生成不同的试卷,其目的在于防作弊。
CN105118348A公开了一种基于知识点体系的学科选题方法,通过对各学科知识点的系统性测评,采用系统自动评判体系,根据测试结果,形成学习状态的客观评价,使得学习者能获得受益匪浅的知识点内容学习,达到有的放矢的专项训练目标。
CN102157084A公开了一种电子白板的填空题设置方法及装置,包括以下步骤:创建模块创建填空题的问题数据,将该问题数据存入数据库,所述问题数据包括题目和空白区域,填空题的问题数据在数据库中还由以下表项构成:试题ID、试题类型ID、是否给出正确答案、空ID及正确答案的表项。例如,用户要输入这样一道填空题:“这是一个填空题测试”,数据库自动为这道填空题分配一个试题参数(Question ID),题目(Title)的值就为“这是一个___题测试”,空参数(Blank ID)记录空白区域的位置并被数据库分配了唯一标识如A,空的个数(Blank Count)的值为1,正确答案(Correct Answer)的值为“填空”,是否有正确答案(Has Correct Answer)的值为1。当然,如果用户选择暂时不输入该道填空题的正确答案,则设置正确答案(Correct Answer)依然为空(null),那么是否有正确答案(Has Correct Answer)的值为0。转化模块将所述问题数据转化为电子白板对象,并加载到电子白板中,所述电子白板对象包括题目对象和空白对象,当用户点击完成时,创建模块将各个输入框中的参数值对应其表项名称并结合这道填空题的试题参数共同生成了这道填空题的问题数据,并存入数据库中,从而完成了一道填空题的创建。答毕单元包括答案比较单元,用于将所述用户输入的答案与正确答案进行比较,并输出比较结果,由此实现了交互式电子白板考试系统的完整性,扩大了交互电子白板考试系统的应用范围。
在现有技术中,关于填空题的编辑、填空题试题的生成和填空题试卷的判卷的处理存在以下问题:
对于一道填空题的填空项及其答案的设置是固定的,填空项的设置是固定,答案的设置只是关注到了正确答案,使得编辑填空题步骤复杂、耗时长且无法快速实现一空多个答案的编辑和匹配,一旦成题,很难根据需要进行调整和改变,无法实现智能编辑。
由于每道填空题的填空项是固定不变的,生成的填空题也是一成不变的,这样的填空题可编辑性差,只能人工重新设定填空项。另一方面,学习者在答题时,有时候只是记住了答案,并没有真正了解问题,没有掌握应该掌握的知识点,这样的情况在平时的练习和训练中尤其突出,到了真正考试或者应用知识点解决问题时,即使题干内容不变,稍微改变填空项,就变得不知所措,这不利于提高学习和教学效率,不能实现通过做试题巩固知识点的目的。
填空项与标准答案的设置只是考虑了试题的考试属性,实际上大部分试题都是在平时的练习或测试中,不但需要测试掌握知识的情况,更关心的是学习者出错的地方在哪里?错误的内容是什么?了解这些没有掌握的内容。
填空题的本身问题还有就是,其标准的正确答案的表达方式可能不唯一,只设置单一的标准答案,不利于自动判卷效率的提高。每种标准答案的表达方式都反映了学习者掌握这方面知识的或多或少的问题和偏好,发现和了解这样的问题才能更好的开展教学活动。除此之外,自动判卷的难点还在于,即使与设定的多个正确答案不同,也不一定是错误的。
本发明针对现有技术存在的上述问题,主要为了满足K12教育、在线教育、在线考试、在线题库对于填空题的智能化、自动化出题和自动化判卷的需求,提供一种快速智能对填空题的题干进行编辑,试题属性信息进行编辑,填空项自动生成,填空题试题自动生成,形成一填空项对应多个不同答案,从而实现智 能出题自动判卷的方法或系统。
发明内容
根据本发明的技术方案,旨在解决填空题的智能编辑、填空题试题的自动生成和填空题试卷的自动判卷问题,特别是针对一道填空题包括至少两个填空项,并且每个填空项的答案的表述方式有多种的情况,本发明的改进包括以下内容:
对于每个题干,通过定义数据源规则编辑填空项,对于每个填空项设置至少一个标准的正确答案和至少一个典型的错误答案,设定错误答案的目的在于,只要测试答案与错误答案比对,如果相同,就可以判定测试答案肯定是错误的,这样不但方便统计测试答错的情况,还可以提高自动判卷的效率。
每个题干设置包括可编辑的属性信息的填空题ID标识,每个填空项设置可编辑的属性信息的填空项ID标识,在生成填空题试题时,可以根据设定的试题生成规则,通过与两种所述ID标识进行对比,生成包含至少一个所述填空项的填空题。也就是说,如果一道填空题具有两个或者三个填空项,在生成试题时,可以根据生成规则,只保留其中一个或两个填空项,从而使得两次连续生成的填空题可以不同。每生成一次填空题,就更新有关的ID标识中的生成时间和累计生成的次数等信息,这些标识信息用于匹配试题生成规则,使得连续两次生成的填空题试题的填空项可以不同。
在填空题试题完成测试后,提取各个填空项的测试答案、以及填空项和填空题ID标识,与设定的正确答案和错误答案进行比对,只要与至少一个错误答案相同或者测试答案为空数据就判定答题错误,与至少一个正确答案相同就判断答题正确,对于仍旧不能判定的测试答案,可以按照累计个数顺序排序,显示到填空题的相应填空项的位置,供判卷者对照判断,确定正确与否,判卷者做出判断之后的错误答案和正确答案可以添加到已有的错误答案和正确答案中,更新有关对应关系数据,判卷者不需要具体判断某份试卷的正确与否,只需要对这些答案进行判断,是否属于正确还是错误答案;在完成这样的判定完成后,系统根据更新的答案试题对应关系数据库,对未完成的试卷进行重新的自动判卷。
根据本发明的一个方面,提供一种填空题试题的生成方法,包括以下步骤:
题干编辑步骤:对填空题的题干内容进行编辑,产生至少一个填空项,对所述题干和所述填空项分别添加包含属性信息的填空题ID标识和填空项ID标识;
答案赋予步骤:为所述填空项设定至少一个答案,建立所述答案与以上两种所述ID标识的对应关系,保存至填空题数据库;
试题生成步骤:根据试题生成规则,通过与所述数据中的填空题的以上两种ID标识进行比对,提取符合所述生成规则的填空题及其对应的答案,并且生成试题,并保存至试题数据库。
在所述答案赋予步骤中,所述至少一个答案包括至少一个正确答案和至少一个错误答案。
在所述题干编辑步骤中,所述填空项至少为两个,所述填空项ID标识与所述填空题ID标识存在关联关系,并根据所述填空题ID标识和填空项ID标识,建立所述填空项和所述答案之间的对应关联关系,并且保存至所述填空题数据库。
所述至少一个正确答案的首选答案是所述填空项原先对应的所述题干的内容,所述至少一个正确答案的其他答案是与所述首选答案等同或近似的答案,所述至少一个错误答案为典型的错误答案。
在所述试题生成步骤中,根据不同的试题生成规则,每次可以从每道填空题的所述至少两个填空项中选择至少一个填空项生成填空题试题,对于没有被选中的填空项,将相应的首选答案填充回所述没有被选中的填空项,在所述生成的填空题试题中不再构成填空项。
在生成填空题试题后,根据实际生成的填空项内容,更新所述填空题数据库中的填空题ID标识中的属性信息。
所述属性信息包括:顺序编号、涉及的知识点、标准分值、难度系数、被采用的次数、以往的错误率、最近一次被采用的时间。
所述试题生成规则是根据试题生成需要设定的,其中的规则内容对应于所述填空题和/或所述填空项的至少一个属性信息。
在所述题干编辑步骤中,对所述题干内容的编辑包括,采用定义数据源规则以规范题干内容数据表达,或者,采用集成至文字编辑器的中间件按钮对题干内容进行编辑。
所述定义数据源规则是由任意符号、数字、字母、或文字等定义的规范,包括()、##、||、<>、《》、{}、「」、〖〗、『』、〈〉、和AA等。
根据本发明的另一方面,提供一种填空题试题的生成系统,包括:
存储器,用于存储用来执行如上所述的方法的程序代码;
处理器,用于执行所述程序代码。
根据本发明的再一方面,提供一种由根据上述的填空题试题的生成方法生成的填空题试题构成的填空题试卷的判卷方法,包括以下步骤:
填空题试卷生成步骤:从所述试题数据库中,选择至少一道填空题试题构成填空题试卷或者试卷的填空题试题部分;
测试答案提取步骤:在生成的填空题试卷完成测试之后,提取试卷中试题的填空题ID标识和填空项ID标识、以及对应的填空项的测试答案;
测试答案比对步骤:将提取的试题的上述信息与所述的答案进行比对,如果测试答案与所述至少一个错误答案相同,或者测试答案为空数据,判定答题错误,如果测试答案与所述至少一个正确答案相同,判定答题正确;
测试答案确认步骤:对于不能通过上述步骤判定测试答案正误的情况,将这些测试答案顺序显示到相应题干的相应填空项上,以供判卷者一一核实确认,并且根据确认结果,首先更新所述的错误答案和正确答案,然后重复上述测试答案比对步骤;
测试答案统计步骤:对测试答案进行统计分析,更新相应的填空题ID标识和填空项ID标识中的有关属性信息。
对于不能通过上述步骤判定测试答案正误的情况,按照同样的测试答案的数量多少顺序排序,并显示在相应的填空项中,从而使得判卷者在核实这些测试答案正误的过程中,可以结合题干内容进行判定。
根据本发明的又一方面,提供一种填空题试卷的判卷系统,包括:
存储器,用于存储用来执行如上所述的判卷方法的程序代码;
处理器,用于执行所述程序代码。
根据本发明的还一方面,提供一种计算机程序,包括被加载至计算机系统并被执行时执行根据以上所述的方法的步骤的计算机程序代码。
根据本发明的还一方面,提供一种计算机可读存储介质,包含前述的计算机程序。
通过上述技术方案,本发明实现了对填空题,特别是包括多个填空项且每个填空项的标准答案表述方式多样的填空题,进行智能编辑,既设定多个正确答案,又设定多个错误答案,并且借助可编辑的属性信息标识,通过与试题生成规则比对,可以从多个填空项中选择至少一个,也就是说,在编辑填空题时可以设置多个填空项,但是在具体生成填空题试题时,可以只包含其中一个或几个填空项,不必包含所有填空项,通过设定出题规则,使得连续两次生成的填空题试题可以不同,通过每个填空项对应至少一个错误答案和至少一个正确答案,颠覆了现有技术中,只判对错,只关注正确答案,没有关心典型的错误答案的情况,从而有利于快速有效的掌握答对和答错的情况,同时快速高效自动判卷,通过不能自动判断的测试答案的进一步核实确认,及时更新填空项与答案对应的关系数据库,使得系统可以逐步的持续的提高自动判卷的效率。
附图说明
图1是本发明的功能模块示意图;和
图2是本发明的操作流程示意图。
具体实施方式
以下将结合附图,对本发明的具体实施方式进行进一步详细的描述。
如图1所示,本发明的构成功能模块,主要包括:编辑模块10、试题生成模块20、判卷模块30。
所述编辑模块10用于对填空题进行编辑生成包括题干和填空项的填空题,具体包括编辑启动模块101、题干编辑模块102、答案赋予模块103、填空题存储模块104。
所述编辑启动模块101用于启动填空题的编辑,调取用于编辑成填空题的原始题干信息。
所述题干编辑模块102用于对原始题干信息进行编辑,产生适当数量的填空项,产生由题干(填空题主体)和填空项组成的填空题,所述填空项至少为两个,对于每个填空题和每个填空项添加包含属性信息的ID标识,这些属性信息是可编辑的,包括顺序编号、涉及的知识点、标准分值、难度系数、被采用的次数、以往的错误率、最近一次被采用的时间等。
所述答案赋予模块103用于为所述填空项设定至少一个答案,建立所述答案与以上两种所述ID标识的对应关系,形成填空题、填空项与答案的对应关系。
所述填空题存储模块104用于以数据库的形式存储填空题的数据,包括填空题及其答案和对应关系等。
所述试题生成模块20用于根据生产条件生成填空题试题,具体包括:规则设定模块201、比对模块202、生成模块203和试题存储模块204。
所述规则设定模块201用于设定生成试题的生成规则,这些生成规则与以上两种所述ID标识中的有关属性信息对应,比如标准分值、知识点、难度系数,还比如避免两次生成同样的试题,或者同一道试题生成不同的填空项,可以通过规则的设定,然后与填空题的属性信息进行比对,来生成需要的填空题试题。最近一次被采用的填空项在本次生成试题时,避免重复。
比对模块202用于将试题生成规则与填空题和填空项的属性信息进行比对,产生符合条件的填空题试题。
生成模块203用于从所述填空题数据库中读取有关信息产生与本次生成的试题和填空项对应的答案,生成填空题试卷或者构成试卷的填空题部分时调用。
试题存储模块204用于以试题数据库的方式存储当前填空题试题及其答案的信息。
所述判卷模块30用于生成试卷并且在测试者完成试卷之后根据答案对测试答案进行判定,具体包括试卷生成模块301、测试答案提取模块302、测试答案比对模块303、测试答案确认模块304、测试答案统计模块305。
所述试卷生成模块301用于生成填空题试题试卷,所述试卷可以是单纯的填空题试题组成的试卷或者综合试卷的填空题部分。
所述测试答案提取模块302用于在测试者完成测试之后,提取测试试题、填空项及其测试答案的数据信息及其对应的关系信息。
所述测试答案比对模块303用于将提取的上述信息与本次试题的答案信息进行比对,如果测试答案与所述至少一个错误答案相同,或者测试答案为空数据,判定答题错误,如果测试答案与所述至少一个正确答案相同,判定答题正确,如果无法进行判定,将这些答案信息提取出来。
所述测试答案确认模块304用于将提取的不能判定的答案信息显示到相应的填空项上,供判卷者做出正误的判定,在显示时,每个测试答案的后面包括正确或错误的两个按钮,如果判定为正确,点击正确按钮即可,这些被判定为正确的答案将添加到原先的答案中,并且更新相应的答案数据库。对于判定为错误的测试答案,进行同样的操作。
所述统计模块305用于根据更新的答案数据库对测试答案进行统计分析,更新相应的填空题ID标识和填空项ID标识中的有关属性信息。所述的统计模块可以统计每道试题被采用和回答的次数,所谓的采用是指一份试卷中采用了,算是一次,所谓的回答次数是指这份试卷用于多少人的测试,也就是被多少人回答了。同时将这样的采用次数和回答次数及其错误率统计出来,并且更新到每道试题和每个涉及的填空项的属性信息中,还可以根据历史大数据的错误率统计,动态调整试题的难度系数。这种通过对属性信息的动态的管理,由此可以实现对于填空题和填空项的精细化管理。
所述填空题数据库和试题数据库可以集成到一个数据中,通过数据项进行标示和区分,通过同一个模块实现上述功能。
如图2所示,实施本发明的操作流程示意图。所述填空题试题的生成和填空题试卷的判卷方法可以实现对填空题进行快速编辑,快速插入/添加填空项、自动生成题干的填空项序号等信息,对每个填空项定义/设定多个答案,包括至少一个正确答案和至少一个错误答案,包括主要步骤:编辑步骤100、试题生成步骤200和判卷步骤300:
所述编辑步骤100包括编辑启动步骤1001、题干编辑步骤1002、答案赋予步骤1003。
编辑启动步骤1001:调取用于编辑成填空题的原始题干信息,原始题干信息是未经编辑的包含题干和填空项上内容的信息,所谓的填空项通常是经过去除掉原始题干的部分信息形成的空。一般说来,填空题包括题干内容及相应的填空项,本发明中有时用题干代表填空题或者等同于填空题,只是为了便于描述, 无本质差异,旨在表示处于题干或填空题未生成用于考试或者组成试卷填空题试题前可以编辑或者填空项可以选择的状态。
题干编辑步骤1002:对填空题的题干内容进行编辑,产生至少一个填空项,对所述题干和所述填空项分别添加包含属性信息的填空题ID标识和填空项ID标识。在所述题干编辑步骤中,对所述题干内容的编辑包括,采用定义数据源规则以规范题干内容数据表达,或者,采用集成至文字编辑器的中间件按钮对题干内容进行编辑。
数据源规则:1)可定义每个空的格式如:「」、〖〗包含,内部每个答案用’|’分隔,2)可定义属性标识如:题目困难定义N进行标识,分类用F进行标识等,答案1用D1标识,空1用nu1标识等3)可定义整个填空题的数据源格式解析规则;
中间件按钮:可在编辑器定义一个按钮,可点击实现新填空项的插入,如定义【】为插入按钮,可点击触发当前插入数据源规则所述定义数据源规则是由任意符号、数字、字母、或文字等定义的规范,包括()、##、||、<>、《》、{}、「」、〖〗、『』、〈〉、和AA等。所述填空项至少为两个,所述填空项ID标识与所述填空题ID标识存在关联关系,并根据所述填空题ID标识和填空项ID标识,建立所述填空项和所述答案之间的对应关联关系,并且保存至所述填空题数据库。
答案赋予步骤1003:为所述填空项设定至少一个答案,建立所述答案与以上两种所述ID标识的对应关系,形成填空题;所述至少一个答案包括至少一个正确答案和至少一个错误答案。所述至少一个正确答案的首选答案是所述填空项原先对应的所述题干的内容,所述至少一个正确答案的其他答案是与所述首选答案等同或近似的答案,所述至少一个错误答案为典型的错误答案。
所述试题生成步骤200包括规则设定步骤2001、比对步骤2002、生成步骤2003。
所述规则设定步骤2001,用于产生试题生成规则,其是根据试题生成需要设定的,规则内容对应于所述题干和/或所述填空项的有关属性信息。比如避免两次生成同样的试题,或者同一道试题生成不同的填空项,可以通过规则的设定,然后与填空题的属性信息进行比对,来生成需要的填空题试题。例如可以实现,最近一次被采用的填空项,在本次生成试题时,避免重复。错误率超过一定比率的,在本次生成试题时不采用等。
所述比对步骤2002用于,根据试题生成规则,通过与所述填空题的以上两种所述ID标识进行比对,生成符合所述生成规则的填空题试题。根据不同的试题生成规则,每次可以从每道填空题的所述至少两个填空项中选择至少一个填空项生成填空题试题,对于没有被选中的填空项,将相应的首选答案填充回所述没有被选中的填空项,在所述生成的填空题试题中不再构成填空项。
所述生成步骤2003用于在生成填空题试题后,根据实际生成的填空项内容,更新所述填空题数据库中的填空题ID标识中的属性信息,所述属性信息包括:顺序编号、涉及的知识点、标准分值、难度系数、被采用的次数、以往的错误率、最近一次被采用的时间。在所述试题生成步骤中,每生成一次填空题,就更新一次所述ID标识中的有关属性信息,包括最近一次被采用的时间、被采用的次数。
所述判卷步骤300包括试卷生成步骤3001、测试答案提取步骤3002、测试答案比对步骤3003、测试答案确认步骤3004、测试答案统计步骤3005。
所述试卷生成步骤3001:选择至少一道填空题试题构成填空题试卷或者试卷的填空题试题部分;
所述测试答案提取步骤3002:在生成的填空题试卷完成测试之后,提取试卷中试题的填空题ID标识和填空项ID标识、以及对应的填空项的测试答案;
所述测试答案比对步骤3003:将提取的试题的上述信息与所述的答案进行比对,如果测试答案与所述至少一个错误答案相同,或者测试答案为空数据,判定答题错误,如果测试答案与所述至少一个正确答案相同,判定答题正确;
所述测试答案确认步骤3004:对于不能通过上述步骤判定测试答案正误的情况,将这些测试答案顺序显示到相应题干的相应填空项上,以供判卷者一一核实确认,并且根据确认结果,首先更新所述的错误答案和正确答案,然后重复上述测试答案比对步骤;对于不能通过上述步骤判定测试答案正误的情况,按照同样的测试答案的数量多少顺序排序,并显示在相应的填空项中,从而使得判卷者在核实这些测试答案正误的过程中,可以结合题干内容进行判定;
所述测试答案统计步骤3005:在完成测试答案确认之后,对测试答案进行统计分析,更新填空题数据库和/或试题数据库中相应的填空题ID标识和填空项ID标识中的有关属性信息。所述统计步骤3005可以统计每道试题被采用和回答的次数,所谓采用是指一份试卷中采用了,算是一次,所谓的回答次数是指这份试卷用于多少人的测试,也就是被多少人回答了。同时将这样的采用次数和回答次数及其错误率统计出来,并且更新到每道试题和每个涉及的填空项的属性信息中,还可以根据不断完善的历史大数据的错误率统计,动态调整试题的难度系数。这种通过对属性信息的动态的管理,由此可以实现对于填空题和填空项的精细化管理。
对于试题的编辑方面,示例如下:调取的原始题干信息是“今年是2016猴年,张三和李四在今年出生。”
对于原始题干信息,通过定义数据源规则(规范数据表达)进行编辑,所述数据源规则是由任意符号、数字、字母、文字等定义的规范,如:()、##、||、<>、《》、{}、「」、〖〗、『』、〈〉、AA……。或者通过设定为中间件按钮形式集成至文字编辑器,方便更多的用户快速使用和操作。
选中“2016猴”和“张三和李四”,当前在文字编辑器增加‘[]’按钮实现中间件触发,点击文字编辑器中的‘[]’弹出当填空答案插入窗口,答案插入窗口中的首选的标准的正确答案是上述选中的原始题干信息中的相应内容。
在弹出的填空项设置界面中,可为题干生成两个‘空’(填空项),此界面可为当前一个‘空’根据真实情况设置多个标准的正确答案和典型的错误答案,填写完成后,比如“2016”“猴”可以设置为标准的正确答案,“2015”“羊”可以设置为典型的错误答案,“张三”“李四”可以设置为标准的正确答案,“王五”可以设置为典型的错误答案。按照数据源规则自动生成题干的显示效果,此时添加相应的属性信息的ID标识,通过这样的标识可以将填空题以填空题数据库的形式保持到存储器中,便于进行编辑和管理。属性信息可以包括:试题编号、涉及的知识点、难度系数、被使用的次数、上次使用的时间等。
编辑题干的每个空时,实现每个空进行自然数标记,当触发生成题干按钮时,程序采用字符串替换的功能,先生成替换项;如将第一个‘[]’(数组包含内的所有答案)数组,替换为①,将第二个‘[]’(数组包含内的所有答案)数组,替换为②,(标记的序号可以是任意定义的规则)……依次类推;
接着在进行字符串替换,从而实现填空题编辑后,保存时通过定义的生成规则,自动隐藏答案内容生成替换项内容(如:下划线和每个空的序号)。
例如:“今年是(1)年,(2)是年度吉祥物?”,生成形式不局限于当前形式,比如:①、②、⒈、⒉、α、β等可以是填空项的任意表现形式。
本发明的方法和系统可以集成和使用到任意客户端、网页、及各种需要在线编辑和处理涉及填空题出题、成卷和判卷的系统、项目或产品。
本发明的方法和系统可以通过计算机程序代码进行执行,执行计算机程序代码的设备包括存储器和处理器,分别用于存储代码和执行代码。
以上介绍了本发明的较佳实施方式,旨在使得本发明的精神更加清楚和便于理解,并不是为了限制本发明,凡在本发明的精神和原则之内,所做的修改、替换、改进,均应包含在本发明所附的权利要求概括的保护范围之内。

Claims (16)

  1. A method for generating fill-in-the-blank test questions, characterized in that it comprises the following steps:
    a stem editing step: editing the stem content of a fill-in-the-blank question, generating at least one blank item, and adding to the stem and the blank item a fill-in-the-blank question ID identifier and a blank item ID identifier, respectively, each containing attribute information;
    an answer assignment step: setting at least one answer for the blank item, establishing the correspondence between the answer and the above two ID identifiers, and saving it to a fill-in-the-blank question database;
    a test question generation step: according to test question generation rules, by comparing with the above two ID identifiers of the fill-in-the-blank questions in the database, extracting the fill-in-the-blank questions and their corresponding answers that match the generation rules, generating the test questions, and saving them to a test question database.
  2. The method according to claim 1, characterized in that, in the answer assignment step, the at least one answer includes at least one correct answer and at least one wrong answer.
  3. The method according to claim 2, characterized in that, in the stem editing step, there are at least two blank items, the blank item ID identifiers are associated with the fill-in-the-blank question ID identifier, and the correspondence between the blank items and the answers is established according to the question ID identifier and the blank item ID identifiers and saved to the fill-in-the-blank question database.
  4. The method according to claim 3, characterized in that the preferred answer among the at least one correct answer is the stem content originally corresponding to the blank item, the other answers among the at least one correct answer are answers equivalent or similar to the preferred answer, and the at least one wrong answer is a typical wrong answer.
  5. The method according to claim 4, characterized in that, in the test question generation step, according to different generation rules, at least one of the at least two blank items of each fill-in-the-blank question may be selected each time to generate a test question; for blank items that are not selected, the corresponding preferred answer is filled back into them, and they no longer form blank items in the generated test question.
  6. The method according to claim 5, characterized in that, after a test question is generated, the attribute information in the question ID identifier in the fill-in-the-blank question database is updated according to the blank items actually generated.
  7. The method according to claim 6, characterized in that the attribute information includes: a sequence number, the knowledge points involved, a standard score, a difficulty coefficient, the number of times used, the past error rate, and the time of the most recent use.
  8. The method according to claim 7, characterized in that the test question generation rules are set according to the needs of test question generation, and the rule content corresponds to at least one item of attribute information of the fill-in-the-blank question and/or the blank items.
  9. The method according to claim 8, characterized in that, in the stem editing step, editing the stem content includes defining data source rules to standardize the expression of the stem content data, or editing the stem content with a middleware button integrated into a text editor.
  10. The method according to claim 9, characterized in that the defined data source rules are conventions defined by arbitrary symbols, numbers, letters, or characters, including ( ), ##, ||, <>, 《》, {}, 「」, 〖〗, 『』, 〈〉, and AA.
  11. A method for grading a fill-in-the-blank test paper composed of fill-in-the-blank test questions generated by the method of any one of claims 2 to 10, characterized in that it comprises the following steps:
    a test paper generation step: selecting, from the test question database, at least one fill-in-the-blank test question to form a fill-in-the-blank test paper or the fill-in-the-blank section of a test paper;
    a test answer extraction step: after the generated test paper has been completed, extracting the question ID identifiers and blank item ID identifiers of the questions in the paper, together with the test answers of the corresponding blank items;
    a test answer comparison step: comparing the extracted information of the questions with said answers; if a test answer is the same as the at least one wrong answer, or the test answer is empty, the answer is judged wrong; if the test answer is the same as the at least one correct answer, the answer is judged correct;
    a test answer confirmation step: for test answers whose correctness cannot be determined by the above steps, displaying them in sequence on the corresponding blank items of the corresponding stems for the grader to verify and confirm one by one, first updating said wrong answers and correct answers according to the confirmation results, and then repeating the test answer comparison step;
    a test answer statistics step: performing statistical analysis on the test answers and updating the relevant attribute information in the corresponding question ID identifiers and blank item ID identifiers.
  12. The grading method according to claim 11, characterized in that, for test answers whose correctness cannot be determined by the above steps, identical test answers are sorted by how many times they occur and displayed in the corresponding blank items, so that the grader can refer to the stem content while verifying whether these test answers are correct.
  13. A system for generating fill-in-the-blank test questions, comprising:
    a memory for storing program code for executing the method of any one of claims 1 to 10; and
    a processor for executing the program code.
  14. A grading system for fill-in-the-blank test papers, comprising:
    a memory for storing program code for executing the method of any one of claims 11 to 12; and
    a processor for executing the program code.
  15. A computer program comprising computer program code which, when loaded into a computer system and executed, performs the steps of the method according to any one of claims 1 to 12.
  16. A computer readable storage medium containing the computer program of claim 15.
PCT/CN2017/077790 2016-11-22 2017-03-23 一种填空题试题的生成和判卷的方法及系统 WO2018094925A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2016110404640 2016-11-22
CN201611040464.0A CN106409041B (zh) 2016-11-22 2016-11-22 一种填空题试题的生成和判卷的方法及系统

Publications (1)

Publication Number Publication Date
WO2018094925A1 true WO2018094925A1 (zh) 2018-05-31

Family

ID=58082718

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/077790 WO2018094925A1 (zh) 2016-11-22 2017-03-23 一种填空题试题的生成和判卷的方法及系统

Country Status (2)

Country Link
CN (1) CN106409041B (zh)
WO (1) WO2018094925A1 (zh)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109446483A (zh) * 2018-09-30 2019-03-08 大连海事大学 一种用于包含主观信息的客观题的机器判卷方法
CN110377689A (zh) * 2019-06-17 2019-10-25 深圳壹账通智能科技有限公司 试卷智能生成方法、装置、计算机设备及存储介质
CN110413973A (zh) * 2019-07-26 2019-11-05 浙江蓝鸽科技有限公司 计算机自动生成套卷的方法及其系统
CN110443427A (zh) * 2019-08-12 2019-11-12 浙江蓝鸽科技有限公司 基于认知知识谱的成绩预测方法及其系统
CN110674722A (zh) * 2019-09-19 2020-01-10 浙江蓝鸽科技有限公司 一种试卷拆分方法及其系统
CN110727360A (zh) * 2019-09-10 2020-01-24 深圳市壹箭教育科技有限公司 一种错题管理方法、系统及存储介质和终端设备
CN110765752A (zh) * 2019-10-29 2020-02-07 北京字节跳动网络技术有限公司 试题的生成方法、装置、电子设备及计算机可读存储介质
CN111915463A (zh) * 2020-08-21 2020-11-10 广州云蝶科技有限公司 试题知识点的管理方法
CN112069815A (zh) * 2020-09-04 2020-12-11 平安科技(深圳)有限公司 成语填空题的答案选择方法、装置和计算机设备
CN113010655A (zh) * 2021-03-18 2021-06-22 华南理工大学 一种机器阅读理解的回答与干扰项生成方法、装置
CN113268970A (zh) * 2021-06-07 2021-08-17 武汉华工智云科技有限公司 一种在线考试试卷生成的方法和装置
CN113450006A (zh) * 2021-07-02 2021-09-28 作业帮教育科技(北京)有限公司 一种自动分配题目生产任务的方法、装置及存储介质
CN113656443A (zh) * 2021-08-24 2021-11-16 北京百度网讯科技有限公司 数据拆解方法、装置、电子设备和存储介质
CN113918609A (zh) * 2021-10-12 2022-01-11 平安国际智慧城市科技股份有限公司 试卷创建方法、装置、计算机设备和存储介质

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106409041B (zh) * 2016-11-22 2020-05-19 深圳市鹰硕技术有限公司 一种填空题试题的生成和判卷的方法及系统
CN107862927A (zh) * 2017-11-23 2018-03-30 北京青果时代教育科技有限公司 一种错题整理方法及装置
CN108109454A (zh) * 2018-01-08 2018-06-01 上海连梦文化传播有限公司 一种学习和竞赛系统
CN110737378B (zh) * 2018-07-20 2023-02-10 颜厥护 一种文本数据交互方法及系统
CN109753656A (zh) * 2018-12-29 2019-05-14 咪咕互动娱乐有限公司 一种数据处理方法、装置及存储介质
CN112287659B (zh) * 2019-07-15 2024-03-19 北京字节跳动网络技术有限公司 一种信息生成方法、装置、电子设备及存储介质
CN110706535A (zh) * 2019-10-29 2020-01-17 长沙理工大学 一种在线题库的参数化设计及智能统计的方法
CN112017497A (zh) * 2020-07-15 2020-12-01 李帮军 一种基于选择题编辑器的辅助复习系统
CN112017263B (zh) * 2020-08-24 2021-04-30 上海松鼠课堂人工智能科技有限公司 基于深度学习的试卷智能生成方法和系统
CN112380873B (zh) * 2020-12-04 2024-04-26 鼎富智能科技有限公司 一种规范文书中被选中项确定方法及装置
CN112687140A (zh) * 2021-01-06 2021-04-20 北京智联友道科技有限公司 考核自动评分方法、装置和系统
CN112800182A (zh) * 2021-02-10 2021-05-14 联想(北京)有限公司 试题生成方法及装置
CN113506052B (zh) * 2021-09-10 2021-11-23 北京世纪好未来教育科技有限公司 能力评测方法及相关装置
CN117257304B (zh) * 2023-11-22 2024-03-01 暗物智能科技(广州)有限公司 一种认知能力测评方法、装置、电子设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101261617A (zh) * 2008-04-08 2008-09-10 黎章 机器命题的方法与系统
CN101587512A (zh) * 2008-05-23 2009-11-25 北京智慧东方信息技术有限公司 一种计算机辅助考试系统中填空题的判分方法
US20100062410A1 (en) * 2008-09-11 2010-03-11 BAIS Education & Technology Co., Ltd. Computerized testing device with a network editing interface
CN106409041A (zh) * 2016-11-22 2017-02-15 深圳市鹰硕技术有限公司 一种填空题试题的生成和判卷的方法及系统

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942994B (zh) * 2014-04-22 2015-11-11 济南大学 一种主观性试题的计算机考核方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101261617A (zh) * 2008-04-08 2008-09-10 黎章 机器命题的方法与系统
CN101587512A (zh) * 2008-05-23 2009-11-25 北京智慧东方信息技术有限公司 一种计算机辅助考试系统中填空题的判分方法
US20100062410A1 (en) * 2008-09-11 2010-03-11 BAIS Education & Technology Co., Ltd. Computerized testing device with a network editing interface
CN106409041A (zh) * 2016-11-22 2017-02-15 深圳市鹰硕技术有限公司 一种填空题试题的生成和判卷的方法及系统

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109446483B (zh) * 2018-09-30 2022-09-30 大连海事大学 一种用于包含主观信息的客观题的机器判卷方法
CN109446483A (zh) * 2018-09-30 2019-03-08 大连海事大学 一种用于包含主观信息的客观题的机器判卷方法
CN110377689A (zh) * 2019-06-17 2019-10-25 深圳壹账通智能科技有限公司 试卷智能生成方法、装置、计算机设备及存储介质
CN110413973A (zh) * 2019-07-26 2019-11-05 浙江蓝鸽科技有限公司 计算机自动生成套卷的方法及其系统
CN110413973B (zh) * 2019-07-26 2023-04-18 浙江蓝鸽科技有限公司 计算机自动生成套卷的方法及其系统
CN110443427A (zh) * 2019-08-12 2019-11-12 浙江蓝鸽科技有限公司 基于认知知识谱的成绩预测方法及其系统
CN110443427B (zh) * 2019-08-12 2023-11-07 浙江蓝鸽科技有限公司 基于认知知识谱的成绩预测方法及其系统
CN110727360B (zh) * 2019-09-10 2023-11-24 深圳市壹箭教育科技有限公司 一种错题管理方法、系统及存储介质和终端设备
CN110727360A (zh) * 2019-09-10 2020-01-24 深圳市壹箭教育科技有限公司 一种错题管理方法、系统及存储介质和终端设备
CN110674722B (zh) * 2019-09-19 2023-04-07 浙江蓝鸽科技有限公司 一种试卷拆分方法及其系统
CN110674722A (zh) * 2019-09-19 2020-01-10 浙江蓝鸽科技有限公司 一种试卷拆分方法及其系统
CN110765752B (zh) * 2019-10-29 2023-09-01 抖音视界有限公司 试题的生成方法、装置、电子设备及计算机可读存储介质
CN110765752A (zh) * 2019-10-29 2020-02-07 北京字节跳动网络技术有限公司 试题的生成方法、装置、电子设备及计算机可读存储介质
CN111915463A (zh) * 2020-08-21 2020-11-10 广州云蝶科技有限公司 试题知识点的管理方法
CN111915463B (zh) * 2020-08-21 2023-12-01 广州云蝶科技有限公司 试题知识点的管理方法
CN112069815A (zh) * 2020-09-04 2020-12-11 平安科技(深圳)有限公司 成语填空题的答案选择方法、装置和计算机设备
CN113010655A (zh) * 2021-03-18 2021-06-22 华南理工大学 一种机器阅读理解的回答与干扰项生成方法、装置
CN113268970A (zh) * 2021-06-07 2021-08-17 武汉华工智云科技有限公司 一种在线考试试卷生成的方法和装置
CN113450006A (zh) * 2021-07-02 2021-09-28 作业帮教育科技(北京)有限公司 一种自动分配题目生产任务的方法、装置及存储介质
CN113656443A (zh) * 2021-08-24 2021-11-16 北京百度网讯科技有限公司 数据拆解方法、装置、电子设备和存储介质
CN113656443B (zh) * 2021-08-24 2023-08-04 北京百度网讯科技有限公司 数据拆解方法、装置、电子设备和存储介质
CN113918609A (zh) * 2021-10-12 2022-01-11 平安国际智慧城市科技股份有限公司 试卷创建方法、装置、计算机设备和存储介质

Also Published As

Publication number Publication date
CN106409041A (zh) 2017-02-15
CN106409041B (zh) 2020-05-19

Similar Documents

Publication Publication Date Title
WO2018094925A1 (zh) 一种填空题试题的生成和判卷的方法及系统
CN106846203B (zh) 一种智能阅卷方法及系统
Isotalo Basics of statistics
CN108520662B (zh) 一种基于知识点分析的教学反馈系统
CN108734379B (zh) 一种对客服人员实现差异化的线上培训方法
US20110270883A1 (en) Automated Short Free-Text Scoring Method and System
CN107657559A (zh) 一种中文阅读能力测评方法及系统
WO2022170985A1 (zh) 选题方法、装置、计算机设备和存储介质
Tack et al. Human and automated CEFR-based grading of short answers
Li A study on the influence of non-intelligence factors on college students’ English learning achievement based on C4. 5 algorithm of decision tree
CN112925919A (zh) 一种知识图谱驱动的个性化作业布置方法
CN113408253A (zh) 一种作业评阅系统及方法
CN110390032B (zh) 一种手写作文的批阅方法及系统
CN112579886A (zh) 一种个性化定制学生学情分析的软件
KR20090001485A (ko) 주관식 문항 자동 채점을 통한 자가학습 방법
Lin et al. Association between test item’s length, difficulty, and students’ perceptions: Machine learning in schools’ term examinations
Milanovic Language examining and test development
Li et al. Research on the cognitive diagnosis of Chinese listening comprehension ability based on the G-DINA model
Batmaz et al. A web-based semi-automatic assessment tool for conceptual database diagrams
JP2004046255A (ja) コンピュータ適応型テスト装置、コンピュータ適応型テストシステム、コンピュータ適応型テスト方法およびコンピュータ適応型テストプログラムを格納する記録媒体
CN113127769B (zh) 基于标签树和人工智能的习题标签预测系统
Pishghadam et al. On validation of concept map as an assessment tool of L2 reading comprehension: A triangulation approach
Pourshahian A Corpus‐Based Study of the Linguistic Errors Committed by the Iranian EFL Learners in English Translation of MA Research Abstracts of Educational Management Based on Liao’s (2010) Model
PABILLO et al. Error Analysis on Subject-Verb Agreement in the Cambridge Checkpoint Writing Exam of Indonesian Secondary 2 Students
Abdukakhorovna Specific Advantages of Using Pedagogical Technologies in the Organization of Uzbek Language Lessons

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17874236

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17874236

Country of ref document: EP

Kind code of ref document: A1