WO2017105518A1 - Question assessment - Google Patents

Question assessment

Info

Publication number
WO2017105518A1
Authority
WO
Grant status
Application
Application number
PCT/US2015/066904
Other languages
French (fr)
Inventor
Robert B. Taylor
Ehud Chatow
Bruce Williams
Original Assignee
Hewlett-Packard Development Company, L.P.

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/06Electrically-operated teaching apparatus or devices working with questions and answers of the multiple-choice answer-type, i.e. where a given question is provided with a series of answers and a choice has to be made from the answers
    • G09B7/02Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student

Abstract

Examples disclosed herein relate to capturing a set of responses to a plurality of questions, scanning a machine-readable link comprising a unique identifier associated with the plurality of questions, and associating the set of responses with the unique identifier.

Description

QUESTION ASSESSMENT

BACKGROUND

[0001] In some situations, a set of questions may be created, such as for a test or survey. The questions may also be paired with an answer key and/or may be associated with free-form answer areas. For example, some questions may be multiple choice while others may be fill-in-the-blank and/or essay type questions. The questions may then be submitted for evaluation and/or assessment.

BRIEF DESCRIPTION OF THE DRAWINGS

[0002] In the accompanying drawings, like numerals refer to like components or blocks. The following detailed description references the drawings, wherein:

[0003] FIG. 1 is a block diagram of an example question assessment device;

[0004] FIGs. 2A-2C are illustrations of example machine-readable codes;

[0005] FIGs. 3A-3B are illustrations of example generated tests;

[0006] FIG. 4 is a flowchart of an example of a method for providing question assessment; and

[0007] FIG. 5 is a block diagram of an example system for providing question assessments.

DETAILED DESCRIPTION

[0008] In some situations, a set of questions may be prepared to be presented to and answered by one and/or more recipients. The questions may comprise multiple choice, fill-in-the-blank, essay, short answer, survey, rating, math problems, and/or other types of questions. For example, a teacher may prepare a set of 25 questions of various types for a quiz.

[0009] Conventional automated scoring systems, such as Scantron® testing systems, may compare answers on a carefully formatted answer sheet to an existing answer key, but such sheets must be precisely filled in with the correct type of pencil. Further, such sheets rely on a known order of the questions. This allows for easy copying of answers from one student to another and also introduces errors when a student fails to completely fill out the bubble to mark their answers.

[0010] Randomizing the question order will greatly reduce the incidence of cheating and copying among students. Further, the ability to recognize which questions appear in any order allows for automated collection of answers to each question. In some implementations, not only multiple choice answers may be graded, but textual answers, such as fill-in-the-blank responses, may be recognized using optical character recognition (OCR) and compared to stored answers.

[0011] Each student may be associated with a unique identifier that may be embedded in the test paper. Such embedding may comprise an overt (plain-text) and/or covert signal such as a watermark or matrix code. Since every paper may comprise a unique code with a student identifier and/or a test version number, a different test sequence may be created per student, making it hard or impossible to copy from neighboring students while still enabling an automated scan and assessment solution. The automated assessment may give immediate feedback on some and/or all of the questions, such as by comparing a multiple choice or OCR'd short text answer to a correct answer key. These results may, for example, be sent by email and/or to an application.

[0012] In some implementations, the test will have a combination of choosing the correct or best answer and also requesting the student to show and include the process of getting to the answer chosen. In other words, in some cases the form will have a question, with a set of multiple choice answers for the student to choose from and also a box to elaborate on how the student arrived at the answer. In this way, there may be an immediate response and assessment / evaluation for the student based on the multiple choice answers, and deeper feedback from the teacher, who may choose to evaluate all the students who made a mistake on answer #4 to see what the common mistakes were.

[0013] The paper test form may be captured in a way that each answer can be individually sent for analysis directly to the instructor / teacher or to a student's file. This may include multiple choice answers as well as the text box with the free-response text answer and/or sketch, which is positioned in a predefined area on the paper test form. A scanning device may be used to capture the paper test form, such as a smartphone, tablet, or similar device with a camera that can scan and capture an image of the test form, and/or a standalone scanner. Upon scanning, the paper's unique machine-readable code (e.g., watermark) may be identified to associate the answers with the student ID and the specific test sequence expected. The answers and the immediate results of the multiple choice answers may be presented and/or delivered to the student. In cases where mistakes were made, the student may receive a recommendation of content to close the knowledge gap. A teacher / instructor, in class or remotely, may review the answers and give the student additional personal feedback. In some cases, teachers would like to understand class trends and gaps by analyzing all answers to a particular question to see what common mistakes were made, to help the teacher focus on the areas of weakness. The association of assessment scores to a particular student may be made via a unique and anonymized identifier associated with the test paper, which can tell which student completed an assessment via the unique identifier embedded in the assessment's machine-readable code. Since the teacher / instructor no longer has to associate an assessment with a particular student, the identity of the student who completed the assessment can be kept hidden, greatly minimizing the chance of the teacher applying personal bias while grading. Further, the teacher may choose to review all students' responses to a particular question, such as question 4, in order to focus on that answer. The teacher may then move on to reviewing all students' responses to the next question, rather than grading all of the questions on the assessment / test for each student in turn.

[0014] Referring now to the drawings, FIG. 1 is a block diagram of an example question assessment device 100 consistent with disclosed implementations. Question assessment device 100 may comprise a processor 110 and a non-transitory machine-readable storage medium 120. Question assessment device 100 may comprise a computing device such as a server computer, a desktop computer, a laptop computer, a handheld computing device, a smart phone, a tablet computing device, a mobile phone, a network device (e.g., a switch and/or router), or the like.

[0015] Processor 110 may comprise a central processing unit (CPU), a semiconductor-based microprocessor, a programmable component such as a complex programmable logic device (CPLD) and/or field-programmable gate array (FPGA), or any other hardware device suitable for retrieval and execution of instructions stored in machine-readable storage medium 120. In particular, processor 110 may fetch, decode, and execute a plurality of capture response instructions 132, scan link instructions 134, and associate unique identifier instructions 138 to implement the functionality described in detail below.

[0016] Executable instructions may comprise logic stored in any portion and/or component of machine-readable storage medium 120 and executable by processor 110. The machine-readable storage medium 120 may comprise both volatile and/or nonvolatile memory and data storage components. Volatile components are those that do not retain data values upon loss of power. Nonvolatile components are those that retain data upon a loss of power.

[0017] The machine-readable storage medium 120 may comprise, for example, random access memory (RAM), read-only memory (ROM), hard disk drives, solid-state drives, USB flash drives, memory cards accessed via a memory card reader, floppy disks accessed via an associated floppy disk drive, optical discs accessed via an optical disc drive, magnetic tapes accessed via an appropriate tape drive, and/or other memory components, and/or a combination of any two and/or more of these memory components. In addition, the RAM may comprise, for example, static random access memory (SRAM), dynamic random access memory (DRAM), and/or magnetic random access memory (MRAM), and other such devices. The ROM may comprise, for example, a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and/or other like memory devices.

[0018] Capture response instructions 132 may capture a set of responses to a plurality of questions, wherein the set of responses comprises at least one free-form response. Capture response instructions 132 may, in some implementations, recognize a plurality of markup styles associated with a multiple choice type question. For example, a multiple choice response style may comprise a whole and/or partially filled-in circle, an X and/or other marking on the answer and/or the circle associated with the answer, and/or circling the answer.

[0019] Capture response instructions 132 may, for example, detect the pen/pencil marks that have been added to the responses by differentiating between the layout of the question before and after the responses have been written in. A pixel-by-pixel comparison, for example, may compare a color value for each relative pixel to determine if new writing has been added. A white pixel may read as a hex value of #FFFFFF, while a grey pixel (representing a pencil mark in this example) may read as a hex value of #474747. These values are only examples, as numerous other values may be represented, as the detection may rely on a threshold difference in the values to determine that a mark has been made. In some implementations, larger sample areas than a single pixel may be compared, such as by averaging the color values of the area and comparing between the before and after layouts. Once areas of writing have been detected, they may be assembled into shapes, such as by connecting marked pixels into an 'X' or circle shape and then identifying the relative location of the shape to associate that shape with a particular answer. Comparison of pixel value differences is offered as an example only, and other methods of scanning and detection of markings on the responses are contemplated.
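The pixel-comparison approach of paragraph [0019] can be sketched as follows. This is a minimal illustration only: the image model (2D grids of grayscale values), the threshold value, and all function names are assumptions for the sketch, not part of the disclosure.

```python
# Sketch of the before/after pixel-difference mark detection in [0019].
# Images are modeled as 2D grids of grayscale values (255 = white paper,
# lower values = darker marks). Threshold and names are assumed.

MARK_THRESHOLD = 60  # minimum darkening to count as a new mark (assumed value)

def detect_marks(blank_page, filled_page, threshold=MARK_THRESHOLD):
    """Return (row, col) coordinates where the filled page is darker
    than the blank layout by more than the threshold."""
    marks = []
    for r, (blank_row, filled_row) in enumerate(zip(blank_page, filled_page)):
        for c, (before, after) in enumerate(zip(blank_row, filled_row)):
            if before - after > threshold:
                marks.append((r, c))
    return marks

# A 3x4 blank layout, and the same layout after a grey pencil mark
blank  = [[255] * 4 for _ in range(3)]
filled = [[255] * 4 for _ in range(3)]
filled[1][2] = 0x47  # grey pixel, per the #474747 example in the text

print(detect_marks(blank, filled))  # -> [(1, 2)]
```

Detected coordinates could then be clustered into 'X' or circle shapes and mapped to answer locations, as the paragraph describes.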

[0020] The questions may be stored in a question database associated with a teaching / instructional application. Such questions and their layout may be retrieved to compare to the marked-up version to aid in capturing the responses. For example, an instructor may enter the questions in an app on their tablet and/or smart device, through a web-based user interface, through an application on a desktop or laptop, etc. Each question may comprise the actual display information of the question (text, figures, drawings, references, tables, etc.), a question type (e.g., short answer, multiple choice, sketch, essay, etc.), and/or any constraint rules, as described above. For multiple-choice type questions, the answer choices may also be entered. The question type may then be used to define an amount of space needed on a page. For example, a multiple choice question may require two lines for the question, an empty space line, and a line for the list of possible answers. For free-form and/or essay type questions, the instructor may enter a recommended amount of answer space (e.g., three lines, half a page, a full page, etc.). The instructor / teacher may also enter the correct answers and/or keywords into the application for later grading.
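A question record of the kind described in paragraph [0020] might be modeled as follows. The field names, the schema, and the space-allocation rule are assumptions for illustration, not the patent's actual database design.

```python
# Minimal sketch of the question records in [0020]: display text, question
# type, answer choices, answer key, and recommended answer space.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    qtype: str                      # "multiple_choice", "short_answer", "essay", ...
    choices: list = field(default_factory=list)
    answer_key: str = ""
    answer_lines: int = 1           # recommended answer space, in lines

def lines_needed(q: Question) -> int:
    # e.g., a multiple choice question: two lines of question text,
    # an empty spacer line, and a line for the list of possible answers
    if q.qtype == "multiple_choice":
        return 2 + 1 + 1
    return 2 + q.answer_lines

q = Question("What is 2 + 2?", "multiple_choice",
             choices=["3", "4", "5", "6"], answer_key="B")
print(lines_needed(q))  # -> 4
```

A page-layout generator could sum `lines_needed` over the selected questions to decide how many pages the printed test requires.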

[0021] In some implementations, capture response instructions 132 may further compare at least one response of the set of responses to an answer key of correct responses. For example, once a filled-in circle has been identified and located next to answer choice B, the correct answer for the question may be retrieved and compared. If the correct answer is B, then the question may be scored as correct; otherwise the question may be scored as incorrect. In some implementations, the correct answer may be displayed next to the captured answer for verification by an instructor. For example, for a short answer response, the text of the response may be displayed next to an expected answer. In other examples, stored answer keywords may be compared to the captured response, such as via optical character recognition (OCR). The keywords may be used to mark the response as correct or incorrect, and/or may be used to highlight appropriate words in the response to aid an instructor when reviewing the responses. For example, certain names may be highlighted in a history essay response.
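The keyword comparison in paragraph [0021] can be sketched as below. The matching rule (case-insensitive match against the words of the OCR'd text) and the function name are assumptions; a real implementation might use stemming or fuzzy matching to tolerate OCR errors.

```python
# Sketch of comparing an OCR'd free-form response to stored answer
# keywords, as in [0021]. Matching rule and names are assumed.

def score_keywords(ocr_text, keywords):
    """Return the keywords found in the response, and a correct flag
    set when every stored keyword appears."""
    words = set(w.strip(".,;:!?").lower() for w in ocr_text.split())
    found = [k for k in keywords if k.lower() in words]
    return found, len(found) == len(keywords)

found, correct = score_keywords(
    "Washington crossed the Delaware in 1776.",
    ["Washington", "Delaware"])
print(found, correct)  # -> ['Washington', 'Delaware'] True
```

The `found` list corresponds to the words the paragraph suggests highlighting for the instructor's review.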

[0022] Upon detection of a correct and/or incorrect response, an indication of the correctness may be provided. For example, capture response instructions 132 may provide a printout and/or display of all scored responses and/or an indication of which response should have been entered. For another example, capture response instructions 132 may provide a count of correct and/or incorrect responses.

[0023] Scan link instructions 134 may scan a machine-readable link comprising a unique identifier associated with the plurality of questions. The unique identifier may identify a student associated with the responses and/or may provide layout information for the test. For example, the unique identifier may specify that of 10 possible questions, the associated test presented the questions in the order 3, 7, 1, 2, 9, 10, 8, 4, 6, 5. This may be used to retrieve and/or recreate the layout of the unmarked questions to aid in comparison and detection of the response markings. The captured questions may be associated with a machine-readable code of the unique identifier. The machine-readable code may comprise, for example, a bar code, a matrix code, a text string, and a watermark. The machine-readable code may be visible to a person, such as a large bar code, and/or may not be readily visible, such as a translucent watermark and/or a set of steganography dots. The code may be used to identify the selected questions, a class period, a student, and/or additional information. In some implementations, the code may be added in multiple sections, such as a small matrix code at one and/or more of the corners of the page.
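Resolving a decoded unique identifier to its per-paper layout, as paragraph [0023] describes, might look like the following. The registry table, the identifier format, and the record fields are stand-ins invented for this sketch; the question order matches the paragraph's 10-question example.

```python
# Sketch of mapping a scanned unique identifier ([0023]) to the paper's
# question order and student. The registry is a stand-in for a database;
# identifier format and field names are assumptions.

PAPER_REGISTRY = {
    "wm-8f3a": {"student_id": "S-1042",
                "question_order": [3, 7, 1, 2, 9, 10, 8, 4, 6, 5]},
}

def resolve_paper(unique_id):
    """Return the student and a printed-position -> question-number map,
    used to re-associate captured answers with the original questions."""
    record = PAPER_REGISTRY[unique_id]
    position_to_question = {pos + 1: q
                            for pos, q in enumerate(record["question_order"])}
    return record["student_id"], position_to_question

student, mapping = resolve_paper("wm-8f3a")
print(student, mapping[1], mapping[6])  # -> S-1042 3 10
```

With this map, the answer marked at printed position 6 on this paper is recorded against original question 10.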

[0024] Associate unique identifier instructions 138 may associate the set of responses with the unique identifier. The unique identifier may be used to associate the responses with a particular student. For example, each test paper may have a different identifier even when the questions appear in the same order. This identifier may be associated with a particular student's name and/or student identifier. For example, OCR may be used to recognize the student's written name on the paper. In some implementations, only the unique identifier may be used during assessment and scoring by the instructor in order to anonymize the responses and prevent grading bias. The unique identifier and student name may be associated without being visible, such as by storing the relationship in a database, such that the grades, comments, and any other assessments may be provided to the student.

[0025] FIG. 2A is an illustration of an example machine-readable code comprising a matrix code 210.

[0026] FIG. 2B is an illustration of an example machine-readable code comprising a bar code 220.

[0027] FIG. 2C is an illustration of an example machine-readable code comprising a watermark 230.

[0028] FIG. 3A is an illustration of an example generated test 300. Generated test 300 may comprise a plurality of different question types, such as a multiple choice question 310, a free-form answer question 315, a short answer question 320 with a pre-defined answer area 325, such as may be used for a sketch or to show work, and an essay question 330. Generated test 300 may further comprise a machine-readable code 335 comprising a unique identifier. Machine-readable code 335 may be displayed anywhere on the page and may comprise multiple machine-readable codes, such as a small bar or matrix code at each corner and/or a watermark associated with one, some, and/or all of the questions. Generated test 300 may further comprise a name block 340.

[0029] In some implementations, name block 340 may be omitted when a student identifier is already assigned to the generated test 300. The student identifier may, for example, be encoded into machine-readable code 335. In some implementations, name block 340 may be scanned along with the answered questions, and the student's name and/or other information may be extracted and associated with the answers.

[0030] FIG. 3B is an illustration of an example completed test 350. Completed test 350 may comprise a marked multiple choice answer bubble 355, a free-form answer 360, a short answer 365, a sketch / work response 370, an essay answer 375, and a completed name block 380. Completed test 350 may also comprise the machine-readable link 335 comprising the test's unique identifier.

[0031] Capture response instructions 132 may, for example, recognize the bubbles for multiple choice responses by retrieving a stored position on the page layout. For example, a stored question may have a known number of possible multiple choice answers (e.g., four: A, B, C, and D). The position for a bubble associated with each possible answer may be stored in an absolute location (e.g., relative to a corner and/or other fixed position on the page) and/or a relative location (e.g., relative to the associated question text and/or question number). For example, the position for the bubble for choice A may be defined as 100 pixels over from the side of the page and 300 pixels down from the top of the page. The position for the bubble for choice B may be defined as 200 pixels over from the side of the page and 300 pixels down from the top. In some implementations, B's bubble may be defined relative to A's bubble, such as 100 pixels right of the bubble for choice A. Such positions may be stored when the page layout for the test is generated, and/or the page may be scanned when the answers are submitted and the positions of the bubbles stored as they are recognized (such as by an OCR process).
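The stored bubble positions in paragraph [0031] can be sketched as a small table: choice A at an absolute (x, y), each later choice a fixed offset right of the previous one. The coordinates match the paragraph's example; the data layout and names are assumptions.

```python
# Sketch of the per-question bubble-position table described in [0031]:
# choice A at an absolute page coordinate, later bubbles spaced relative
# to it. The generator function itself is an assumed convenience.

def bubble_positions(first_x=100, first_y=300, spacing=100, choices="ABCD"):
    """Return an absolute (x, y) for each answer bubble of one question,
    with each bubble 'spacing' pixels right of the previous one."""
    return {c: (first_x + i * spacing, first_y) for i, c in enumerate(choices)}

positions = bubble_positions()
print(positions["A"], positions["B"])  # -> (100, 300) (200, 300)
```

A capture pass would look up the marked coordinate in this table to decide which choice the student selected.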

[0032] The recognition process may use multiple passes to identify marked and/or unmarked multiple choice answer bubbles. For example, a scanner may detect any markings of an expected bubble size (e.g., 80-160% of a known bubble size based on pixel width). The scanner may then perform an analysis of each detected potential bubble to detect whether the bubble has been filled in by comparing the colors and isolating filled circles (or other regular and/or irregular shapes) and/or markings (e.g., crosses). In some implementations, a marked bubble may be detected when a threshold number of pixels of the total number of pixels in the answer bubble have been marked. For example, marked multiple choice answer 355 has a bubble that has been approximately 90% filled in, which may be determined to be a selection of that response.
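The two-pass bubble check in paragraph [0032] reduces to two predicates: a size filter (80-160% of the expected bubble width) and a fill-ratio threshold. The 50% ratio threshold and the expected width below are assumed values for the sketch; the 90% example follows the FIG. 3B discussion.

```python
# Sketch of the two-pass bubble recognition in [0032]: first size-filter
# candidate regions, then test the filled-pixel ratio. The 50% ratio
# threshold and the expected width are assumptions.

EXPECTED_WIDTH = 20  # known bubble width in pixels (assumed)

def plausible_bubble(width, expected=EXPECTED_WIDTH):
    """Pass 1: is the region 80-160% of the known bubble width?"""
    return 0.8 * expected <= width <= 1.6 * expected

def is_marked(filled_pixels, total_pixels, ratio_threshold=0.5):
    """Pass 2: has a threshold fraction of the bubble's pixels been marked?"""
    return filled_pixels / total_pixels >= ratio_threshold

print(plausible_bubble(24))   # -> True  (within 80-160% of 20 px)
print(plausible_bubble(40))   # -> False (too large)
print(is_marked(90, 100))     # -> True  (~90% filled, as in FIG. 3B)
```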

[0033] FIG. 4 is a flowchart of an example method 400 for providing question assessment consistent with disclosed implementations. Although execution of method 400 is described below with reference to device 100, other suitable components for execution of method 400 may be used.

[0034] Method 400 may begin in stage 405 and proceed to stage 410 where device 100 may capture a set of responses associated with a printed plurality of questions, wherein the plurality of questions comprise a plurality of question types. Question types may comprise, for example, multiple choice, essay, short answer, free-form, mathematical, sketch, etc. For example, capture response instructions 132 may capture a set of responses to a plurality of questions, wherein the set of responses comprises at least one free-form response. Capture response instructions 132 may, in some implementations, recognize a plurality of markup styles associated with a multiple choice type question. For example, a multiple choice response style may comprise a whole and/or partially filled-in circle, an X and/or other marking on the answer and/or the circle associated with the answer, and/or circling the answer.

[0035] Capture response instructions 132 may, for example, detect the pen/pencil marks that have been added to the responses by differentiating between the layout of the question before and after the responses have been written in. A pixel-by-pixel comparison, for example, may compare a color value for each relative pixel to determine if new writing has been added. A white pixel may read as a hex value of #FFFFFF, while a grey pixel (representing a pencil mark in this example) may read as a hex value of #474747. These values are only examples, as numerous other values may be represented, as the detection may rely on a threshold difference in the values to determine that a mark has been made. In some implementations, larger sample areas than a single pixel may be compared, such as by averaging the color values of the area and comparing between the before and after layouts. Once areas of writing have been detected, they may be assembled into shapes, such as by connecting marked pixels into an 'X' or circle shape and then identifying the relative location of the shape to associate that shape with a particular answer. Comparison of pixel value differences is offered as an example only, and other methods of scanning and detection of markings on the responses are contemplated.

[0036] In some implementations, capturing the responses may comprise scanning the printed plurality of questions, recognizing a layout of each of the plurality of questions, and capturing a response in a response area associated with each of the plurality of questions. Capturing the response in the response area associated with each of the plurality of questions may comprise recognizing at least one printed indicator of the response area for at least one of the questions. For example, the boundary lines of pre-defined answer area 325 may be used to limit the area scanned for a response to question 320.

[0037] Method 400 may then advance to stage 415 where device 100 may associate the set of responses with a person according to a unique identifier encoded in a machine-readable code associated with the printed plurality of questions. For example, scan link instructions 134 may scan a machine-readable link comprising a unique identifier associated with the plurality of questions. The unique identifier may identify a student associated with the responses and/or may provide layout information for the test. For example, the unique identifier may specify that of 10 possible questions, the associated test presented the questions in the order 3, 7, 1, 2, 9, 10, 8, 4, 6, 5. This may be used to retrieve and/or recreate the layout of the unmarked questions to aid in comparison and detection of the response markings.

[0038] The captured questions may be associated with a machine-readable code of the unique identifier. The machine-readable code may comprise, for example, a bar code, a matrix code, a text string, and a watermark. The machine-readable code may be visible to a person, such as a large bar code, and/or may not be readily visible, such as a translucent watermark and/or a set of steganography dots. The code may be used to identify the selected questions, a class period, a student, and/or additional information. In some implementations, the code may be added in multiple sections, such as a small matrix code at one and/or more of the corners of the page.

[0039] Associate unique identifier instructions 136 may associate the set of responses with the unique identifier. The unique identifier may be used to associate the responses with a particular student. For example, each test paper may have a different identifier even when the questions appear in the same order. This identifier may be associated with a particular student's name and/or student identifier. For example, OCR may be used to recognize the student's written name on the paper. In some implementations, only the unique identifier may be used during assessment and scoring by the instructor in order to anonymize the responses and prevent grading bias. The unique identifier and student name may be associated without being visible, such as by storing the relationship in a database, such that the grades, comments, and any other assessments may be provided to the student.

[0040] Method 400 may then advance to stage 420 where device 100 may compare a first response of the set of responses to an answer key to determine whether the first response of the set of responses comprises a correct response. In some implementations, capture response instructions 132 may further compare at least one response of the set of responses to an answer key of correct responses. For example, once a filled-in circle has been identified and located next to answer choice B, the correct answer for the question may be retrieved and compared. If the correct answer is B, then the question may be scored as correct; otherwise the question may be scored as incorrect. In some implementations, the correct answer may be displayed next to the captured answer for verification by an instructor. For example, for a short answer response, the text of the response may be displayed next to an expected answer. In other examples, stored answer keywords may be compared to the captured response, such as via optical character recognition (OCR). The keywords may be used to mark the response as correct or incorrect, and/or may be used to highlight appropriate words in the response to aid an instructor when reviewing the responses. For example, certain names may be highlighted in a history essay response.

[0041] Upon detection of a correct and/or incorrect response, an indication of the correctness may be provided. For example, capture response instructions 132 may provide a printout and/or display of all scored responses and/or an indication of which response should have been entered. For another example, capture response instructions 132 may provide a count of correct and/or incorrect responses.

[0042] Method 400 may then advance to stage 425 where device 100 may receive an analysis of a second response of the set of responses. For example, device 100 may display one of the questions and the captured response from one and/or a plurality of students. An instructor may review the displayed responses via a user interface and provide analysis, feedback, and/or assessment. For example, the instructor may use grading software to mark a response as correct or incorrect and/or to provide comments on the response. The provided analysis may be stored, such as in a database, and presented to the student, such as via email, display on a screen, and/or printout. In some implementations, the user interface may display each response to a first question of the plurality of questions in a random order. For example, the user interface may display each student's response to question 2 in succession and/or at least partially simultaneously (e.g., multiple responses at once). The responses may be displayed in a randomized order rather than in the order received, or sorted by identifier, name, and/or otherwise. The responses may be displayed in an anonymized fashion, absent an identification of the person associated with the set of responses. In some implementations, no identifiers may be shown, such that no indication is given that the same user submitted any two particular responses. In other implementations, the unique identifier (or other consistent identifier) may be displayed such that an instructor may know that different responses are associated with the same student without knowing which student that is.
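The randomized, anonymized per-question review described in paragraph [0042] can be sketched as below: all responses to one question, keyed only by each paper's unique identifier, shuffled before display. The seed, identifier format, and function name are assumptions for the sketch.

```python
# Sketch of the randomized review order in [0042]: responses to a single
# question are shown in shuffled order, identified only by the paper's
# unique code. A fixed seed is used so the shuffle is reproducible.
import random

def review_order(responses, seed=7):
    """responses: {unique_id: answer_text}; return shuffled (id, text)
    pairs, hiding any received/alphabetical ordering from the grader."""
    items = sorted(responses.items())      # deterministic starting order
    random.Random(seed).shuffle(items)     # randomized display order
    return items

order = review_order({"wm-1": "4", "wm-2": "5", "wm-3": "4"})
print([uid for uid, _ in order])
```

Only the unique identifiers are shown, so the instructor can tell responses apart without learning which student wrote each one.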

[0043] In some implementations, the comparisons and/or received analyses may be aggregated, via a plurality of determinations of whether the set of responses are correct, into a score for the person. For example, a particular student's set of responses may comprise five multiple choice answers, of which four were determined to be correct by comparison, and five short-answer responses, of which four were determined to be correct according to assessments received from the instructor. These evaluations may thus be aggregated into a total score of 8/10 correct. In some implementations, different questions may be stored as having different weights. For example, short answer questions may count twice as much as multiple choice, such that 4/5 correct short answer responses effectively count as 8/10 possible points to be added to the 4/5 correct multiple choice answers before calculating a final score.
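The weighted aggregation in paragraph [0043] works out as follows: with short answers weighted twice as heavily, 4/5 correct short answers contribute 8 of 10 possible points, added to the 4/5 multiple choice points. The result structure and function name below are assumptions for the sketch.

```python
# Sketch of the weighted score aggregation in [0043]: per-type correct
# counts combined under per-type weights. Names and the dict shapes are
# assumed; the numbers match the paragraph's example.

def aggregate_score(results, weights):
    """results: {qtype: (num_correct, num_total)}; weights: {qtype: points
    per question}. Return (points earned, points possible)."""
    earned = sum(correct * weights[t] for t, (correct, total) in results.items())
    possible = sum(total * weights[t] for t, (correct, total) in results.items())
    return earned, possible

earned, possible = aggregate_score(
    {"multiple_choice": (4, 5), "short_answer": (4, 5)},
    {"multiple_choice": 1, "short_answer": 2})
print(f"{earned}/{possible}")  # -> 12/15
```

With equal weights of 1, the same inputs reduce to the paragraph's unweighted 8/10 total.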

[0044] Method 400 may then end at stage 450.

[0045] FIG. 5 is a block diagram of an example system 500 for providing question assessment. System 500 may comprise a computing device 510 comprising an extraction engine 520, a scoring engine 525, and a display engine 530. Engines 520, 525, and 530 may be associated with a single computing device 510 and/or may be communicatively coupled among different devices, such as via a direct connection, bus, or network. Each of engines 520, 525, and 530 may comprise hardware and/or software associated with computing devices.

[0046] Extraction engine 520 may extract a set of responses associated with a plurality of questions from a printed layout of the plurality of questions, wherein the plurality of questions comprise a plurality of question types, and associate the set of responses with a person according to a unique identifier encoded in a machine-readable code associated with the printed plurality of questions.

[0047] In some implementations, extraction engine 520 may capture a set of responses to a plurality of questions, wherein the set of responses comprises at least one free-form response. Extraction engine 520 may, in some implementations, recognize a plurality of markup styles associated with a multiple choice type question. For example, a multiple choice response style may comprise a whole and/or partially filled-in circle, an X and/or other marking on the answer and/or the circle associated with the answer, and/or circling the answer.

[0048] Extraction engine 520 may, for example, detect the pen/pencil marks that have been added to the responses by differentiating between the layout of the question before and after the responses have been written in. A pixel-by-pixel comparison, for example, may compare a color value for each relative pixel to determine if new writing has been added. A white pixel may read as a hex value of #FFFFFF, while a grey pixel (representing a pencil mark in this example) may read as a hex value of #474747. These values are only examples, as numerous other values may be represented, and the detection may rely on a threshold difference in the value to determine that a mark has been made. In some implementations, larger sample areas than a single pixel may be compared, such as by averaging the color values of the area and comparing between the before and after layouts. Once areas of writing have been detected, they may be assembled into shapes, such as by connecting marked pixels into an "X" or circle shape, and then identifying the relative location of the shape to associate that shape with a particular answer. Comparison of pixel value differences is offered as an example only, and other methods of scanning and detection of markings on the responses are contemplated.
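The threshold-based pixel comparison described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name, the grayscale representation, and the threshold value of 64 are all assumptions.

```python
# Hypothetical sketch of threshold-based mark detection: compare the
# blank layout against the scanned, marked-up page and flag pixels
# whose grayscale values differ by more than a threshold.

def detect_marks(blank, marked, threshold=64):
    """Return (row, col) coordinates where the marked page differs
    from the blank layout by more than `threshold` (0-255 grayscale)."""
    marks = []
    for r, (blank_row, marked_row) in enumerate(zip(blank, marked)):
        for c, (b, m) in enumerate(zip(blank_row, marked_row)):
            if abs(b - m) > threshold:
                marks.append((r, c))
    return marks

# A white pixel (0xFF) against a pencil-grey pixel (0x47) differs by
# 184, well over the threshold, so it is detected as new writing.
blank_page  = [[0xFF, 0xFF], [0xFF, 0xFF]]
marked_page = [[0xFF, 0x47], [0xFF, 0xFF]]
print(detect_marks(blank_page, marked_page))  # [(0, 1)]
```

The detected coordinates could then be clustered into shapes ("X" or circle) and mapped to answer locations, as the paragraph describes.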

[0049] The questions may be stored in a question database associated with a teaching/instructional application. Such questions and their layout may be retrieved to compare to the marked-up version to aid in capturing the responses. For example, an instructor may enter the questions in an app on their tablet and/or smart device, through a web-based user interface, through an application on a desktop or laptop, etc. Each question may comprise the actual display information of the question (text, figures, drawings, references, tables, etc.), a question type (e.g., short answer, multiple choice, sketch, essay, etc.), and/or any constraint rules, as described above. For multiple-choice type questions, the answer choices may also be entered. The question type may then be used to define an amount of space needed on a page. For example, a multiple choice question may require two lines for the question, an empty space line, and a line for the list of possible answers. For free-form and/or essay type questions, the instructor may enter a recommended amount of answer space (e.g., three lines, half a page, a full page, etc.). The instructor/teacher may also enter the correct answers and/or keywords into the application for later grading.
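A stored question record and the type-driven space calculation described above might look like the following sketch. The field names, the `lines_needed` heuristic, and the example questions are illustrative assumptions, not the patent's schema.

```python
# Illustrative sketch of question records driving page-layout space
# allocation by question type (all names are assumptions).
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    qtype: str                 # e.g. "multiple_choice", "short_answer"
    choices: tuple = ()        # answer choices for multiple-choice
    answer_lines: int = 0      # instructor-recommended answer space

def lines_needed(q: Question) -> int:
    """Estimate page lines: two lines for the question text, an empty
    space line, then either the choice list or the recommended
    free-form answer space."""
    if q.qtype == "multiple_choice":
        return 2 + 1 + 1       # question, spacer, one line of choices
    return 2 + 1 + q.answer_lines

mc = Question("Capital of France?", "multiple_choice", ("Paris", "Lyon"))
sa = Question("Explain photosynthesis.", "short_answer", answer_lines=3)
print(lines_needed(mc), lines_needed(sa))  # 4 6
```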

[0050] Extraction engine 520 may, for example, scan a machine-readable link comprising a unique identifier associated with the plurality of questions. The unique identifier may identify a student associated with the responses and/or may provide layout information for the test. For example, the unique identifier may specify that of 10 possible questions, the associated test presented the questions in the order 3, 7, 1, 2, 9, 10, 8, 4, 6, 5. This may be used to retrieve and/or recreate the layout of the unmarked questions to aid in comparison and detection of the response markings. The captured questions may be associated with a machine-readable code of the unique identifier. The machine-readable code may comprise, for example, a bar code, a matrix code, a text string, and a watermark. The machine-readable code may be visible to a person, such as a large bar code, and/or may not be readily visible, such as a translucent watermark and/or a set of steganography dots. The code may be used to identify the selected questions, a class period, a student, and/or additional information. In some implementations, the code may be added in multiple sections, such as a small matrix code at one and/or more of the corners of the page.
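Decoding a scanned identifier into the per-test question order could be sketched as below. The payload format (`student|q1,q2,...`) is entirely an assumption for illustration; the patent does not specify an encoding.

```python
# Minimal sketch: decode a hypothetical identifier payload into a
# student ID and the question order used on that printed test, so the
# unmarked layout can be recreated for comparison.

def decode_identifier(code: str):
    """Parse a hypothetical 'student|q1,q2,...' payload."""
    student_id, order = code.split("|")
    return student_id, [int(q) for q in order.split(",")]

student, layout = decode_identifier("S1042|3,7,1,2,9,10,8,4,6,5")
print(student, layout)  # S1042 [3, 7, 1, 2, 9, 10, 8, 4, 6, 5]
```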

[0051] Extraction engine 520 may, for example, associate the set of responses with the unique identifier. The unique identifier may be used to associate the responses with a particular student. For example, each test paper may have a different identifier even when the questions appear in the same order. This identifier may be associated with a particular student's name and/or student identifier. For example, OCR may be used to recognize the student's written name on the paper. In some implementations, only the unique identifier may be used during assessment and scoring by the instructor in order to anonymize the responses and prevent grading bias. The unique identifier and student name may be associated without being visible, such as by storing the relationship in a database, such that the grades, comments, and any other assessments may be provided to the student.
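The anonymized grading flow described above can be sketched with a private mapping table. Identifiers, names, and scores here are made-up illustrative values.

```python
# Sketch of anonymized grading: scoring is recorded only against the
# unique identifiers, and a private table (e.g., a database in the
# actual system) resolves them to students after grading is complete.

identifier_to_student = {"T-0001": "Alice", "T-0002": "Bob"}  # private
grades = {"T-0001": 0.8, "T-0002": 0.9}  # keyed by identifier only

# After grading, results are resolved to students for delivery.
report = {identifier_to_student[t]: g for t, g in grades.items()}
print(report)  # {'Alice': 0.8, 'Bob': 0.9}
```

Because the instructor only ever sees the `T-…` identifiers, the grading step carries no indication of which student wrote which response.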

[0052] Scoring engine 525 may compare a first response of the set of responses to an answer key to determine whether the first response comprises a correct response to a first question of the plurality of questions, and receive, from an instructor, a determination of whether a second response of the set of responses comprises a correct response to a second question of the plurality of questions. In some implementations, scoring engine 525 may compare at least one response of the set of responses to an answer key of correct responses. For example, once a filled-in circle has been identified and located next to answer choice B, the correct answer for the question may be retrieved and compared. If the correct answer is B, then the question may be scored as correct; otherwise the question may be scored as incorrect. In some implementations, the correct answer may be displayed next to the captured answer for verification by an instructor. For example, for a short answer response, the text of the response may be displayed next to an expected answer. In other examples, stored answer keywords may be compared to the captured response, such as via optical character recognition (OCR). The keywords may be used to mark the response as correct or incorrect, and/or may be used to highlight appropriate words in the response to aid an instructor when reviewing the responses. For example, certain names may be highlighted in a history essay response.
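The two scoring paths above — exact answer-key comparison for multiple choice, and keyword matching over OCR'd text for free-form answers — can be sketched as follows. Function names and the sample response are illustrative assumptions.

```python
# Sketch of the two scoring paths: answer-key comparison and keyword
# matching over recognized response text (names are assumptions).

def score_multiple_choice(detected: str, correct: str) -> bool:
    """Compare the detected choice against the answer key."""
    return detected == correct

def keyword_hits(response_text: str, keywords: list) -> list:
    """Return the stored keywords found in the OCR'd response text,
    e.g., for highlighting during instructor review."""
    lowered = response_text.lower()
    return [k for k in keywords if k.lower() in lowered]

print(score_multiple_choice("B", "B"))   # True
print(keyword_hits("Napoleon lost at Waterloo in 1815",
                   ["Napoleon", "Waterloo", "Wellington"]))
# ['Napoleon', 'Waterloo']
```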

[0053] Upon detection of a correct and/or incorrect response, an indication of the correctness may be provided. For example, capture response instructions 132 may provide a printout and/or display of all scored responses and/or an indication of which response should have been entered. For another example, capture response instructions 132 may provide a count of correct and/or incorrect responses.

[0054] In some implementations, scoring engine 525 may receive an analysis of a second response of the set of responses. For example, system 500 may display one of the questions and the captured response from one and/or a plurality of students. An instructor may review the displayed responses via a user interface and provide analysis, feedback, and/or assessment. For example, the instructor may use grading software to mark a response as correct or incorrect and/or to provide comments on the response. The provided analysis may be stored, such as in a database, and presented to the student, such as via email, display on a screen, and/or printout. In some implementations, the user interface may display each response to a first question of the plurality of questions in a random order. For example, the user interface may display each student's response to question 2 in succession and/or at least partially simultaneously (e.g., multiple responses at once). The responses may be displayed in a randomized order or may be displayed in a sorted order, such as in the order received, ordered by identifier, and/or ordered by name. The responses may be displayed in an anonymized fashion, absent an identification of the person associated with the set of responses. In some implementations, no identifiers may be shown such that no indication is given that the same user submitted any two particular responses. In other implementations, the unique identifier (or other consistent identifier) may be displayed such that an instructor may know that different responses are associated with the same student without knowing which student that is.
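The randomized, anonymized review order described above can be sketched as a shuffle over (identifier, response) pairs. The data and names are illustrative; only the consistent unique identifier is kept, never the student's name.

```python
import random

# Sketch of building an anonymized, randomized review queue for one
# question's responses (structure and names are assumptions).

responses = [("T-0001", "Answer A"), ("T-0002", "Answer B"),
             ("T-0003", "Answer C")]

def review_queue(pairs, seed=None):
    """Shuffle the presentation order; each pair carries only the
    unique identifier, so the instructor cannot tell which student
    wrote which response."""
    queue = list(pairs)
    random.Random(seed).shuffle(queue)
    return queue

for ident, text in review_queue(responses, seed=42):
    print(ident, text)
```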

[0055] In some implementations, the comparisons and/or received analyses may be aggregated from a plurality of determinations of whether the set of responses are correct into a score for the person. For example, a particular student's set of responses may comprise five multiple choice answers, of which four were determined to be correct by comparison, and five short-answer responses, of which four were determined to be correct according to assessments received from the instructor. These evaluations may thus be aggregated into a total score of 8/10 correct. In some implementations, different questions may be stored as having different weights. For example, short answer questions may count twice as much as multiple choice, such that 4/5 correct short answer responses effectively count as 8/10 possible points to be added to 4/5 correct multiple choice answers before calculating a final score.
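The weighted aggregation in the example above works out as a minimal sketch like this: 4/5 short answers at double weight contribute 8 of 10 points, added to the 4/5 multiple choice points, for a final tally of 12/15. The function and its argument shapes are illustrative assumptions.

```python
# Sketch of weighted score aggregation across question types.
# results: {qtype: (correct_count, total_count)}
# weights: {qtype: points per question}

def aggregate(results, weights):
    earned = sum(c * weights[t] for t, (c, n) in results.items())
    possible = sum(n * weights[t] for t, (c, n) in results.items())
    return earned, possible

earned, possible = aggregate(
    {"multiple_choice": (4, 5), "short_answer": (4, 5)},
    {"multiple_choice": 1, "short_answer": 2})
print(f"{earned}/{possible}")  # 12/15
```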

[0056] Display engine 530 may display the determination of a correctness of each of the set of responses to the person associated with the plurality of questions. For example, a user interface (such as a web application) may be used to display assessments of correctness for each of the responses and/or an overall grade.

[0057] The disclosed examples may include systems, devices, computer-readable storage media, and methods for question assessment. For purposes of explanation, certain examples are described with reference to the components illustrated in the Figures. The functionality of the illustrated components may overlap, however, and may be present in a fewer or greater number of elements and components. Further, all or part of the functionality of illustrated elements may coexist or be distributed among several geographically dispersed locations. Moreover, the disclosed examples may be implemented in various environments and are not limited to the illustrated examples.

[0058] Moreover, as used in the specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context indicates otherwise. Additionally, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. Instead, these terms are only used to distinguish one element from another.

[0059] Further, the sequence of operations described in connection with the Figures are examples and are not intended to be limiting. Additional or fewer operations or combinations of operations may be used or may vary without departing from the scope of the disclosed examples. Thus, the present disclosure merely sets forth possible examples of implementations, and many variations and modifications may be made to the described examples. All such modifications and variations are intended to be included within the scope of this disclosure and protected by the following claims.

Claims

1. A non-transitory machine-readable storage medium comprising instructions to:
capture a set of responses to a plurality of questions, wherein the set of responses comprises at least one free-form response;
scan a machine-readable link comprising a unique identifier associated with the plurality of questions; and
associate the set of responses with the unique identifier.
2. The non-transitory machine-readable medium of claim 1, wherein the instructions to capture the set of responses to a plurality of questions comprise instructions to recognize a plurality of markup styles associated with a multiple choice type question.
3. The non-transitory machine-readable medium of claim 1 , wherein the instructions to capture the set of responses comprise instructions to perform optical character recognition on at least one of the responses.
4. The non-transitory machine-readable medium of claim 1, further comprising instructions to compare at least one response of the set of responses to an answer key of correct responses.
5. The non-transitory machine-readable medium of claim 4, wherein the instructions to compare at least one response of the set of responses to an answer key of correct responses further comprise instructions to determine whether the at least one response comprises a correct response.
6. The non-transitory machine-readable medium of claim 5, wherein the instructions to determine whether the at least one response comprises a correct response further comprise instructions to provide an indication of whether the at least one response is correct.
7. A computer-implemented method, comprising:
capturing a set of responses associated with a printed plurality of questions, wherein the plurality of questions comprise a plurality of question types;
associating the set of responses with a person according to a unique identifier encoded in a machine-readable code associated with the printed plurality of questions;
comparing a first response of the set of responses to an answer key to determine whether the first response of the set of responses comprises a correct response; and
receiving an analysis of a second response of the set of responses.
8. The computer-implemented method of claim 7, wherein the analysis comprises a determination of whether the second response comprises a correct response.
9. The computer-implemented method of claim 8, further comprising aggregating a plurality of determinations of whether the set of responses are correct into a score for the person.
10. The computer-implemented method of claim 7, wherein the analysis of the second response is received from an instructor via a user interface.
11. The computer-implemented method of claim 10, wherein the user interface displays each response to a first question of the plurality of questions in a random order.
12. The computer-implemented method of claim 10, wherein the user interface displays each response to a first question of the plurality of questions absent an identification of the person associated with the set of responses.
13. The computer-implemented method of claim 7, wherein extracting the set of responses comprises:
scanning the printed plurality of questions;
recognizing a layout of each of the plurality of questions; and
capturing a response in a response area associated with each of the plurality of questions.
14. The computer-implemented method of claim 13, wherein capturing the response in the response area associated with each of the plurality of questions comprises recognizing at least one printed indicator of the response area for at least one of the questions.
15. A system, comprising:
an extraction engine to:
extract a set of responses associated with a plurality of questions from a printed layout of the plurality of questions, wherein the plurality of questions comprise a plurality of question types, and
associate the set of responses with a person according to a unique identifier encoded in a machine-readable code associated with the printed plurality of questions;
a scoring engine to:
compare a first response of the set of responses to an answer key to determine whether the first response comprises a correct response to a first question of the plurality of questions, and
receive, from an instructor, a determination of whether a second response of the set of responses comprises a correct response to a second question of the plurality of questions; and
a display engine to:
display the determinations of correctness of each of the set of responses to the person associated with the plurality of questions.
PCT/US2015/066904 2015-12-18 2015-12-18 Question assessment WO2017105518A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/066904 WO2017105518A1 (en) 2015-12-18 2015-12-18 Question assessment


Publications (1)

Publication Number Publication Date
WO2017105518A1 (en) 2017-06-22

Family

ID=59057259

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/066904 WO2017105518A1 (en) 2015-12-18 2015-12-18 Question assessment

Country Status (1)

Country Link
WO (1) WO2017105518A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040185424A1 (en) * 1997-07-31 2004-09-23 Harcourt Assessment, Inc. Method for scoring and delivering to a reader test answer images for open-ended questions
US20080264701A1 (en) * 2007-04-25 2008-10-30 Scantron Corporation Methods and systems for collecting responses
US20090186327A1 (en) * 2004-07-02 2009-07-23 Vantage Technologies Knowledge Assessment, Llc Unified Web-Based System For The Delivery, Scoring, And Reporting Of On-Line And Paper-Based Assessments
US20100047758A1 (en) * 2008-08-22 2010-02-25 Mccurry Douglas System and method for using interim-assessment data for instructional decision-making
US20150154879A1 (en) * 2007-03-15 2015-06-04 Mcgraw-Hill School Education Holdings Llc Use of a resource allocation engine in processing student responses to assessment items



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15910973

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE