WO2019111274A1 - A system and method for facilitating in evaluating one or more candidates - Google Patents

A system and method for facilitating in evaluating one or more candidates

Info

Publication number
WO2019111274A1
Authority
WO
WIPO (PCT)
Prior art keywords
assessment
evaluator
answer
candidate
candidates
Prior art date
Application number
PCT/IN2018/050791
Other languages
French (fr)
Inventor
Dev Kumar ROY
Sumod K MOHAN
Nrupul DEVINENI
Original Assignee
Digital Aristotle Private Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Digital Aristotle Private Limited filed Critical Digital Aristotle Private Limited
Publication of WO2019111274A1 publication Critical patent/WO2019111274A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/10 Office automation; Time management
    • G06Q 10/105 Human resources
    • G06Q 10/1053 Employment or hiring
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 7/00 Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B 7/02 Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student

Definitions

  • the present invention generally relates to systems and methods for facilitating in evaluation, and more particularly relates to a system and method for facilitating in evaluating one or more candidates.
  • the steps of evaluating include creating assessment(s) by evaluators and grading the assessment response received from the candidates in response to the assessment, in order to ascertain the candidate’s aptitude and knowledge related to varying fields. Further, the evaluator assigns a rank to the candidate based on the awarded grade. The rank indicates the candidate’s performance compared with the rest of the candidates.
  • the assessment is manually created by the evaluator.
  • the evaluator creates the assessment based on one or more assessment parameters such as a curriculum standard, guide book, chapters to cover, difficulty of questions, and by referring to previous assessments for a particular subject, level or area.
  • This process of assessment creation consumes a lot of time, and keeping track of the one or more assessment parameters makes the process error prone.
  • there is a probability of the created questions of the assessment not being relevant to the curriculum standard, subject, level or area.
  • Once the assessment is created, the next task is to evaluate the candidates based on the created assessment.
  • the candidates provide assessment responses for the assessment.
  • the candidate’s assessment responses are typically in a handwritten or typed form, and often require a great deal of time on the part of the evaluator to personally and manually review and grade each candidate. Further, even more effort is required to total the marks of each individual question and this also is an error prone process.
  • the scores of the candidate may vary based on which evaluator corrects the assessment response. The reason being, each evaluator may not have had a process that allows them to come to consensus on what key points are desired in each answer of the assessment response. Therefore, the grading system may not be consistent.
  • the evaluator segregates the candidates based on score cohorts.
  • the score cohort indicates the candidate’s performance with the rest of the candidates who have provided assessment response to the assessment.
  • the evaluators have to manually or electronically enter the scores of each candidate and compute a rank based on the score, which is time consuming, and may lead to errors if not computed correctly, which may be detrimental to the candidate’s performance record and future.
  • One or more embodiments of the present invention provide a system and method for evaluating one or more candidates.
  • a system for evaluating one or more candidates includes a communication device accessible by an evaluator, an image capturing device and a server in communication with the communication device and the image capturing device over a communication network.
  • the server comprises an assessment generator unit to facilitate the evaluator to create an assessment, wherein the assessment includes multiple questions one of selected and customized by the evaluator on the basis of one or more assessment parameters and evaluation history of each of the one or more candidates; a storage unit for receiving one or more assessment responses for the generated assessment from the image capturing device of the one or more candidates, the one or more assessment responses including multiple answers for the generated assessment; a segmentation unit configured to: determine one or more parameters of the assessment response based on at least one character recognition technique, wherein the one or more parameters include candidate identity of the assessment response, multiple characters written in the assessment response and a start point and an end point of each of the multiple answers provided by the one or more candidates for each of the multiple questions; and segment the assessment response into multiple answer strips based on the start point and the end point of each of the multiple answers; an evaluation unit configured to allow the evaluator for comparing the segmented answer strips of the assessment responses with a marking assist tool to assign pre-defined marks, wherein the multiple answer strips corresponding to the multiple questions of each of the one or more candidates are provided in an evaluator defined sequence to the evaluator; and a computing unit configured to: determine an answer score based on the assigned pre-defined marks and a total score based on the answer score of the assessment response for each candidate; and generate a performance report for each candidate based on the answer score and the total score.
  • a method of evaluating one or more candidates at a server including the steps of establishing a communication link between a communication device of an evaluator and the server over a communication network; enabling the evaluator to create an assessment, wherein the assessment includes multiple questions one of selected and customized by the evaluator on the basis of one or more assessment parameters and evaluation history of each of one or more candidates; providing a feedback on the multiple questions one of selected and customized by the evaluator, wherein the feedback indicates relevancy of the multiple questions one of the selected and customized with respect to the assessment parameters; receiving one or more assessment responses for the generated assessment in an image format, the one or more assessment responses including multiple answers for the generated assessment; segmenting each of the one or more assessment responses into multiple answer strips; allowing the evaluator to compare the segmented answer strips of the assessment responses with a marking assist tool to assign pre-defined marks, wherein the multiple answer strips corresponding to the multiple questions of each of the one or more candidates are provided in an evaluator defined sequence to the evaluator; determining an answer score based on the assigned pre-defined marks and a total score based on the answer score of the assessment response for each of the one or more candidates; and generating a performance report for each candidate based on the answer score and the total score.
  • Figure 1 illustrates a block diagram of a system for facilitating in evaluating one or more candidates, according to one or more embodiments of the present invention
  • Figure 2 illustrates an assessment generator unit of figure 1, according to an embodiment of the present invention
  • Figure 3 is an exemplary flowchart of a method for facilitating an evaluator in creating an assessment, according to an embodiment of the present invention
  • Figure 4 is an exemplary illustration of an interface generated by an assessment generator unit for allowing an evaluator to select one or more assessment parameters, according to an embodiment of the present invention
  • Figure 5 is an exemplary illustration of an interface generated by an assessment generator unit for allowing an evaluator in creating an assessment, according to an embodiment of the present invention
  • Figure 6 illustrates a format of an assessment response including provision for one or more candidates to enter candidate identity and answer identity, according to an embodiment of the present invention
  • Figure 7 illustrates a storage unit of figure 1, according to an embodiment of the present invention
  • Figure 8 is an exemplary illustration of contextual correction performed by a segmentation unit utilizing a local order logic, according to one or more embodiments of the present invention.
  • Figure 9 is a format of a look-up table, according to an embodiment of the present invention.
  • Figure 10 is an exemplary illustration of contextual correction performed by a segmentation unit utilizing an intertype logic, according to one or more embodiments of the present invention.
  • Figure 11 is an exemplary illustration of an interface generated by an evaluation unit for allowing an evaluator to evaluate one or more assessment responses, according to one or more embodiments of the present invention.
  • Figure 12 is a flowchart of a method for facilitating in evaluating one or more candidates at a server, according to an embodiment of the present invention.
  • Various embodiments of the invention provide a system and method for evaluating one or more candidates. It is to be understood that, the system and method for evaluating one or more candidates can be utilized in one of, but not limited to, educational, medical and legal domains. Further, the system and method as described hereunder can also be utilized in evaluating the one or more candidates for a particular job in a particular domain.
  • figure 1 illustrates a system 100 for facilitating in evaluating one or more candidates over a communications network.
  • the system 100 includes at least one communication device 110, an image capturing device 120 and a server 130.
  • the server 130 includes an assessment generator unit 132 including a question memory 134, a storage unit 136, a segmentation unit 138, an evaluation unit 140, and a computing unit 142.
  • the communication device 110 is in communication with the server 130.
  • the server 130 is in communication with the image capturing device 120.
  • the system 100 is utilized for evaluating one or more candidates at the server 130.
  • the communication device 110 is accessible by an evaluator.
  • the evaluator is one of, but not limited to, educational professional particularly a teacher, a medical practitioner, a health professional and a legal professional.
  • the evaluator has to register with the server 130 before accessing the server 130.
  • the registration may be one of, but not limited to, providing information regarding, the evaluator’s identity, information of an institution that the evaluator is representing and information on evaluator’s area of specialization.
  • the server 130 transmits an acknowledgement to the communication device 110.
  • the acknowledgment indicating a grant of permission to access the server 130.
  • the assessment generator unit 132 of the server 130 facilitates the evaluator to create the assessment.
  • the assessment generator unit 132 includes the question memory 134.
  • the question memory 134 stores multiple questions on the basis of, but not limited to, multiple topics, curriculum standard, chapters, and evaluation history of the one or more candidates as shown in figure 2.
  • the question memory 134 also includes a marking assist tool.
  • the marking assist tool is utilized as an answer rubric.
  • the marking assist tool is utilized by the evaluator to assess answers more consistently.
  • Figure 3 illustrates an exemplary flowchart of a method 300 for facilitating the evaluator in creating an assessment based on one or more assessment parameters as selected by the evaluator.
  • the one or more assessment parameters are also referred to as assessment blueprint.
  • figure 4 illustrates an exemplary interface generated by the assessment generator unit 132 for allowing the evaluator to select one or more assessment parameters.
  • the one or more assessment parameters is one of, but not limited to, chapter, topic, curriculum standard and grade. It is to be understood that, sequence of steps of the method 300 for facilitating an evaluator in creating the assessment may vary, and may not be limited to the steps of the method 300:
  • the assessment generator unit 132 allows the evaluator to select a chapter of interest. For instance, let us consider, the evaluator selects, “science” as the chapter of interest from a plurality of chapters which include, but not limited to, English, mathematics and science. Further, the evaluator is allowed to select a topic within the chapter as shown in figure 4.
  • the assessment generator unit 132 generates one or more curriculum standards based on curricular content of interest.
  • the evaluator is allowed to select one of the curriculum standards from the one or more curriculum standards.
  • the assessment generator unit 132 automatically generates the relevant curriculum standard based on the selected chapter and topic.
  • the curriculum standard is one of, but not limited to, Blooms Taxonomy Model, difficulty level and curriculum standards as set by various educational boards.
  • the curriculum standard indicates the assessment format for a particular grade as set by the educational boards for a chapter of interest. For instance, for evaluating a seventh grade in the field of science, the educational board would have set a curriculum standard indicating the assessment format.
  • the assessment format may include, set of questions which form part of one of, but not limited to, MCQ questions, short answer questions and essay questions. Accordingly, a pre-assigned question score will be allotted to each question.
  • the assessment generator unit 132 generates multiple questions based on the selected curriculum standard as shown in figure 5. For instance the evaluator selects the one or more assessment parameters, which is as mentioned below:
  • the assessment generator unit 132 first generates questions in the MCQ format as shown in figure 5.
  • the assessment generator unit 132 allows the evaluator to one of select and customize the multiple questions based on the generated multiple questions. As shown in figure 5, the assessment generator unit 132 allows the evaluator to drag and drop the questions from the question memory 134, thereby facilitating in creating the assessment. Each of the selected and/or customized questions will be pre-assigned the question score. Further, once the questions are selected and/or customized by the evaluator, the assessment generator unit 132 generates a feedback. The feedback indicates relevancy of the questions one of the selected and/or customized by the evaluator with respect to the selected assessment parameters. Let us consider that the evaluator intends to customize one essay question.
  • the one customized question created by the evaluator is “what is exothermic reaction?”.
  • the assessment generator unit 132 checks for the customized question at the question memory 134.
  • the question memory 134 includes multiple questions arranged based on one of, but not limited to, chapters, topics, one or more curriculum standards, question types and evaluation history of the one or more candidates as shown in figure 2.
  • the assessment generator unit 132 compares the customized question with the multiple questions in the question memory 134 based on the assessment parameters as selected by the evaluator, which is as mentioned below:
  • the feedback generated indicates that the customized question is relevant with respect to the assessment parameters. It is to be understood that the percentage of matching, i.e. matching at least in part, should be at least 50 percent. However, if the customized question does not match at least in part with the multiple questions at the question memory 134, based on the assessment parameters selected by the evaluator, then the feedback indicates that the customized question is not relevant with respect to the assessment parameters.
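As an illustration only (the patent does not spell out the matching algorithm), a relevancy check of this kind could be approximated by word-level overlap between the customized question and the questions already stored under the selected assessment parameters. The function names below, such as `is_relevant`, are placeholders and not part of the disclosure.

```python
import re

def tokens(s: str) -> set:
    """Lower-case word tokens of a question, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", s.lower()))

def overlap(customized: str, stored: str) -> float:
    """Fraction of the customized question's words found in a stored question."""
    c, s = tokens(customized), tokens(stored)
    return len(c & s) / len(c) if c else 0.0

def is_relevant(customized: str, stored_questions: list, threshold: float = 0.5) -> bool:
    """Positive feedback when the best match reaches the 50-percent level described above."""
    return any(overlap(customized, q) >= threshold for q in stored_questions)

# Stored questions are assumed to be pre-filtered by the selected chapter,
# topic and curriculum standard before this comparison takes place.
stored = ["Define an exothermic reaction and give one example.",
          "Explain how heat is transferred by conduction."]
print(is_relevant("what is exothermic reaction?", stored))  # True
```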
  • the evaluator is allowed to amend the selected and/or customized questions based on the generated feedback.
  • the assessment generator unit generates an assessment based on the selected and/or customized questions by the evaluator.
  • the assessment generator unit 132 provides a sample assessment format based on the selected one or more assessment parameters.
  • the assessment format indicates the type and number of questions that need to be selected from the multiple questions, such that, the assessment created is relevant with respect to the selected assessment parameters.
  • the storage unit 136 of the server 130 receives one or more assessment responses for the generated assessment from the image capturing device 120 in an image format.
  • the one or more assessment responses belong to one or more candidates.
  • the assessment response includes multiple answers provided by the candidate in response to the questions of the generated assessment as shown in figure 6.
  • the image capturing device 120 is one of, but not limited to, a scanner. Image of the assessment response is captured by the image capturing device 120 and transmitted to the storage unit 136 of the server 130 over the communications network.
  • the segmentation unit 138 of the server 130 fetches the one or more assessment responses from the storage unit 136.
  • the segmentation unit 138 is configured to determine one or more parameters of the assessment response based on at least one character recognition technique.
  • the one or more parameters include candidate identity of the assessment response, answer identity, multiple characters written in the assessment response and a start point and an end point of each of the multiple answers provided by the one or more candidates in response to each of the multiple questions.
  • the segmentation unit 138 firstly aligns an image of the one or more assessment responses based on a plurality of identifiers present on the assessment response.
  • the segmentation unit 138 utilizes an estimated homography matrix to correct the misalignment of the assessment response based on the plurality of identifiers present on the assessment response. In an embodiment of the invention, there are at least four identifiers which are non-collinear and well separated on the assessment responses. In case the assessment response cannot be aligned, the segmentation unit 138 notifies the evaluator or an administrator that a fresh image of the assessment response has to be captured by the image capturing device 120.
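For illustration, a minimal alignment step of the kind described could be written with OpenCV, assuming the four identifier marks have already been located in the scan (for example by template matching) and their positions on the blank answer-sheet template are known. This is a sketch, not the patented implementation.

```python
import cv2
import numpy as np

def align_response(scan, detected_marks, template_marks, out_size):
    """Warp the scanned assessment response onto the template frame.

    detected_marks: 4x2 array of identifier positions found in the scan.
    template_marks: 4x2 array of the same identifiers on the reference sheet.
    out_size: (width, height) of the template image.
    """
    H, _ = cv2.findHomography(np.asarray(detected_marks, dtype=np.float32),
                              np.asarray(template_marks, dtype=np.float32))
    if H is None:
        return None  # alignment failed: a fresh image has to be captured
    return cv2.warpPerspective(scan, H, out_size)
```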
  • each of the assessment responses has a plurality of cells located at certain specific locations of the assessment response.
  • the cells located at the specific locations are meant for the candidate to provide information such as one of, but not limited to, candidate identity and answer identity corresponding to the question of the assessment.
  • the cells may be of various shapes depending on the specific location.
  • the segmentation unit 138 utilizes one of, but not limited to, location of the cells, shape of the cell and relative intensity of pixels within initial estimated shape of the cell in detecting the cells. Once the cells are detected, the segmentation unit 138 correlates the detected cell with the specific location in which it is contained, in determining candidate identity and answer identity.
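A minimal sketch of that cell handling is given below, assuming the response image has already been aligned and that the cell coordinates are fixed by the sheet design; all coordinates, names and the intensity threshold are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

# Assumed cell locations on the template: the candidate-identity cell at the
# top-left and answer-identity cells along the left margin (illustrative only).
CELL_BOXES = {
    "candidate_id": (40, 30, 300, 60),   # (x, y, width, height)
    "answer_id_1": (20, 220, 60, 40),
    "answer_id_2": (20, 640, 60, 40),
}

def crop_cells(aligned: np.ndarray) -> dict:
    """Cut each identity cell out of the aligned grayscale response image so
    its contents can be passed to the character recogniser."""
    return {name: aligned[y:y + h, x:x + w] for name, (x, y, w, h) in CELL_BOXES.items()}

def looks_filled(cell: np.ndarray, dark_threshold: float = 200.0) -> bool:
    """A cell is treated as containing writing when its mean intensity is
    darker than blank paper (threshold is an assumption)."""
    return float(cell.mean()) < dark_threshold
```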
  • Figure 6 illustrates a format of the assessment response including provisions such as cells for providing candidate identity and answer identity information.
  • the candidate identity is determined based on location of the cell.
  • the cell located on top left corner of the assessment response may be meant for the candidate to provide information of the candidate identity, which is one of, but not limited to, name of the candidate and roll number of the candidate.
  • the cells provided on left side of the assessment response may be meant for providing the answer identity adjacent to the answer for the corresponding question in the assessment as shown in figure 6.
  • the answer identity is one of, but not limited to, answer number.
  • the candidate identity and the answer identity may include certain characters which may form part of a confusing group of characters, as learnt by the segmentation unit 138 over a period of time.
  • the confusing group of characters is one of, but not limited to, ‘1, 2, 3, 7, 8’.
  • the character ‘1’ can be wrongly recognized as the character ‘7’,
  • the character ‘2’ can be wrongly recognized as ‘3’ or ‘7’,
  • the character ‘3’ can be wrongly recognized as ‘2’ or ‘8’,
  • the character ‘7’ can be wrongly recognized as ‘1’, and
  • the character ‘8’ can be wrongly recognized as ‘2’ or ‘3’.
  • when the segmentation unit 138 recognizes one of the characters as ‘1’, ‘2’, ‘3’, ‘7’ or ‘8’, the segmentation unit 138 does not come to the conclusion that the recognized characters are actually ‘1’, ‘2’, ‘3’, ‘7’ or ‘8’ respectively; instead, the segmentation unit 138 utilizes a plurality of contextual correction logics to ensure that the recognized characters are correct. Examples wherein the segmentation unit 138 utilizes the contextual correction logics are explained hereunder:
  • Figure 8 is an exemplary illustration of contextual correction performed by the segmentation unit 138 utilizing a local order logic.
  • the local order logic is utilized for various situations. Let us consider a first situation, wherein the segmentation unit 138 recognizes that the answer numbers of the assessment response are in a sequential order. Based on this, as referred in figure 8, the segmentation unit 138 recognizes the current answer number in a cell as ‘12’.
  • the segmentation unit 138 utilizes a first pre-defined rule of the local order logic.
  • the first pre-defined rule of the local order logic is based on previous and next number/character prediction logic. For the current answer number which has been recognized as ‘12’, the segmentation unit 138 checks the previous answer number, which happens to be ‘12’, and checks the next answer number which follows the current answer number, which happens to be ‘14’. Therefore, based on the local order logic, the segmentation unit 138 predicts the correct answer number of the current answer as ‘13’ instead of ‘12’ and amends the current answer number accordingly.
  • the segmentation unit 138 utilizes a second pre-defined rule of the local order logic, which is as follows:
  • the segmentation unit 138 combines the likelihood of the answer numbers being in order with the likelihood of the predicted answer number being wrong. Based on the combined information, the segmentation unit 138 sets up an optimization problem with associated costs for keeping the order and for replacing the first, most confident prediction of a pair of characters with the second most confident prediction.
  • the cost is an aggregate that totals up the penalty associated with each possible decision. For instance, let us assume that the segmentation unit 138 predicted ‘12’ as the first prediction and ‘13’ as the second prediction. Based on the prediction, the segmentation unit 138 considers two possibilities. The first possibility is that the segmentation unit 138 chooses ‘12’, which would carry a large cost due to the order not being preserved and no associated cost for replacement.
  • The second possibility is that the segmentation unit 138 chooses ‘13’ as the answer number, in which case there would be no cost added due to order, but a slightly greater cost for replacing the ‘2’ with ‘3’.
  • Overall, the total cost of the replacement of each character is based on the likelihood of the two characters being confused by the segmentation unit 138. Therefore, if the characters are easily confused, then the cost of replacement is small, and if they are not easily confused, then the cost is large. The second possibility will therefore have a lower cost than the first possibility, since the cost of replacing ‘2’ with ‘3’ is less than the cost of breaking the order.
  • the segmentation unit 138 performs this optimization jointly for all combinations of characters, such as the answer numbers, instead of one character at a time.
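A toy version of this joint correction is sketched below. It assumes the recogniser returns a ranked list of readings per answer number, uses made-up replacement and order-break costs, and searches exhaustively over all combinations, which is only practical for the handful of answer numbers on one sheet; the patent does not prescribe this particular formulation.

```python
from itertools import product

# Assumed confusable digit pairs (from the list above) and assumed costs.
CONFUSABLE = {("1", "7"), ("7", "1"), ("2", "3"), ("3", "2"), ("2", "7"), ("7", "2"),
              ("3", "8"), ("8", "3"), ("2", "8"), ("8", "2")}
ORDER_BREAK_COST = 10  # heavy penalty whenever consecutive answer numbers do not increase by one

def replace_cost(first: str, alternative: str) -> int:
    """Cheap to swap easily confused digits, expensive otherwise."""
    cost = 0
    for a, b in zip(first, alternative):
        if a != b:
            cost += 1 if (a, b) in CONFUSABLE else 5
    return cost

def correct_sequence(predictions: list) -> list:
    """predictions[i] is the recogniser's ranked list of readings for the i-th answer number.
    Returns the jointly cheapest assignment (exhaustive search, fine for one answer sheet)."""
    best, best_cost = None, float("inf")
    for choice in product(*predictions):
        cost = sum(replace_cost(ranked[0], picked) for ranked, picked in zip(predictions, choice))
        numbers = [int(c) for c in choice]
        cost += sum(ORDER_BREAK_COST for a, b in zip(numbers, numbers[1:]) if b != a + 1)
        if cost < best_cost:
            best, best_cost = list(choice), cost
    return best

# The worked example above: previous number '12', current read as '12' (second guess '13'), next '14'.
print(correct_sequence([["12"], ["12", "13"], ["14"]]))  # ['12', '13', '14']
```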
  • when the segmentation unit 138 recognizes the name of the candidate and the roll number of the assessment response in the cells contained therein, the segmentation unit 138 utilizes the inter-type logic of the plurality of contextual correction logics to validate the recognized information based on cross verification.
  • the segmentation unit 138 validates the recognized information by cross-verifying with a look up table (LUT) as shown in figure 9.
  • the LUT is a table comprising a list of names of the candidates corresponding to the roll numbers who have submitted the assessment responses.
  • the segmentation unit 138 looks up the name of the candidate that was predicted and identifies the corresponding roll number.
  • FIG. 10 is an exemplary illustration of contextual correction of candidate identity performed by the segmentation unit 138 utilizing an intertype logic. Let us consider, the segmentation unit 138 predicted the candidate name as ‘Aeron’ in the respective box and the corresponding roll number as ‘02’. Pursuant to the recognition, the segmentation unit 138 validates the recognized information by cross verifying with the look up table.
  • the segmentation unit 138 assumes that the recognized information is wrong, since the roll number of the closest name in the database, ‘Aaron’, as per the look up table of figure 9, is ‘01’ and not ‘02’. Given two types of information, i.e. the name and roll number of the candidate, from two sources, i.e. the prediction by the segmentation unit 138 and the information from the look up table, the segmentation unit 138 determines the smallest change to either or both information types, i.e. the candidate name and the roll number, such that a matching pair is found in the Look Up Table (LUT).
  • the segmentation unit 138 amends the name to ‘Aaron’ from ‘Aeron’ in the assessment response and the roll number to ‘01’ from ‘02’.
  • the cost of replacing each type of information is different for each source (name vs. roll number) and different for each character of each type of information, i.e. ‘a’ can be replaced with ‘e’ at a lower cost, but not ‘a’ with ‘x’.
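The inter-type check can be approximated as below, with a plain Levenshtein edit distance standing in for the per-character, per-type replacement costs that the text leaves unspecified; the weights and the LUT contents are assumptions.

```python
def levenshtein(a: str, b: str) -> int:
    """Standard edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def resolve_identity(name: str, roll: str, lut: dict,
                     name_weight: float = 1.0, roll_weight: float = 2.0):
    """Return the LUT entry (name -> roll number) reachable with the smallest
    weighted total change from the recognised pair."""
    return min(lut.items(),
               key=lambda kv: name_weight * levenshtein(name.lower(), kv[0].lower())
                              + roll_weight * levenshtein(roll, kv[1]))

lut = {"Aaron": "01", "Amanda": "02", "Liam": "03"}
print(resolve_identity("Aeron", "02", lut))  # ('Aaron', '01')
```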
  • the LUT is provided at the storage unit 136 as shown in figure 7.
  • the segmentation unit 138 determines the start point and end point of each answer of the assessment response.
  • the start point is beginning of an answer, which is adjacent to the answer number present in the cell.
  • the end point is the beginning of the next answer, which is adjacent to another answer number present in another cell. Further, the start point and end point of each answer are determined based on line spacing between two answers.
  • one or more parameters of the assessment responses are determined based on at least one character recognition technique.
  • the character recognition technique is one of, but not limited to, Convolution Neural Networks, Recurrent Neural Networks, Support Vector Machines and K-nearest Neighbors. Further, if confidence of the recognized character is below a threshold, the characters have to be further segmented into two pieces using a modified seam carving inspired algorithm. It is to be understood that the character recognition techniques as mentioned above are well known in the art.
  • the segmentation unit 138 determines existing sequence of the multiple answers in the assessment response.
  • the existing sequence of the answers is the sequence of answers provided by the candidate in the assessment response.
  • the assessment response has three answers in the sequence of 01, 02 and 03 as the respective answer numbers. Therefore, the segmentation unit 138 determines the existing sequence of answers based on the answer number; herein the sequence is 01, 02 and 03.
  • Once the segmentation unit 138 determines the existing sequence of the answers in the assessment response, the answers are segmented into a plurality of answer strips. Each answer strip includes an answer. The segmentation unit 138 segments the answers into answer strips based on one of, but not limited to, the start point and end point of each answer.
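As a sketch of the strip segmentation, assuming the start row of each answer (the row of its answer-number cell) has already been determined as described above; the pixel coordinates in the usage comment are invented for illustration.

```python
import numpy as np

def cut_answer_strips(aligned: np.ndarray, answer_starts: list) -> dict:
    """answer_starts: (answer number, start row) pairs in top-to-bottom order.
    Each strip ends where the next answer begins; the last one runs to the page bottom."""
    strips = {}
    boundaries = answer_starts[1:] + [(None, aligned.shape[0])]
    for (answer_no, start), (_, end) in zip(answer_starts, boundaries):
        strips[answer_no] = aligned[start:end, :]
    return strips

# Example with the three answers 01, 02 and 03 mentioned above:
# strips = cut_answer_strips(scan, [("01", 220), ("02", 640), ("03", 1010)])
```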
  • the segmented answer strips are stored in the storage unit 136 as shown in figure 7.
  • the evaluation unit 140 of the server 130 allows the evaluator to define a sequence in which the answers need to be displayed for the evaluator to correct them.
  • the sequence herein is called as the evaluator defined sequence.
  • evaluation unit 140 arranges the answer strips accordingly.
  • the evaluator defined sequence is based on one of, but not limited to, a candidate wise and a question wise defined sequence. The evaluator defined sequence is explained with an example. Let us consider the evaluator intends to compare the answer strips of each candidate with the marking assist tool.
  • the evaluator has to select an option of candidate wise from options of candidate wise and question wise at the server 130 via the communication device 110.
  • the segmentation unit 138 allows the evaluator to select one particular candidate for evaluation. For instance, the evaluator selects the candidate ‘Amanda’, as shown in figure 11, based on which, the evaluation unit 140 fetches relevant information from the storage unit 136 and arranges all the answer strips belonging to ‘Amanda’. Pursuant to arrangement of the answer strips belonging to ‘Amanda’, the evaluator is allowed to select any one of the answer strips belonging to ‘Amanda’ for evaluation.
  • the evaluator selects the answer strip including the answer number 1 pertaining to question 1, based on which, the relevant marking assist tool stored at the question memory 134 pertaining to question 1 is fetched by the evaluation unit 140 and displayed along with the answer strip of answer number 1 pertaining to question 1.
  • Figure 11 is an exemplary illustration of an interface generated by the evaluation unit 140 for allowing the evaluator to evaluate one or more assessment responses. As shown in figure 11, the answer strip selected by the evaluator, along with the relevant marking assist tool, is displayed by the evaluation unit 140. Further, if the evaluator intends to evaluate one particular question of all the candidates, then the evaluator has to select an option of question wise from the options of candidate wise and question wise at the server 130 via the communication device 110. In response to the evaluator selecting the option of question wise, the evaluation unit 140 allows the evaluator to select one particular question to be evaluated. Based on the selected question, the evaluation unit 140 fetches the answer strips of all the candidates only pertaining to the selected question and arranges the answer strips along with the relevant marking assist tool pertaining to the selected question.
  • the evaluation unit 140 displays the answer strips including answer number 1 of all the candidates pertaining to question 1 along with the relevant marking assist tool.
  • the question wise correction is shown to have better consistency of evaluations since the evaluator has to only remember one question and the corresponding answer at a time.
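A minimal sketch of the candidate wise and question wise arrangement might look as follows, assuming the segmented strips are stored keyed by candidate name and answer number; the key layout and mode names are illustrative, not taken from the patent.

```python
def arrange_strips(strips: dict, mode: str, selected: str) -> list:
    """strips maps (candidate name, answer number) -> answer-strip image.

    mode 'candidate' returns all strips of one candidate in question order;
    mode 'question' returns the strips of one question for every candidate."""
    if mode == "candidate":
        items = [(q, img) for (c, q), img in strips.items() if c == selected]
    elif mode == "question":
        items = [(c, img) for (c, q), img in strips.items() if q == selected]
    else:
        raise ValueError("mode must be 'candidate' or 'question'")
    return sorted(items, key=lambda pair: pair[0])

# e.g. arrange_strips(strips, "candidate", "Amanda")  # all of Amanda's answers
#      arrange_strips(strips, "question", "01")       # answer 1 of every candidate
```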
  • the evaluation unit 140 displays answer strip including an answer number 1 pertaining to question 1 of the candidate‘Amanda’ based upon the selection by the evaluator.
  • the marking assist tool includes a plurality of key points that are required to be checked by the evaluator in the respective answer strip for ascertaining correctness of the answer.
  • the marking assist tool as shown in figure 11 assists the evaluator in determining correctness of the answer pertaining to question 1.
  • the marking assist tool illustrated in figure 11 mentions three key points (shown as steps 1, 2 and 3 respectively) that are required to be present in the answer pertaining to question 1. Each key point is assigned pre-defined marks.
  • the evaluator should assign pre-defined marks to the answer pertaining to a question, if that particular key point is present in the answer. Any person skilled in the art will understand that the pre-defined marks for the key points of the question will not exceed the assigned question score for the question.
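By way of illustration only, the per-answer scoring implied here could be computed as below; the key-point names and mark values are assumed, and the cap simply enforces that the answer score never exceeds the question score.

```python
def answer_score(key_point_marks: dict, ticked_key_points: set, question_score: int) -> int:
    """Sum the pre-defined marks of the key points the evaluator found in the
    answer strip, never exceeding the question's assigned score."""
    total = sum(marks for kp, marks in key_point_marks.items() if kp in ticked_key_points)
    return min(total, question_score)

# e.g. three key points worth one mark each for question 1; two are present in the answer.
print(answer_score({"step 1": 1, "step 2": 1, "step 3": 1}, {"step 1", "step 3"}, 3))  # 2
```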
  • the computing unit 142 determines a total score for the assessment response for each of the candidates. The total score is determined by summing up the answer score for each answer of the assessment response. Once the total score has been determined for each of the assessment responses, the total score will be recorded against the candidate’s name in the look-up table and stored at the storage unit 136. It is to be noted that any person skilled in the art will understand that the total score obtained by the candidate for the assessment response will not exceed the total score assigned to the assessment.
  • the computing unit 142 generates a performance report for each of the candidate.
  • the performance report indicates the candidate’s performance based on one of, but not limited to, question wise, rubric performance, topic wise and chapter wise. Further, the performance report also indicates the candidate’s performance with respect to the rest of the candidates who have submitted the assessment responses.
  • the computing unit 142 determines the candidate’s performance with respect to the rest of the candidates based on the following steps. It is however, to be understood that, sequence of steps for determining candidate’s performance with the rest of the candidates may vary, and may not be limited to the following steps:
  • a skill level is determined for each answer score based on a pre-defined skill level to answer score logic. For instance, if the answer score assigned is 2, then the evaluation unit 140 determines the skill level as 2. Similarly, if the candidate obtains 1 as the answer score, then the skill level will be automatically computed as 1.
  • the skill level of the candidate is determined based on one of, but not limited to, a chapter wise performance, topic wise performance, sub-topic wise performance and concept wise performance.
  • an overall skill level is computed by the computing unit 142, by summing up the skill level computed for each answer.
  • a rank is assigned to the candidate based on the computed total score of each candidate.
  • the candidate obtaining the highest total score will be assigned the highest rank. For instance candidates ‘Aaron, Liam, Mason, Jacob’ have been assigned a total score of 12, 10, 11 and 9 respectively, then the computing unit 142 will assign Aaron first rank, Mason the second rank, Liam the third rank and Jacob the fourth rank respectively.
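A compact sketch of the total-score and ranking computation follows, reproducing the Aaron/Liam/Mason/Jacob example with assumed per-answer scores; only the totals (12, 10, 11 and 9) come from the text.

```python
def rank_candidates(answer_scores: dict) -> list:
    """answer_scores maps candidate name -> list of per-answer scores.
    Returns (rank, name, total) tuples, highest total score ranked first."""
    totals = {name: sum(scores) for name, scores in answer_scores.items()}
    ordered = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return [(rank, name, total) for rank, (name, total) in enumerate(ordered, 1)]

print(rank_candidates({"Aaron": [4, 4, 4], "Liam": [3, 4, 3],
                       "Mason": [4, 3, 4], "Jacob": [3, 3, 3]}))
# [(1, 'Aaron', 12), (2, 'Mason', 11), (3, 'Liam', 10), (4, 'Jacob', 9)]
```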
  • a training source is provided/suggested to the candidate.
  • the training source aids in enhancing the overall skill level of the candidate above the threshold level.
  • the training source provided/suggested to the candidate is one of, but not limited to, a relevant web resource and relevant remediation assessments.
  • the training source is stored at the storage unit 136.
  • the training source is provided to a candidate based on the skill level of each candidate in the assessment. For instance, a plurality of training sources are stored at the storage unit for one or more skill levels.
  • the communication device 110 is one of, but not limited to a computer, mobile phone, desktop, tablet, and personal digital assistant (PDA).
  • the communication device 110 includes a display which is compatible with the server 130 for facilitating the evaluator to view interfaces generated by the server 130.
  • the communication device 110 and the image capturing device 120 communicate with the server 130 over the communications network.
  • the communications network is one of, but not limited to, local area network (LAN), Metropolitan Area Network (MAN), Wide Area Network (WAN), Wireless Network and Inter Network.
  • In figure 12 of an embodiment of the invention, a flowchart of a method 1200 for evaluating one or more candidates at a server 130 is illustrated. It is, however, to be understood that the sequence of the steps of the method may vary, and may not be limited to the steps of the method 1200. Further, it is to be noted that illustrations and examples for the steps of the method 1200 are mentioned above.
  • a communication link is established between the communication device 110 of an evaluator and the server 130 over a communication network.
  • the communication link is established by the evaluator by registering with the server 130.
  • the evaluator is allowed to create an assessment utilizing the assessment generator unit 132.
  • the assessment includes multiple questions one of selected and customized by the evaluator on the basis of one or more assessment parameters.
  • the assessment parameters include a topic selected by the evaluator, a pre-set curriculum standard and evaluation history of each of the one or more candidates.
  • the pre-set curriculum standard is one of, but not limited to, blooms taxonomy model or any other curriculum as prescribed by an educational institution.
  • the evaluator can set the questions based on the evaluation history of each of the one or more candidates.
  • the evaluation history of each of the one or more candidates is stored at the storage unit 136 of the server 130. Keeping the evaluation history of each of the one or more candidates as reference, the evaluator can create an assessment.
  • at step 1206, a feedback is provided by the assessment generator unit 132.
  • the feedback indicates relevancy of the multiple questions one of the selected and customized with respect to the assessment parameters.
  • one or more assessment responses for the generated assessment in an image format are received from the image capturing device 120.
  • the one or more assessment responses including multiple answers for the generated assessment.
  • each of the one or more assessment responses are segmented into multiple answer strips by the segmentation unit 138.
  • an evaluation unit 140 allows the evaluator to compare the segmented answer strips of the assessment responses with a marking assist tool.
  • the multiple answer strips corresponding to the multiple questions of each of the one or more candidates are provided in an evaluator defined sequence to the evaluator.
  • the evaluator defined sequence is one of, but not limited to, candidate wise and question wise defined sequence.
  • a computing unit 142 determines an answer score and a total score based on the answer score of the assessment response for each of the one or more candidates.
  • the computing unit 142 determines a performance report for each of the candidates based on the answer score and the total score.
  • the step 1210 of segmenting each of the one or more assessment responses into multiple answer strips further comprises the steps of: aligning an image of the one or more assessment responses based on a plurality of identifiers present on the one or more assessment responses; determining one or more parameters of the assessment response based on at least one character recognition technique, wherein the one or more parameters include candidate identity of the assessment response, answer identity of each of the multiple answers corresponding to the assessment and a start point and an end point of each of the multiple answers provided by the one or more candidates for each of the multiple questions; and determining a sequence in which answers are provided in the assessment response, and arranging the answers provided in the assessment response as per the evaluator defined sequence.
  • the step 1216 of generating a performance report for each of the candidate based on the answer score and the total score further comprises the steps of: determining a skill level for each of the question score based on a pre-defined skill level to question score logic; computing an overall skill level for each of the candidates based on summing up the skill level determined for each question; providing a rank based on the total score for each of the candidate; and providing a training source to the candidate if the overall skill level is below a threshold level, wherein the training source aids in enhancing the overall skill level of the candidate above the threshold level.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A system for facilitating in evaluating one or more candidates is provided, which includes a communication device, an image capturing device and a server. The server includes an assessment generator unit to facilitate the evaluator to create an assessment, a storage unit for receiving assessment responses for the generated assessment from the image capturing device, a segmentation unit to determine one or more parameters of the assessment response and segment it into multiple answer strips based on one of the one or more parameters, an evaluation unit to allow the evaluator to compare the segmented answer strips of the assessment responses with a marking assist tool to assign pre-defined marks, and a computing unit to determine an answer score and a total score of the assessment response for each candidate and generate a performance report for each candidate.

Description

A SYSTEM AND METHOD FOR FACILITATING IN EVALUATING ONE OR MORE CANDIDATES
FIELD OF THE INVENTION
[0001] The present invention generally relates to systems and methods for facilitating in evaluation, and more particularly relates to a system and method for facilitating in evaluating one or more candidates.
BACKGROUND OF THE INVENTION
[0002] Educators are constantly attempting to assess candidate’s performance. This is important for determining candidates' progress and for determining how to help the candidate learn more efficiently and progress more effectively.
[0003] Various systems and methods are well known in the art for evaluating candidates. The steps of evaluating include creating assessment(s) by evaluators and grading the assessment response received from the candidates in response to the assessment, in order to ascertain the candidate’s aptitude and knowledge related to varying fields. Further, the evaluator assigns a rank to the candidate based on the awarded grade. The rank indicates the candidate’s performance compared with the rest of the candidates.
[0004] Generally, the assessment is manually created by the evaluator. The evaluator creates the assessment based on one or more assessment parameters such as a curriculum standard, guide book, chapters to cover, difficulty of questions, and by referring to previous assessments for a particular subject, level or area. This process of assessment creation consumes a lot of time, and keeping track of the one or more assessment parameters makes the process error prone. Further, there is a probability of the created questions of the assessment not being relevant to the curriculum standard, subject, level or area. There may also be a probability of the format and standard of the assessment varying from one evaluator to another. In view of the same, the format of the assessment may not be uniform.
[0005] Once the assessment is created, the next task is to evaluate the candidates based on the created assessment. Generally, the candidates provide assessment responses for the assessment. The candidate’s assessment responses are typically in a handwritten or typed form, and often require a great deal of time on the part of the evaluator to personally and manually review and grade each candidate. Further, even more effort is required to total the marks of each individual question, and this also is an error prone process.
[0006] Further, the scores of the candidate may vary based on which evaluator corrects the assessment response. The reason being, each evaluator may not have had a process that allows them to come to consensus on what key points are desired in each answer of the assessment response. Therefore, the grading system may not be consistent.
[0007] Subsequent to completion of the assessment creation and awarding scores to the candidates for the assessment response, the evaluator segregates the candidates based on score cohorts. The score cohort indicates the candidate’s performance with respect to the rest of the candidates who have provided assessment responses to the assessment. Again, for this process, the evaluators have to manually or electronically enter the scores of each candidate and compute a rank based on the score, which is time consuming, and may lead to errors if not computed correctly, which may be detrimental to the candidate’s performance record and future.
[0008] Accordingly, it is desired to provide for standardized systems and methods for facilitating in evaluating one or more candidates.
SUMMARY OF THE INVENTION
[0009] One or more embodiments of the present invention provide a system and method for evaluating one or more candidates.
[0010] In one aspect of the invention, a system for evaluating one or more candidates is provided. The system includes a communication device accessible by an evaluator, an image capturing device and a server in communication with the communication device and the image capturing device over a communication network. The server comprises an assessment generator unit to facilitate the evaluator to create an assessment, wherein the assessment includes multiple questions one of selected and customized by the evaluator on the basis of, one or more assessment parameters and evaluation history of each of the one or more candidates; a storage unit for receiving one or more assessment responses for the generated assessment from the image capturing device of the one or more candidates, the one or more assessment responses including multiple answers for the generated assessment; a segmentation unit configured to: determine one or more parameters of the assessment response based on at least one character recognition technique, wherein the one or more parameters include candidate identity of the assessment response, multiple characters written in the assessment response and a start point and an end point of each of the multiple answers provided by the one or more candidates for each of the multiple questions; and segment the assessment response into multiple answer strips based on the start point and the end point of each of the multiple answers; an evaluation unit configured to allow the evaluator for comparing the segmented answer strips of the assessment responses with a marking assist tool to assign pre-defined marks, wherein the multiple answer strips corresponding to the multiple questions of each of the one or more candidates are provided in an evaluator defined sequence to the evaluator; and a computing unit configured to: determine answer score based on the assigned pre-defined marks and a total score based on the answer score of the assessment response for each of the candidate; and generate a performance report for each of the candidate based on the answer score and the total score.
[0011] In another aspect of the invention, a method of evaluating one or more candidates at a server is provided. The method including the steps of establishing a communication link between a communication device of an evaluator and the server over a communication network; enabling the evaluator to create an assessment, wherein the assessment includes multiple questions one of selected and customized by the evaluator on the basis of one or more assessment parameters and evaluation history of each of one or more candidates; providing, a feedback on the multiple questions one of selected and customized by the evaluator, wherein the feedback indicates relevancy of the multiple questions one of the selected and customized with respect to the assessment parameters; receiving one or more assessment responses for the generated assessment in an image format, the one or more assessment responses including multiple answers for the generated assessment; segmenting each of the one or more assessment responses into multiple answer strips; allowing the evaluator to compare the segmented answer strips of the assessment responses with a marking assist tool to assign pre-defined marks, wherein the multiple answer strips corresponding to the multiple questions of each of the one or more candidates are provided in an evaluator defined sequence to the evaluator; determining an answer score based on the assigned pre-defined marks and a total score based on the answer score of the assessment response for each of the one or more candidates; and generating a performance report for each of the candidate based on the answer score and the total score.
[0012] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] Reference will be made to embodiments of the invention, examples of which may be illustrated in the accompanying figures. These figures are intended to be illustrative, not limiting. Although the invention is generally described in the context of these embodiments, it should be understood that it is not intended to limit the scope of the invention to these particular embodiments.
[0014] Figure 1 illustrates a block diagram of a system for facilitating in evaluating one or more candidates, according to one or more embodiments of the present invention;
[0015] Figure 2 illustrates an assessment generator unit of figure 1, according to an embodiment of the present invention;
[0016] Figure 3 is an exemplary flowchart of a method for facilitating an evaluator in creating an assessment, according to an embodiment of the present invention;
[0017] Figure 4 is an exemplary illustration of an interface generated by an assessment generator unit for allowing an evaluator to select one or more assessment parameters, according to an embodiment of the present invention;
[0018] Figure 5 is an exemplary illustration of an interface generated by an assessment generator unit for allowing an evaluator in creating an assessment, according to an embodiment of the present invention;
[0019] Figure 6 illustrates a format of an assessment response including provision for one or more candidates to enter candidate identity and answer identity, according to an embodiment of the present invention;
[0020] Figure 7 illustrates a storage unit of figure 1, according to an embodiment of the present invention;
[0021] Figure 8 is an exemplary illustration of contextual correction performed by a segmentation unit utilizing a local order logic, according to one or more embodiments of the present invention;
[0022] Figure 9 is a format of a look-up table, according to an embodiment of the present invention;
[0023] Figure 10 is an exemplary illustration of contextual correction performed by a segmentation unit utilizing an intertype logic, according to one or more embodiments of the present invention;
[0024] Figure 11 is an exemplary illustration of an interface generated by an evaluation unit for allowing an evaluator to evaluate one or more assessment responses, according to one or more embodiments of the present invention; and
[0025] Figure 12 is a flowchart of a method for facilitating in evaluating one or more candidates at a server, according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0026] Reference will now be made in detail to specific embodiments or features, examples of which are illustrated in the accompanying drawings. Wherever possible, corresponding or similar reference numbers will be used throughout the drawings to refer to the same or corresponding parts. Moreover, references to various elements described herein, are made collectively or individually when there may be more than one element of the same type. However, such references are merely exemplary in nature. It may be noted that any reference to elements in the singular may also be construed to relate to the plural and vice-versa without limiting the scope of the invention to the exact number or type of such elements unless set forth explicitly in the appended claims.
[0027] Various embodiments of the invention provide a system and method for evaluating one or more candidates. It is to be understood that, the system and method for evaluating one or more candidates can be utilized in one of, but not limited to, educational, medical and legal domains. Further, the system and method as described hereunder can also be utilized in evaluating the one or more candidates for a particular job in a particular domain.
[0028] In accordance with an embodiment of the invention, figure 1 illustrates a system 100 for facilitating in evaluating one or more candidates over a communications network. The system 100 includes at least one communication device 110, an image capturing device 120 and a server 130.
[0029] In accordance with an embodiment of the invention, the server 130 includes an assessment generator unit 132 including a question memory 134, a storage unit 136, a segmentation unit 138, an evaluation unit 140, and a computing unit 142.
[0030] In accordance with an embodiment of the invention, the communication device 110 is in communication with the server 130. The server 130 is in communication with the image capturing device 120.
[0031] With reference to figure 1 of an embodiment of the invention, the system 100 is utilized for evaluating one or more candidates at the server 130. The communication device 110 is accessible by an evaluator. The evaluator is one of, but not limited to, an educational professional, particularly a teacher, a medical practitioner, a health professional and a legal professional. The evaluator has to register with the server 130 before accessing the server 130. The registration may be one of, but not limited to, providing information regarding the evaluator’s identity, information of an institution that the evaluator is representing and information on the evaluator’s area of specialization. In response to the registration, the server 130 transmits an acknowledgement to the communication device 110. The acknowledgment indicates a grant of permission to access the server 130.
[0032] Once the evaluator is registered with the server 130, the evaluator is allowed to create an assessment for evaluating one or more candidates. With reference to figure 1 of an embodiment of the invention, the assessment generator unit 132 of the server 130 facilitates the evaluator to create the assessment. The assessment generator unit 132 includes the question memory 134. The question memory 134 stores multiple questions on the basis of, but not limited to, multiple topics, curriculum standard, chapters, and evaluation history of the one or more candidates as shown in figure 2. Further, the question memory 134 also includes a marking assist tool. The marking assist tool is utilized as an answer rubric. The marking assist tool is utilized by the evaluator to assess answers more consistently.
[0033] Figure 3 illustrates an exemplary flowchart of a method 300 for facilitating the evaluator in creating an assessment based on one or more assessment parameters as selected by the evaluator. The one or more assessment parameters are also referred to as assessment blueprint. Further, figure 4 illustrates an exemplary interface generated by the assessment generator unit 132 for allowing the evaluator to select one or more assessment parameters. In accordance with an embodiment of the invention, the one or more assessment parameters is one of, but not limited to, chapter, topic, curriculum standard and grade. It is to be understood that, sequence of steps of the method 300 for facilitating an evaluator in creating the assessment may vary, and may not be limited to the steps of the method 300:
[0034] At step 302, the assessment generator unit 132 allows the evaluator to select a chapter of interest. For instance, let us consider that the evaluator selects "science" as the chapter of interest from a plurality of chapters which include, but are not limited to, English, mathematics and science. Further, the evaluator is allowed to select a topic within the chapter as shown in figure 4.
[0035] At step 304, the assessment generator unit 132 generates one or more curriculum standards based on curricular content of interest. The evaluator is allowed to select one of the curriculum standards from the one or more curriculum standards.
Further, in accordance with an embodiment of the invention, the assessment generator unit 132 automatically generates the relevant curriculum standard based on the selected chapter and topic. The curriculum standard is one of, but not limited to, the Bloom's Taxonomy Model, a difficulty level and curriculum standards as set by various educational boards. The curriculum standard indicates the assessment format for a particular grade as set by the educational boards for a chapter of interest. For instance, for evaluating seventh-grade candidates in science, the educational board would have set a curriculum standard indicating the assessment format. The assessment format may include a set of questions of one or more types, such as, but not limited to, MCQ questions, short answer questions and essay questions. Accordingly, a pre-assigned question score will be allotted to each question.
[0036] At step 306, the assessment generator unit 132 generates multiple questions based on the selected curriculum standard as shown in figure 5. For instance, the evaluator selects the one or more assessment parameters as mentioned below:
Assessment parameters selected by the evaluator:
Chapter of interest: Science;
Topic of interest: Transfer of heat;
Curriculum Standard: Bloom's Taxonomy for grade 7;
Question Types: MCQ, short answer questions and essay questions.
As mentioned above, based on the one or more assessment parameters as selected by the evaluator, multiple questions are generated for each question type. For instance, for the above assessment parameters, the assessment generator unit 132 first generates questions in the MCQ format as shown in figure 5.
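As a purely illustrative sketch (in Python, with hypothetical field names that are not part of the disclosed question memory schema), the generation of questions against the blueprint selected above could resemble the following:

# Minimal sketch of blueprint-driven question selection; field names are illustrative.
QUESTION_BANK = [
    {"qtype": "MCQ", "chapter": "Science", "topic": "Transfer of heat",
     "standard": "Blooms-7", "text": "Which mode of heat transfer needs no medium?", "score": 1},
    {"qtype": "Essay", "chapter": "Science", "topic": "Transfer of heat",
     "standard": "Blooms-7", "text": "Explain conduction with an example.", "score": 5},
]

def generate_questions(bank, blueprint):
    """Return every stored question whose attributes match all blueprint parameters."""
    return [q for q in bank
            if all(q.get(key) == value for key, value in blueprint.items())]

blueprint = {"chapter": "Science", "topic": "Transfer of heat",
             "standard": "Blooms-7", "qtype": "MCQ"}
for q in generate_questions(QUESTION_BANK, blueprint):
    print(q["text"], "-", q["score"], "mark(s)")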
[0037] At step 308, the assessment generator unit 132 allows the evaluator to select and/or customize questions from the generated multiple questions. As shown in figure 5, the assessment generator unit 132 allows the evaluator to drag and drop the questions from the question memory 134, thereby facilitating creation of the assessment. Each of the selected and/or customized questions will be pre-assigned the question score. Further, once the questions are selected and/or customized by the evaluator, the assessment generator unit 132 generates a feedback. The feedback indicates the relevancy of the questions selected and/or customized by the evaluator with respect to the selected assessment parameters. Let us consider that the evaluator intends to customize one essay question. As an example, the one customized question created by the evaluator is "what is exothermic reaction?". In response to the customized question, the assessment generator unit 132 checks for the customized question at the question memory 134. The question memory 134 includes multiple questions arranged based on one of, but not limited to, chapters, topics, one or more curriculum standards, question types and evaluation history of the one or more candidates as shown in figure 2. The assessment generator unit 132 compares the customized question with the multiple questions in the question memory 134 based on the assessment parameters as selected by the evaluator, which are as mentioned below:
Assessment parameters:
Chapter of interest: Science;
Curriculum Standard: Bloom's Taxonomy for grade 7;
Question Type: Essay questions.
If the customized question matches at least in part with any of the multiple questions, based on the assessment parameters selected by the evaluator as mentioned above, then the feedback generated indicates that the customized question is relevant with respect to the assessment parameters. It is to be understood that matching "at least in part" means a match of at least 50 percent. However, if the customized question does not match at least in part with the multiple questions at the question memory 134, based on the assessment parameters selected by the evaluator, then the feedback indicates that the customized question is not relevant with respect to the assessment parameters. The evaluator is allowed to amend the selected and/or customized questions based on the generated feedback. Pursuant to the amendment by the evaluator, if the feedback generated indicates that the selected and/or customized questions are relevant with respect to the assessment parameters, then the assessment is generated and stored at the question memory 134 of the assessment generator unit 132. Further, a total assessment score will be assigned to the assessment by summing the question score of each question of the assessment.

[0038] At step 310, the assessment generator unit 132 generates an assessment based on the questions selected and/or customized by the evaluator.
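For illustration only, one plausible reading of the relevance check in step 308 above is a token-overlap comparison; the disclosure only requires a match of at least 50 percent in part, so the specific token-overlap measure below is an assumption:

# Hypothetical relevance check: a customized question is treated as relevant
# if at least 50% of its word tokens appear in some stored question sharing
# the selected assessment parameters. The token-overlap measure is an assumption.
def tokens(text):
    return set(text.lower().replace("?", "").split())

def is_relevant(custom_question, stored_questions, threshold=0.5):
    custom = tokens(custom_question)
    for stored in stored_questions:
        overlap = len(custom & tokens(stored)) / max(len(custom), 1)
        if overlap >= threshold:
            return True
    return False

stored = ["What is an endothermic reaction?", "Define conduction of heat."]
print(is_relevant("What is exothermic reaction?", stored))  # True: 3 of 4 tokens match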
[0039] In another embodiment of the invention, the assessment generator unit 132 provides a sample assessment format based on the selected one or more assessment parameters. The assessment format indicates the type and number of questions that need to be selected from the multiple questions, such that the assessment created is relevant with respect to the selected assessment parameters.
[0040] In accordance with an embodiment of the invention, the storage unit 136 of the server 130 receives one or more assessment responses for the generated assessment from the image capturing device 120 in an image format. The one or more assessment responses belong to one or more candidates. The assessment response includes multiple answers provided by the candidate in response to the questions of the generated assessment as shown in figure 6.
[0041] In accordance with an embodiment of the invention, the image capturing device 120 is, for example but not limited to, a scanner. An image of the assessment response is captured by the image capturing device 120 and transmitted to the storage unit 136 of the server 130 over the communications network.
[0042] Once the assessment responses of the one or more candidates are received at the storage unit 136, the segmentation unit 138 of the server 130 fetches the one or more assessment responses from the storage unit 136. In accordance with an embodiment of the invention, the segmentation unit 138 is configured to determine one or more parameters of the assessment response based on at least one character recognition technique. The one or more parameters include candidate identity of the assessment response, answer identity, multiple characters written in the assessment response and a start point and an end point of each of the multiple answers provided by the one or more candidates in response to each of the multiple questions. To determine the one or more parameters, the segmentation unit 138 firstly aligns an image of the one or more assessment responses based on a plurality of identifiers present on the assessment response. The segmentation unit 138 utilizes an estimated homography matrix to correct the misalignment of the assessment response based on the plurality of identifiers present on the assessment response. In an embodiment of the invention, there are at least four identifiers which are non-collinear and well separated on the assessment responses. In case the assessment response cannot be aligned, the segmentation unit 138 notifies the evaluator or an administrator that a fresh image of the assessment response has to be captured by the image capturing device 120.
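As a non-limiting sketch, alignment via an estimated homography could be performed with OpenCV as shown below; it assumes the four non-collinear identifiers have already been located in the scan, and the template coordinates are purely illustrative:

import cv2
import numpy as np

# Sketch of alignment via an estimated homography. The identifier positions
# detected in the scan are mapped onto their known template positions.
def align_response(scan, detected_pts, template_pts, template_size):
    src = np.array(detected_pts, dtype=np.float32)   # identifiers found in the scan
    dst = np.array(template_pts, dtype=np.float32)   # where they lie on the template
    H, _ = cv2.findHomography(src, dst, method=cv2.RANSAC)
    if H is None:
        raise ValueError("Alignment failed; request a fresh capture.")
    return cv2.warpPerspective(scan, H, template_size)

# Usage (illustrative coordinates for a 2100 x 2970 pixel template):
# aligned = align_response(cv2.imread("response.png"),
#                          [(102, 95), (1980, 110), (1975, 2880), (98, 2865)],
#                          [(100, 100), (2000, 100), (2000, 2870), (100, 2870)],
#                          (2100, 2970))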
[0043] Further, each of the assessment responses has a plurality of cells located at certain specific locations of the assessment response. The cells located at the specific locations are meant for the candidate to provide information such as one of, but not limited to, candidate identity and answer identity corresponding to the question of the assessment. Further, the cells may be of various shapes depending on the specific location. The segmentation unit 138 utilizes one of, but not limited to, the location of the cells, the shape of the cell and the relative intensity of pixels within the initial estimated shape of the cell in detecting the cells. Once the cells are detected, the segmentation unit 138 correlates the detected cell with the specific location in which it is contained, in determining candidate identity and answer identity. Figure 6 illustrates a format of the assessment response including provisions such as cells for providing candidate identity and answer identity information. The candidate identity is determined based on the location of the cell. For instance, the cell located on the top left corner of the assessment response may be meant for the candidate to provide information of the candidate identity, which is one of, but not limited to, the name of the candidate and the roll number of the candidate. Further, the cells provided on the left side of the assessment response may be meant for providing the answer identity adjacent to the answer for the corresponding question in the assessment as shown in figure 6. The answer identity is one of, but not limited to, the answer number. Once the cells are detected, the segmentation unit 138 removes the boundary of the cell by using a mask for the internal region of the cell and identifies the characters provided by the candidate within the cell. Based on the characters identified, the segmentation unit 138 determines the candidate identity of the assessment response and the answer identity corresponding to the question in the assessment.

[0044] In certain situations, the candidate identity and the answer identity may include certain characters which form part of a confusing group of characters, as learnt by the segmentation unit 138 over a period of time. The confusing group of characters is one of, but not limited to, '1, 2, 3, 7, 8'. For instance, the character '1' can be wrongly recognized as the character '7', the character '2' can be wrongly recognized as '3' or '7', the character '3' can be wrongly recognized as '2' or '8', the character '7' can be wrongly recognized as '1' and the character '8' can be wrongly recognized as '2' or '3'. Therefore, when the segmentation unit 138 recognizes one of the characters as '1', '2', '3', '7' or '8', the segmentation unit 138 does not conclude that the recognized character is actually '1', '2', '3', '7' or '8' respectively; instead, the segmentation unit 138 utilizes a plurality of contextual correction logics to ensure that the recognized characters are correct. Examples wherein the segmentation unit 138 utilizes the contextual correction logics are explained hereunder:
[0045] Figure 8 is an exemplary illustration of contextual correction performed by the segmentation unit 138 utilizing a local order logic. The local order logic is utilized for various situations. Let us consider a first situation, wherein the segmentation unit 138 recognizes that the answer numbers of the assessment response are in a sequential order. As referred to in figure 8, the segmentation unit 138 recognizes the answer number in a cell, herein the current answer number, as '12' of the assessment response. The answer number '12' corresponds to the question '12' of the assessment. Since the characters '1' and '2' of the answer number '12' form part of the confusing group of characters, the segmentation unit 138 utilizes a first pre-defined rule of the local order logic. The first pre-defined rule of the local order logic is based on previous and next number/character prediction logic. For the current answer number which has been recognized as '12', the segmentation unit 138 checks the previous answer number, which happens to be '12', and checks the next answer number which follows the current answer number, which happens to be '14'. Therefore, based on the local order logic, the segmentation unit 138 predicts the correct answer number of the current answer as '13' instead of '12' and amends the current answer number accordingly.
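As a purely illustrative sketch, the first pre-defined rule can be expressed as follows; the exact decision criterion used by the segmentation unit 138 is not limited to this check:

CONFUSABLE = {"1", "2", "3", "7", "8"}

def correct_by_local_order(prev_num, recognized, next_num):
    """If the recognized answer number contains confusable digits and the
    neighbouring answer numbers are exactly two apart, trust the sequence."""
    if any(d in CONFUSABLE for d in str(recognized)) and next_num - prev_num == 2:
        return prev_num + 1
    return recognized

print(correct_by_local_order(12, 12, 14))  # -> 13, as in the example above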
[0046] In accordance with an embodiment of the invention, in a second situation, let us consider that the candidate has written the answers in a non-sequential order. In this situation, the segmentation unit 138 will try to preserve the order, and this could be erroneous, since the candidate has written the answers in a non-sequential order. In view of the same, the segmentation unit 138 utilizes a second pre-defined rule of the local order logic, which is as follows:
The segmentation unit 138 combines information on the likelihood of the answer numbers being in order and the likelihood of the predicted answer number being wrong. Based on the combined information, the segmentation unit 138 sets up an optimization problem with associated costs for keeping the order and for replacing the second most confident pair of characters instead of the first pair. The cost quantifies the penalty associated with each possible decision, and the total cost of a sequence of decisions is the sum of these penalties. For instance, let us assume that the segmentation unit 138 predicted '12' as the first prediction and '13' as the second prediction. Based on the prediction, the segmentation unit 138 considers two possibilities. The first possibility is that, if the segmentation unit 138 chooses '12', it would have a large cost associated due to the order not being preserved and no associated cost for replacement. Further, in the second possibility, let us assume the segmentation unit 138 chose '13' as the answer number; then there would be no cost added due to order, but a slightly greater cost for replacing the '3' with '2'. Overall, the cost of replacing each character is based on the likelihood of confusion of the two characters by the segmentation unit 138. Therefore, if the characters are easily confused, then the cost of replacement is small, and if they are not easily confused, then the cost is large. Therefore, the second possibility will have a lower cost compared to the first possibility, since the cost of replacing '2' with '3' is less than the cost of breaking the order. The segmentation unit 138 does this jointly for all combinations of characters, such as numbers, instead of doing one at a time.
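For illustration only, the joint correction can be sketched as a small search over per-cell candidate predictions; the order penalty and replacement costs below are assumed values, not figures from the disclosure:

from itertools import product

# Each cell carries (candidate_value, replacement_cost) pairs: the top
# prediction costs 0 to keep, an alternative costs more the less it is
# confusable with the top prediction. ORDER_PENALTY is illustrative.
ORDER_PENALTY = 10.0

def best_sequence(cells):
    best, best_cost = None, float("inf")
    for combo in product(*cells):
        values = [value for value, _ in combo]
        replace_cost = sum(cost for _, cost in combo)
        order_cost = sum(ORDER_PENALTY
                         for a, b in zip(values, values[1:]) if b != a + 1)
        if replace_cost + order_cost < best_cost:
            best, best_cost = values, replace_cost + order_cost
    return best

# Recognizer read "11, 12, 12, 14"; the second choice for the third cell is 13.
cells = [[(11, 0.0)], [(12, 0.0)], [(12, 0.0), (13, 1.5)], [(14, 0.0)]]
print(best_sequence(cells))  # -> [11, 12, 13, 14]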
[0047] Further, when the segmentation unit 138 recognizes the name of the candidate and the roll number of the assessment response in the cells contained therein, the segmentation unit 138 utilizes an intertype logic of the plurality of contextual correction logics to validate the recognized information based on cross verification. The segmentation unit 138 validates the recognized information by cross-verifying it with a look-up table (LUT) as shown in figure 9. The LUT is a table comprising a list of names of the candidates, and the corresponding roll numbers, who have submitted the assessment responses. The segmentation unit 138 looks up the name of the candidate that was predicted and identifies the corresponding roll number. If the recognized name of the candidate and the recognized roll number match a pair in the LUT, then the segmentation unit 138 assumes that the recognized name of the candidate and the roll number are correct. If the name of the candidate and the corresponding roll number do not match a pair in the LUT, then the segmentation unit 138 assumes that the recognized information is wrong, and automatically amends the recognized information based on the information provided in the LUT. Figure 10 is an exemplary illustration of contextual correction of candidate identity performed by the segmentation unit 138 utilizing the intertype logic. Let us consider that the segmentation unit 138 predicted the candidate name as 'Aeron' in the respective box and the corresponding roll number as '02'. Pursuant to the recognition, the segmentation unit 138 validates the recognized information by cross-verifying it with the look-up table. Based on the verification, the segmentation unit 138 assumes that the recognized information is wrong, since, as per the look-up table of figure 9, the roll number of the closest name in the database, 'Aaron', is '01' and not '02'. Given two types of information, i.e. the name and the roll number of the candidate, from two sources, i.e. the prediction information by the segmentation unit 138 and the information from the look-up table, the segmentation unit 138 determines the smallest change to either or both of the information types, i.e. the candidate name and the roll number, such that a matching pair is found in the look-up table (LUT). To this effect, the segmentation unit 138 amends the name from 'Aeron' to 'Aaron' in the assessment response and the roll number from '02' to '01'. The cost of replacing each type of information is different for each source (name vs. roll numbers) and different for each character of each type of information, i.e. 'a' can be replaced with 'e' at a lower cost but not 'a' with 'x'. These situations arise when candidates overwrite a character or strike out one character and add a new character.
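As a non-limiting sketch, the intertype check can be expressed as choosing the LUT pair reachable from the recognized pair with the smallest total edit distance; the plain Levenshtein distance below deliberately omits the character-dependent replacement costs described above:

def edit_distance(a, b):
    """Plain Levenshtein distance; the disclosed logic additionally weights
    substitutions (e.g. 'a' vs 'e' cheaper than 'a' vs 'x'), omitted here."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def correct_identity(name, roll, lut):
    """Return the (name, roll) pair in the LUT closest to the recognized pair."""
    return min(lut, key=lambda entry: edit_distance(name, entry[0]) + edit_distance(roll, entry[1]))

LUT = [("Aaron", "01"), ("Amanda", "02"), ("Liam", "03")]
print(correct_identity("Aeron", "02", LUT))  # -> ('Aaron', '01'), as in figure 10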
[0048] In accordance with an embodiment of the invention, the LUT is provided at the storage unit 136 as shown in figure 7.
[0049] In accordance with an embodiment of the invention, once the candidate identity and the answer identity are determined, the segmentation unit 138 determines the start point and end point of each answer of the assessment response. The start point is the beginning of an answer, which is adjacent to the answer number present in the cell. The end point is the beginning of the next answer, which is adjacent to another answer number present in another cell. Further, the start point and end point of each answer are determined based on the line spacing between two answers.
[0050] In accordance with an embodiment of the invention, one or more parameters of the assessment responses are determined based on at least one character recognition technique. The character recognition technique is one of, but not limited to, Convolutional Neural Networks, Recurrent Neural Networks, Support Vector Machines and K-nearest Neighbors. Further, if the confidence of the recognized character is below a threshold, the character image is further segmented into two pieces using a modified seam-carving-inspired algorithm. It is to be understood that the character recognition techniques as mentioned above are well known in the art.
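For illustration only, one of the listed recognizers (K-nearest Neighbors, here via scikit-learn) together with the confidence gate can be sketched as follows; the random training data and the 0.6 threshold are placeholders, not values from the disclosure:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder training data: 200 flattened 8x8 glyph images with digit labels.
rng = np.random.default_rng(0)
X_train = rng.random((200, 64))
y_train = rng.integers(0, 10, 200)
classifier = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

def recognize(glyph, threshold=0.6):
    """Return (label, confidence); a low-confidence glyph is handed back
    (label None) so it can be split further, e.g. by seam carving."""
    probs = classifier.predict_proba(glyph.reshape(1, -1))[0]
    best = int(np.argmax(probs))
    label, confidence = int(classifier.classes_[best]), float(probs[best])
    if confidence < threshold:
        return None, confidence
    return label, confidence

print(recognize(rng.random(64)))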
[0051] In accordance with an embodiment of the invention, once the candidate identity, answer identity, and start point and end points are determined, the segmentation unit 138 determines existing sequence of the multiple answers in the assessment response. The existing sequence of the answers is the sequence of answers provided by the candidate in the assessment response. For instance, the assessment response has three answers in the sequence of 01, 02 and 03 as the respective answer numbers. Therefore, the segmentation unit 138 determines the existing sequence of answers based on the answer number; herein the sequence is 01, 02 and 03.
[0052] Once the segmentation unit 138 determines the existing sequence of the answers in the assessment response, the answers are segmented into a plurality of answer strips. Each answer strip includes an answer. The segmentation unit 138 segments the answers into answer strips based on one of, but not limited to, the start point and end point of each answer.
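As a purely illustrative sketch, cutting answer strips from the aligned page can be expressed as cropping between consecutive start points; the row values are illustrative:

import numpy as np

def cut_strips(page, answer_rows):
    """answer_rows: list of (answer_number, start_row); each strip runs from
    an answer's start row to the next answer's start row (or the page bottom)."""
    strips = {}
    for (number, start), nxt in zip(answer_rows, answer_rows[1:] + [(None, page.shape[0])]):
        strips[number] = page[start:nxt[1], :]
    return strips

page = np.zeros((2970, 2100), dtype=np.uint8)           # aligned grayscale page
strips = cut_strips(page, [(1, 300), (2, 900), (3, 1700)])
print({number: strip.shape for number, strip in strips.items()})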
[0053] In accordance with an embodiment of the invention, the segmented answer strips are stored in the storage unit 136 as shown in figure 7.
[0054] Once the segmented answer strips are stored at the storage unit 136, the evaluation unit 140 of the server 130 allows the evaluator to define a sequence in which the answers need to be displayed for the evaluator to correct them. The sequence herein is called the evaluator defined sequence. In response to the evaluator defined sequence, the evaluation unit 140 arranges the answer strips accordingly. The evaluator defined sequence is based on one of, but not limited to, a candidate wise and a question wise defined sequence. The evaluator defined sequence is explained with an example. Let us consider that the evaluator intends to compare the answer strips of each candidate with the marking assist tool. For this criterion, the evaluator has to select the option of candidate wise from the options of candidate wise and question wise at the server 130 via the communication device 110. In response to the evaluator selecting the option of candidate wise, the evaluation unit 140 allows the evaluator to select one particular candidate for evaluation. For instance, the evaluator selects the candidate 'Amanda', as shown in figure 11, based on which the evaluation unit 140 fetches the relevant information from the storage unit 136 and arranges all the answer strips belonging to 'Amanda'. Pursuant to the arrangement of the answer strips belonging to 'Amanda', the evaluator is allowed to select any one of the answer strips belonging to 'Amanda' for evaluation. For instance, the evaluator selects the answer strip including answer number 1 pertaining to question 1, based on which the relevant marking assist tool stored at the question memory 134 pertaining to question 1 is fetched by the evaluation unit 140 and displayed along with the answer strip of answer number 1 pertaining to question 1. Figure 11 is an exemplary illustration of an interface generated by the evaluation unit 140 for allowing the evaluator to evaluate one or more assessment responses. As shown in figure 11, the answer strip selected by the evaluator along with the relevant marking assist tool is displayed by the evaluation unit 140. Further, if the evaluator intends to evaluate one particular question of all the candidates, then the evaluator has to select the option of question wise from the options of candidate wise and question wise at the server 130 via the communication device 110. In response to the evaluator selecting the option of question wise, the evaluation unit 140 allows the evaluator to select one particular question to be evaluated. Based on the selected question, the evaluation unit 140 fetches the answer strips of all the candidates pertaining only to the selected question and arranges the answer strips along with the relevant marking assist tool pertaining to the selected question. For instance, if the evaluator selects question 1 for evaluation of all the candidates, the evaluation unit 140 displays the answer strips including answer number 1 of all the candidates pertaining to question 1 along with the relevant marking assist tool. The question wise correction is shown to have better consistency of evaluations, since the evaluator only has to remember one question and the corresponding answer at a time. As shown in figure 11, the evaluation unit 140 displays the answer strip including answer number 1 pertaining to question 1 of the candidate 'Amanda' based upon the selection by the evaluator.
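As a non-limiting sketch, the two evaluator defined sequences reduce to simple orderings over the stored strip records; the record fields below are hypothetical:

# Each stored strip record carries the candidate, the question number and the image.
strips = [
    {"candidate": "Amanda", "question": 1, "image": "amanda_q1.png"},
    {"candidate": "Amanda", "question": 2, "image": "amanda_q2.png"},
    {"candidate": "Aaron",  "question": 1, "image": "aaron_q1.png"},
]

def candidate_wise(records, candidate):
    """All strips of one candidate, arranged in question order."""
    return sorted((r for r in records if r["candidate"] == candidate),
                  key=lambda r: r["question"])

def question_wise(records, question):
    """One question across all candidates, arranged in candidate order."""
    return sorted((r for r in records if r["question"] == question),
                  key=lambda r: r["candidate"])

print([r["image"] for r in candidate_wise(strips, "Amanda")])
print([r["image"] for r in question_wise(strips, 1)])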
[0055] In accordance with an embodiment of the invention, the marking assist tool includes a plurality of key points that are required to be checked by the evaluator in the respective answer strip for ascertaining correctness of the answer. The marking assist tool as shown in figure 11 assists the evaluator in determining correctness of the answer pertaining to question 1. The marking assist tool illustrated in figure 11 mentions three key points (shown as steps 1, 2 and 3 respectively) that are required to be present in the answer pertaining to question 1. Each key point is assigned pre-defined marks. The evaluator assigns the pre-defined marks for a key point to the answer pertaining to a question only if that particular key point is present in the answer. Any person skilled in the art will understand that the pre-defined marks for the key points of a question will not exceed the question score assigned to that question.
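For illustration only, the marking assist tool can be sketched as a rubric of key points carrying pre-defined marks, summed and capped at the question score; the key points and marks below are invented for the example:

# Illustrative rubric for question 1: three key points, each worth one mark.
RUBRIC_Q1 = {
    "states the conduction principle": 1,
    "gives a correct everyday example": 1,
    "explains the direction of heat flow": 1,
}
QUESTION_SCORE_Q1 = 3

def answer_score(rubric, ticked_key_points, question_score):
    """Sum the marks of the key points the evaluator ticked, never exceeding
    the pre-assigned question score."""
    return min(sum(rubric[point] for point in ticked_key_points), question_score)

print(answer_score(RUBRIC_Q1,
                   ["states the conduction principle",
                    "gives a correct everyday example"],
                   QUESTION_SCORE_Q1))  # -> 2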
[0056] Further, once the answer score has been determined for each of the multiple answer strips for all the candidates, the computing unit 142 determines a total score for the assessment response of each of the candidates. The total score is determined by summing up the answer score for each answer of the assessment response. Once the total score has been determined for each of the assessment responses, the total score is recorded against the candidate's name in the look-up table and stored at the storage unit 136. It is to be noted that any person skilled in the art will understand that the total score obtained by the candidate for the assessment response will not exceed the total assessment score assigned to the assessment.
[0057] In accordance with an embodiment of the invention, the computing unit 142 generates a performance report for each of the candidates. The performance report indicates the candidate's performance on a basis that is one of, but not limited to, question wise, rubric wise, topic wise and chapter wise. Further, the performance report also indicates the candidate's performance with respect to the rest of the candidates who have submitted the assessment responses. The computing unit 142 determines the candidate's performance with respect to the rest of the candidates based on the following steps. It is, however, to be understood that the sequence of steps for determining the candidate's performance with respect to the rest of the candidates may vary, and is not limited to the following steps:
[0058] Firstly, a skill level is determined for each answer score based on a pre-defined skill level to answer score logic. For instance, if the answer score assigned is 2, then the skill level is determined as 2. Similarly, if the candidate obtains 1 as the answer score, then the skill level is automatically computed as 1.
[0059] In accordance with an embodiment of the invention, the skill level of the candidate is determined based on one of, but not limited to, a chapter wise performance, topic wise performance, sub-topic wise performance and concept wise performance.
[0060] Once the skill level is determined for each answer, an overall skill level is computed by the computing unit 142 by summing up the skill levels computed for all answers.
[0061] A rank is assigned to the candidate based on the computed total score of each candidate. The candidate obtaining the highest total score will be assigned the highest rank. For instance, if candidates Aaron, Liam, Mason and Jacob have been assigned total scores of 12, 10, 11 and 9 respectively, then the computing unit 142 will assign Aaron the first rank, Mason the second rank, Liam the third rank and Jacob the fourth rank.
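As a purely illustrative sketch, the ranking step reduces to sorting candidates by descending total score, reproducing the example above:

# Rank candidates by descending total score.
totals = {"Aaron": 12, "Liam": 10, "Mason": 11, "Jacob": 9}

for rank, name in enumerate(sorted(totals, key=totals.get, reverse=True), start=1):
    print(rank, name, totals[name])
# 1 Aaron 12, 2 Mason 11, 3 Liam 10, 4 Jacob 9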
[0062] Further, if the overall skill level of the candidate is below a threshold skill level, then a training source is provided/suggested to the candidate. The training source aids in enhancing the overall skill level of the candidate above the threshold level.
[0063] In accordance with an embodiment of the invention, the training source provided/suggested to the candidate is one of, but not limited to, a relevant web resource and relevant remediation assessments.
[0064] In accordance with an embodiment of the invention, the training source is stored at the storage unit 136.
[0065] In accordance with an embodiment of the invention, the training source is provided to a candidate based on the skill level of each candidate in the assessment. For instance, a plurality of training sources are stored at the storage unit for one or more skill levels.
[0066] In accordance with an embodiment of the invention, the communication device 110 is one of, but not limited to, a computer, mobile phone, desktop, tablet, and personal digital assistant (PDA).
[0067] In accordance with an embodiment of the invention, the communication device 110 includes a display which is compatible with the server 130 for facilitating the evaluator to view interfaces generated by the server 130.
[0068] In accordance with an embodiment of the invention, the communication device 110 and the image capturing device 120 communicate with the server 130 over the communications network. The communications network is one of, but not limited to, a local area network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a wireless network and the Internet.
[0069] With reference to figure 12 of an embodiment of the invention, a flowchart of a method 1200 for evaluating one or more candidates at a server 130 is illustrated. It is, however, to be understood that the sequence of the steps of the method may vary, and is not limited to the steps of the method 1200. Further, it is to be noted that illustrations and examples for the steps of the method 1200 are mentioned above.
[0070] At step 1202, a communication link is established between the communication device 110 of an evaluator and the server 130 over a communication network. The communication link is established by the evaluator by registering with the server 130.
[0071] At step 1204, the evaluator is allowed to create an assessment utilizing the assessment generator unit 132. The assessment includes multiple questions one of selected and customized by the evaluator on the basis of one or more assessment parameters. The assessment parameters include a topic selected by the evaluator, a pre-set curriculum standard and the evaluation history of each of the one or more candidates. The pre-set curriculum standard is one of, but not limited to, the Bloom's Taxonomy Model or any other curriculum as prescribed by an educational institution. Further, the evaluator can set the questions based on the evaluation history of each of the one or more candidates. The evaluation history of each of the one or more candidates is stored at the storage unit 136 of the server 130. Keeping the evaluation history of each of the one or more candidates as reference, the evaluator can create an assessment.
[0072] At step 1206, a feedback is provided by the assessment generator unit 132 on the multiple questions one of selected and customized by the evaluator, wherein the feedback indicates relevancy of the multiple questions one of the selected and customized with respect to the assessment parameters.
[0073] At step 1208, one or more assessment responses for the generated assessment are received in an image format from the image capturing device 120. The one or more assessment responses include multiple answers for the generated assessment.
[0074] At step 1210, each of the one or more assessment responses is segmented into multiple answer strips by the segmentation unit 138.

[0075] At step 1212, the evaluation unit 140 allows the evaluator to compare the segmented answer strips of the assessment responses with a marking assist tool. The multiple answer strips corresponding to the multiple questions of each of the one or more candidates are provided in an evaluator defined sequence to the evaluator. In accordance with an embodiment of the invention, the evaluator defined sequence is one of, but not limited to, a candidate wise and a question wise defined sequence.
[0076] At step 1214, a computing unit 142 determines an answer score and a total score based on the answer score of the assessment response for each of the one or more candidates.
[0077] At step 1216, the computing unit 142 generates a performance report for each of the candidates based on the answer score and the total score.
[0078] In accordance with an embodiment of the invention, the step 1210 of segmenting each of the one or more assessment responses into multiple answer strips, further comprises the steps of: aligning an image of the one or more assessment responses based on a plurality of identifiers present on the one or more assessment responses; determining one or more parameters of the assessment response based on at least one character recognition technique, wherein the one or more parameters include candidate identity of the assessment response, answer identity of each of the multiple answers corresponding to the assessment and a start point and an end point of each of the multiple answers provided by the one or more candidates for each of the multiple questions; and determining a sequence in which answers are provided in the assessment response, and arranging the answers provided in the assessment response as per the evaluator defined sequence.
[0079] In accordance with an embodiment of the invention, the step 1216 of generating a performance report for each of the candidates based on the answer score and the total score further comprises the steps of: determining a skill level for each answer score based on a pre-defined skill level to answer score logic; computing an overall skill level for each of the candidates based on summing up the skill level determined for each question; providing a rank based on the total score for each of the candidates; and providing a training source to the candidate if the overall skill level is below a threshold level, wherein the training source aids in enhancing the overall skill level of the candidate above the threshold level.
[0080] While aspects of the present invention have been particularly shown and described with reference to the embodiments above, it will be understood by those skilled in the art that various additional embodiments may be contemplated by the modification of the disclosed machines, systems and methods without departing from the scope of what is disclosed. Such embodiments should be understood to fall within the scope of the present invention as determined based upon the claims and any equivalents thereof.

Claims

WE CLAIM:
1. A method for facilitating in evaluating one or more candidates at a server, the method comprising:
establishing a communication link between a communication device of an evaluator and the server over a communication network;
facilitating the evaluator to create an assessment, wherein the assessment includes multiple questions one of selected and customized by the evaluator on the basis of one or more assessment parameters and evaluation history of each of one or more candidates;
providing a feedback on the multiple questions one of selected and customized by the evaluator, wherein the feedback indicates relevancy of the multiple questions one of the selected and customized with respect to the assessment parameters;
receiving one or more assessment responses for the generated assessment in an image format, the one or more assessment responses including multiple answers for the generated assessment;
segmenting each of the one or more assessment responses into multiple answer strips;
allowing the evaluator to compare the segmented answer strips of the assessment responses with a marking assist tool to assign pre-defined marks, wherein the multiple answer strips corresponding to the multiple questions of each of the one or more candidates are provided in an evaluator defined sequence to the evaluator;
determining an answer score based on the assigned pre-defined marks and a total score based on the answer score of the assessment response for each of the one or more candidates; and
generating a performance report for each of the candidates based on the answer score and the total score.
2. The method as claimed in claim 1, wherein the one or more assessment parameters is one of, but not limited to, a topic and chapter, a grade and a pre-set curriculum standard.
3. The method as claimed in claim 2, wherein the curriculum standard is one of, but not limited to, Bloom's Taxonomy Model, difficulty level and a curriculum standard as set by an educational board.
4. The method as claimed in claim 1, wherein segmenting each of the one or more assessment responses into multiple answer strips further comprises: aligning an image of the one or more assessment responses based on a plurality of identifiers present on the one or more assessment responses; determining one or more parameters of the assessment response utilizing at least one character recognition technique, wherein the one or more parameters is one of, but not limited to, candidate identity of the assessment response, answer identity of each of the multiple answers corresponding to the assessment and a start point and an end point of each of the multiple answers provided by the one or more candidates for each of the multiple questions; and
determining a sequence in which answers are provided in the assessment response, and arranging the answers provided in the assessment response as per the evaluator defined sequence.
5. The method as claimed in claim 4, wherein the one or more parameters of the assessment response are determined based on a plurality of contextual correction logics, wherein the plurality of contextual correction logics is one of, but not limited to, local order logic and intertype logic.
6. The method as claimed in claim 1, wherein generating the performance report further comprises:
determining a skill level for each answer score based on a pre-defined skill level to answer score logic;
computing an overall skill level for each of the candidates based on summing up the skill level determined for each question; providing a rank based on the total score for each of the candidates; and
providing a training source to the candidate if the overall skill level is below a threshold level, wherein the training source aids in enhancing the overall skill level of the candidate above the threshold level.
7. The method as claimed in claim 6, wherein the skill level of the candidate is determined based on a chapter wise performance, topic wise performance, sub-topic wise performance, and concept wise performance.
8. The method as claimed in claim 6, wherein the training source provided to the candidate is one of, but not limited to, a relevant web resource, and relevant remediation assessments.
9. A system for evaluating one or more candidates, the system comprising: a communication device accessible by an evaluator;
an image capturing device; and
a server in communication with the communication device and the image capturing device over a communication network, the server comprising:
an assessment generator unit to facilitate the evaluator to create an assessment, wherein the assessment includes multiple questions one of selected and customized by the evaluator on the basis of one or more assessment parameters and evaluation history of each of the one or more candidates;
a storage unit for receiving one or more assessment responses for the generated assessment from the image capturing device of the one or more candidates, the one or more assessment responses including multiple answers for the generated assessment;
a segmentation unit configured to:
determine one or more parameters of the assessment response based on at least one character recognition technique, wherein the one or more parameters include candidate identity of the assessment response, multiple characters written in the assessment response and a start point and an end point of each of the multiple answers provided by the one or more candidates for each of the multiple questions; and segment the assessment response into multiple answer strips based on the start point and the end point of each of the multiple answers;
an evaluation unit configured to allow the evaluator to compare the segmented answer strips of the assessment responses with a marking assist tool to assign pre-defined marks, wherein the multiple answer strips corresponding to the multiple questions of each of the one or more candidates are provided in an evaluator defined sequence to the evaluator; and
a computing unit configured to:
determine an answer score based on the assigned pre-defined marks and a total score based on the answer score of the assessment response for each of the candidates; and
generate a performance report for each of the candidates based on the answer score and the total score.
10. The system as claimed in claim 9, wherein the assessment generator unit further includes a question memory, wherein the questions are arranged based on one of, but not limited to, topic wise, concept wise, subject wise and field wise.
11. The system as claimed in claim 9, wherein the assessment generator unit is further configured to:
provide a feedback on the multiple questions one of selected and customized by the evaluator, wherein the feedback indicates relevancy of the multiple questions one of the selected and customized with respect to the curriculum standard.
12. The system as claimed in claim 9, wherein the segmentation unit is further configured to: align an image of the one or more assessment responses based on a plurality of identifiers present on the one or more assessment responses; and determine a sequence in which answers are provided in the assessment response, and arrange the answers provided in the assessment response as per an evaluator defined sequence set by the evaluator in the assessment response.
13. The system as claimed in claim 9, wherein the computing unit is further configured to:
determine a skill level for each answer score based on a pre-defined skill level to answer score logic;
compute an overall skill level for each of the candidates based on summing up the skill level determined for each question;
provide a rank based on the total score for each of the candidates; and provide a training source to the candidate if the overall skill level is below a threshold level, wherein the training source aids in enhancing the overall skill level of the candidate above the threshold level.