US20170154542A1 - Automated grading for interactive learning applications - Google Patents
- Publication number
- US20170154542A1 (application US15/365,014)
- Authority
- US
- United States
- Prior art keywords
- annotations
- grades
- grading
- student
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
- G06F17/241
- G06F17/28
- G06F40/169—Annotation, e.g. comment data or footnotes
- G06F40/40—Processing or translation of natural language
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
- G09B5/12—Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations, different stations being capable of presenting different information simultaneously
- G09B7/02—Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
Abstract
In various embodiments, grades are assigned to student annotations associated with an educational resource based on grading features derived utilizing human grades of annotations in a training set of annotations.
Description
- This application claims the benefit of and priority to U.S. Provisional Patent Application No. 62/261,387, filed Dec. 1, 2015, U.S. Provisional Patent Application No. 62/261,397, filed Dec. 1, 2015, U.S. Provisional Patent Application No. 62/261,398, filed Dec. 1, 2015, and U.S. Provisional Patent Application No. 62/261,400, filed Dec. 1, 2015, the entire disclosure of each of which is hereby incorporated herein by reference.
- In various embodiments, the present invention relates generally to online learning, and in particular to resources for enhancing and personalizing learning experiences involving an online component.
- As digital textbooks inexorably replace traditional printed media, and online social resources such as discussion boards supplement classroom instruction, teachers and publishers are finding new opportunities for engaging students. Students with access to digital materials may annotate a shared digital version of a class text or video, ask and answer each other's questions, and interact with the teaching staff while reading. The advantages are substantial: instead of waiting days until office hours to get past a conceptual roadblock, students can ask a question at any time and often get a response within minutes. Students' motivation is enhanced through online interactions that enable them to share interests and knowledge.
- In an increasing number of classrooms, reading material assigned as homework is provided in digital format, and students are allowed to highlight a passage and add a comment or question. Other students (and the teaching staff) can see this immediately and can answer questions or add their own comments (in an interaction resembling a Facebook thread). Students stumped by a problem can easily get help with it, whatever the hour, if other students are reading at the same time or soon after. When students are assigned videos, they may now be able to annotate the timeline, with comments and interactions following.
- Research has shown that students who engage in high levels of meaningful online discussion using annotation systems have higher normalized learning gain scores than students who participate just to fulfill basic requirements. Moreover, providing students with incentives to complete the readings thoughtfully and feedback on their annotations helps ensure that students do the assigned readings on time. Overall, when integrated properly into the classroom experience, annotations and their evaluation contribute meaningfully to student learning.
- As annotation systems assume a greater role in learning, students may be graded on their performance—how insightful their questions are, how much they help their fellow students, how engaged they are with the readings, what kinds of contributions they make, etc. Indeed, the success of these new approaches may depend on feedback provided in the form of grades. Unfortunately, expanded opportunities for assessment add to instructors' grading burden; ironically, the more successful the teachers are in encouraging out-of-class learning, the more work they will have.
- Unlike direct assessments such as quizzes or tests, evaluating an annotation requires considerable expertise that cannot readily be automated. But teachers will hesitate to adopt new educational technology if it adds to an already onerous workload. Thus, there is a need for techniques and systems for the evaluation of annotations supplied by participants in online learning scenarios without significant increases to the workload of the instructor and/or other course managers.
- Embodiments of the invention automate critical aspects of student performance evaluation for online resources in a manner more efficient and accurate than a single human evaluator could achieve on his or her own. In various embodiments, systems and methods in accordance with the invention monitor users of an online resource (e.g., readers of an e-book), and grade the users' annotations of the online resource in an automated fashion based on criteria derived from the inputs of multiple human evaluators. As utilized herein, the term “annotation” refers to any feedback supplied by a student in response to and/or associated with an educational resource. Annotations may include, for example, answers to embedded questions, comments related to specific passages of the resource, or both. Systems and methods in accordance with embodiments of the invention may utilize as inputs (1) a large number of predetermined features (i.e., grading criteria) that are automatically and adaptively selected, modified, and combined to produce novel features not previously utilized for grading, (2) optionally, criteria from instructional staff and teaching assistants, and/or (3) judgments from students completing an annotation-based assignment, which may simply be aimed at allowing the students to learn about a specific class, reading, and/or grading rubric.
- In various embodiments of the invention, these inputs are utilized to obtain multiple judgments about each annotation or each set of annotations by a student for an assignment. These multiple judgments are then averaged in order to produce a training set of annotations utilized as the basis for grading of subsequent annotations. The average has much less error and noise than the grades of any single human coder, especially since a human coder, such as the instructor, would need to grade thousands of annotations to keep up with a class over many assignments. In real classrooms, it would be infeasible to assign multiple instructors or teaching staff to code each individual annotation; in most classrooms, it may even be infeasible for each of the thousands of individual annotations to be graded by one instructor. Embodiments of the present invention, trained on the much less noisy average for each annotation in the training set, perform much better in practice than any individual human coder is likely to do or even could do, and also better than any automated algorithm based on a training set constructed in a conventional manner with one (noisy) coder per annotation.
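The noise-reduction effect of averaging can be illustrated with a short, standard-library sketch (the grader scores below are invented for illustration; the rubric scale and "true" quality are assumptions, not from the specification):

```python
from statistics import mean

def composite_grade(grades):
    """Average the grades assigned to one annotation by different
    human graders; the composite is less noisy than a single grade."""
    return mean(grades)

# Hypothetical: the "true" quality of an annotation is 7 on a 0-10
# rubric; each grader's score deviates from it by individual error.
single_grades = [6, 9, 7, 5, 8]          # five independent graders
composite = composite_grade(single_grades)

# The composite lands closer to the true value than the noisiest
# individual grades do.
errors_single = [abs(g - 7) for g in single_grades]
error_composite = abs(composite - 7)
```

With independent grader errors, the standard error of the mean shrinks roughly as the square root of the number of graders, which is why the averaged training labels are so much cleaner than one-coder labels.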
- As utilized herein, the term “class” refers to a gathering of “users,” “participants,” or “students” led by one or more “instructors.” Participants need not be in the same room as each other or the instructor, so classes encompass distance learning situations. In addition, participants need not be students; they might be employees participating in a corporate training event or workshop participants attending an educational workshop. Accordingly, the terms “participant” and “student” are used interchangeably herein, it being understood that the utility of the invention is not limited to students in classroom environments. In addition, the term “instructor” used herein is not limited to a teacher or a professor in the classroom; the “instructor” may be a facilitator in a corporate event or in any group pursuing a pedagogical or intellectual endeavor.
- In an aspect, embodiments of the invention feature an automated grading method for student annotations in an interactive learning application. In a step (a), an interactive educational resource is distributed over a network to a plurality of student devices. In a step (b), an initial set of annotations generated at the student devices in response to the educational resource is received at a server. In a step (c), a plurality of grades for each of the annotations in the initial set of annotations is received at the server, each of the grades being provided by a different human grader. In a step (d), the plurality of grades is averaged to produce an average grade for each of the annotations in the initial set of annotations, at least a portion of the initial set of annotations constituting a training set. In a step (e), portions of annotations within the training set are extracted, thereby producing a plurality of seed features. In a step (f), one or more grading features predictive of the average grades associated with the training set are computationally derived from the seed features. In a step (g), a grade is assigned to a new annotation based on the one or more grading features.
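In simplified form, steps (b) through (g) can be sketched as a single server-side routine. This is an illustrative skeleton only: the helper callables (`extract_seed_features`, `derive_grading_features`, `fit_model`) are hypothetical stand-ins for the embodiments described below, not names from the specification.

```python
from statistics import mean

def grade_new_annotation(initial_annotations, human_grades, new_annotation,
                         extract_seed_features, derive_grading_features,
                         fit_model):
    """Illustrative skeleton of the claimed method. `human_grades` maps
    each annotation in the initial set to the list of grades assigned
    by the different human graders."""
    # Step (d): average the multiple human grades per annotation.
    training_set = [(ann, mean(human_grades[ann]))
                    for ann in initial_annotations]
    # Step (e): extract portions of annotations as seed features.
    seeds = extract_seed_features([ann for ann, _ in training_set])
    # Step (f): derive grading features predictive of the averages.
    grading_features = derive_grading_features(seeds, training_set)
    model = fit_model(grading_features, training_set)
    # Step (g): assign a grade to a new, human-ungraded annotation.
    return model(new_annotation)
```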
- Embodiments of the invention may include one or more of the following in any of a variety of combinations. The new annotation may be ungraded by human graders prior to the grade being assigned based on the one or more grading features. Step (g) may include or consist essentially of using a machine-learning model to predict the grade assigned to the new annotation based on the one or more grading features. The model may be predictive in accordance with a prediction algorithm and may be generated by steps including or consisting essentially of (i) dividing the initial set of annotations into the training set and a testing set, each of the training set and testing set including, consisting essentially of, or consisting of a plurality of annotations and average grades associated therewith, and (ii) identifying the one or more grading features based on predictive reliability in accordance with the prediction algorithm. Grades for one or more annotations within the testing set may be computationally predicted based on the one or more grading features. Parameters of the model may be adjusted, e.g., prior to assigning the grade to the new annotation. Parameters of the model may be adjusted based on the predicted grades for one or more annotations within the testing set. Parameters of the model may be adjusted based on grades assigned by an instructor, e.g., grades assigned by an instructor that override grades predicted by the prediction algorithm. The prediction algorithm may include, consist essentially of, or consist of a classification tree. The prediction algorithm may include, consist essentially of, or consist of a random forest. The random forest may include, consist essentially of, or consist of a plurality of regression trees. At least a portion of the initial set of annotations may be distributed over the network to a plurality of human graders before step (c). 
Access to the educational resource by the student device at which the new annotation was generated may be controlled based at least in part on the grade assigned to the new annotation. The grade assigned to the new annotation may be displayed, e.g., on the student device at which the new annotation was generated, on an instructor device, and/or on a grading device. Step (e) may include, consist essentially of, or consist of applying natural-language processing to annotations within the training set.
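A minimal, standard-library sketch of the kind of natural-language seed-feature extraction described above (annotation length, mean word length, and a plain TF-IDF weighting). The function name and feature keys are illustrative; real embodiments would add stemming, stop-word removal, and part-of-speech tagging.

```python
import math
from collections import Counter

def seed_features(annotations):
    """Compute simple per-annotation seed features: length in words,
    mean word length, and a TF-IDF vector over the annotation corpus."""
    docs = [ann.lower().split() for ann in annotations]
    n_docs = len(docs)
    # Document frequency: number of annotations containing each word.
    df = Counter(word for doc in docs for word in set(doc))
    features = []
    for doc in docs:
        tf = Counter(doc)
        tfidf = {w: (tf[w] / len(doc)) * math.log(n_docs / df[w])
                 for w in tf}
        features.append({
            "length": len(doc),
            "mean_word_len": sum(map(len, doc)) / len(doc),
            "tfidf": tfidf,
        })
    return features

feats = seed_features([
    "the theorem follows from induction",
    "nice post",
    "the induction step uses the base case",
])
```

Note how a subject-matter word appearing in only one annotation ("theorem") receives a higher TF-IDF weight than a common word ("the"), matching the vocabulary-sophistication heuristic.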
- In another aspect, embodiments of the invention feature an education system that includes, consists essentially of, or consists of a plurality of student devices for executing an interactive educational resource received over a network and a server in electronic communication with the student devices. The student devices are configured to receive student annotations of the educational resource and transmit the annotations to the server. The server includes, consists essentially of, or consists of a communication module and an analysis module. The communication module is configured to (i) receive annotations from the student devices, and (ii) receive grades associated with annotations from a plurality of human graders. The analysis module is configured to (i) associate composite grades (e.g., an average of a plurality of grades received from different human graders) with each of an initial set of annotations, at least a portion of the initial set of annotations constituting a training set, (ii) computationally derive one or more grading features predictive of the average grades associated with the training set, and (iii) assign grades to ungraded annotations based on the one or more grading features.
- Embodiments of the invention may include one or more of the following in any of a variety of combinations. The analysis module may be configured to extract portions of annotations within the training set, thereby producing a plurality of seed features. The one or more grading features may be computationally derived from the seed features. The analysis module may use a machine-learning model to predict the grades assigned to the ungraded annotations based on the one or more grading features. The model may be predictive in accordance with a prediction algorithm and may be generated by steps including or consisting essentially of (i) dividing the initial set of annotations into the training set and a testing set, each of the training set and testing set including, consisting essentially of, or consisting of a plurality of annotations and average grades associated therewith, and (ii) identifying the one or more grading features based on predictive reliability in accordance with the prediction algorithm. The analysis module may be configured to (i) computationally predict, based on the one or more grading features, grades for one or more annotations within the testing set, and (ii) adjust parameters of the model based on the predictions, e.g., prior to assigning grades to ungraded annotations. The prediction algorithm may include, consist essentially of, or consist of a classification tree. The prediction algorithm may include, consist essentially of, or consist of a random forest. The random forest may include, consist essentially of, or consist of a plurality of regression trees. At least one of the student devices may include, consist essentially of, or consist of a computer or a handheld device. The communication module may be configured to transmit grades assigned by the analysis module to one or more student devices, to one or more instructor devices, and/or to one or more grading devices. 
The system may include a plurality of grading devices configured to display student annotations of the educational resource, receive grades associated with the student annotations from a human grader, and transmit the grades to the server. At least one of the grading devices may include, consist essentially of, or consist of a computer or a handheld device.
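As a rough illustration of the random-forest idea referenced above (bootstrap-aggregated regression trees whose predictions are averaged), here is a toy version built from one-split regression stumps. It is a didactic, standard-library sketch under invented toy data, not the claimed implementation.

```python
import random
from statistics import mean

def fit_stump(rows):
    """Fit a one-split regression stump on (feature_vector, grade)
    rows: pick the feature/threshold split minimizing squared error."""
    best = None
    for j in range(len(rows[0][0])):
        for x, _ in rows:
            t = x[j]
            left = [y for f, y in rows if f[j] <= t]
            right = [y for f, y in rows if f[j] > t]
            if not left or not right:
                continue
            ml, mr = mean(left), mean(right)
            err = (sum((y - ml) ** 2 for y in left)
                   + sum((y - mr) ** 2 for y in right))
            if best is None or err < best[0]:
                best = (err, j, t, ml, mr)
    if best is None:                  # no useful split: predict the mean
        m = mean(y for _, y in rows)
        return lambda x: m
    _, j, t, ml, mr = best
    return lambda x: ml if x[j] <= t else mr

def random_forest(rows, n_trees=25, seed=0):
    """Bagging: fit each stump on a bootstrap sample of the training
    rows and average the stump predictions, random-forest style."""
    rng = random.Random(seed)
    stumps = [fit_stump([rng.choice(rows) for _ in rows])
              for _ in range(n_trees)]
    return lambda x: mean(s(x) for s in stumps)

# Toy training set: (annotation length in words,) -> composite grade.
train = [((2,), 1.0), ((3,), 1.0), ((10,), 4.0), ((12,), 4.0)]
model = random_forest(train)
```

A production embodiment would use deeper trees over many grading features; the averaging-over-bootstrap-samples structure is the same.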
- These and other objects, along with advantages and features of the present invention herein disclosed, will become more apparent through reference to the following description, the accompanying drawings, and the claims. Furthermore, it is to be understood that the features of the various embodiments described herein are not mutually exclusive and may exist in various combinations and permutations. As used herein, the terms “approximately” and “substantially” mean ±10%, and in some embodiments, ±5%. The term “consists essentially of” means excluding other materials that contribute to function, unless otherwise defined herein.
- In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various embodiments of the present invention are described with reference to the following drawings, in which:
FIG. 1 is a schematic depiction of an educational environment in accordance with various embodiments of the invention;
FIG. 2 is a block diagram of an educational server or system utilized in accordance with various embodiments of the invention; and
FIG. 3 is a flowchart of a technique of automated grading of student annotations in accordance with various embodiments of the invention.
FIG. 1 illustrates an exemplary educational environment 100 in accordance with embodiments of the present invention. As shown, within the environment 100, communication is established, via a network 110, among an instructor 120 utilizing an instructor device 130, various students 140 each utilizing a student device 150, one or more graders 160 each utilizing a grading device 170, and an educational system or server 180. The network 110 may include or consist essentially of, for example, the Internet and/or one or more local-area networks (LANs) or wide-area networks (WANs). The terms “student device,” “instructor device,” and “grading device” as used herein broadly connote any electronic device or system facilitating wired and/or wireless bi-directional communications, and may include computers (e.g., laptop computers and/or desktop computers), handheld devices, or other personal communication devices. Handheld devices include, for example, smart phones or tablets capable of executing locally stored applications and supporting wireless communication and data transfer via the Internet or the public telecommunications infrastructure. Smart phones include, for example, IPHONES (available from Apple Inc., Cupertino, Calif.), BLACKBERRIES (available from RIM, Waterloo, Ontario, Canada), or any mobile phones equipped with the ANDROID platform (available from Google Inc., Mountain View, Calif.); tablets, such as the IPAD and KINDLE FIRE; and personal digital assistants (PDAs). The bi-directional communication and data transfer may take place via, for example, one or more of cellular telecommunication, a Wi-Fi LAN, a point-to-point Bluetooth connection, and/or an NFC communication. -
FIG. 2 depicts a more detailed schematic of the server 180, which includes or consists essentially of a general-purpose computing device whose operation is directed by a computer processor, i.e., central processing unit (CPU) 200. The server 180 includes a network interface 205 that facilitates communication over the network 110, using hypertext transfer protocol (HTTP) or other suitable protocols. For example, the network interface 205 may include or consist essentially of one or more hardware interfaces enabling data communication via network 110, as well as a communications module for sending, receiving, and routing such communications within server 180 (e.g., via system bus 210). The server 180 further includes a bi-directional system bus 210, over which the system components communicate, a main (typically volatile) system memory 215, and a non-volatile mass storage device (such as one or more hard disks and/or optical storage units) 220, which may contain resources, such as digital textbooks and/or other educational resources, that may be delivered to the student devices 150. - The
main memory 215 contains instructions, conceptually illustrated as a group of modules, which control the operation of the CPU 200 and its interaction with the other hardware components. An operating system 225 directs the execution of low-level, basic system functions such as memory allocation, file management, and operation of mass storage devices 220. The operating system 225 may be or include a variety of operating systems, such as the Microsoft WINDOWS operating system, the Unix operating system, the Linux operating system, the Xenix operating system, the IBM AIX operating system, the Hewlett Packard UX operating system, the Novell NETWARE operating system, the Sun Microsystems SOLARIS operating system, the OS/2 operating system, the BeOS operating system, the MACINTOSH operating system, the APACHE operating system, an OPENSTEP operating system, or another operating system or platform. - A resource-
management module 230 is responsible for, e.g., allowing properly authenticated students 140 to access privileged educational resources via their devices 150, and for monitoring the students' interactions with these resources. The resource-management module 230 may also control and facilitate access to educational resources for the instructor 120 via the instructor device 130 and/or for the graders 160 via grading devices 170. It should be understood that resources provided to the student devices 150 need not reside physically within the server 180; the resource-management module 230 may obtain resources from other servers, or direct other servers (e.g., an educational publisher's server) to provide resources to student devices. It should further be understood that the access-control functions of the resource-management module 230 are well known to those skilled in the art of online educational platforms and, more generally, to access control for resources available online or via a private network. - An
analysis engine 235 monitors student interaction with educational resources provided by the server 180 and utilizes human-originated grades of a subset of student-generated content (i.e., annotations) to grade subsequent annotations. As is well known, digital textbooks and similar electronic materials, when opened on a student device 150, may have embedded code that communicates actions (e.g., page turning) or annotations via the device 150 back to the server 180. The annotations are analyzed and grading is performed as described further below. The server 180 performs grading based on a machine-learning or statistical model trained on the average of two or more human grades provided for each annotation in an initial set. The human grades are received from the grading devices 170 and are based on the assessments of the graders 160 of student-generated content (e.g., annotations) supplied to the grading devices 170 after such content is generated by the students 140 via student devices 150. Once developed, the machine-learning model may be utilized by server 180 for the grading of subsequent sets of annotations without the need for further human input (e.g., further grading by human graders 160). - The grades assigned by the grading algorithm may be delivered to students by resource-management module 230 (e.g., via the student's
device 150 as a pop-up message or via another channel, such as the student's mobile phone, wireless tablet, or other device). For example, the server 180 may maintain or have access to a student database 240 containing contact information for each student, including email addresses and phone numbers (e.g., to which text messages may be sent). In various embodiments of the invention, the student database 240 maintains rosters of classes, sections, and students within each class section. The server 180 may even utilize the grades assigned by the grading algorithm, for example, to assign supplementary lessons or other materials to low-performing students 140, and/or to control student access to subsequent lessons. For example, a student 140 might not have the ability to access a subsequent lesson via his or her device 150 until a satisfactory grade is achieved on a particular lesson and/or aspects of a lesson are repeated. In another exemplary embodiment, access to different portions of the educational resource, or even to different educational resources, may be selectively provided to one or more students 140 depending upon performance, e.g., grades assigned to one or more annotations. - In various embodiments of the invention, the
server 180 may also incorporate a discussion hosting server 245 that supports a discussion platform and makes this available to students 140 via their devices 150. The discussion platform may be a server-hosted discussion board that operates autonomously, in the manner of a social-media platform, or may be associated with resources 220. For example, server 245 may perform the functions of resource-management module 230 and facilitate access to resources 220 that have annotation fields into which students 140 may enter comments that server 245 organizes as annotation threads. Server 245 may be part of the main server 180 or may be a separate device. - The
server 180 may also include, in various embodiments of the invention, a repository or database 250 that stores various reports related to the interactions of students 140, the instructor 120, and/or graders 160 with the resources 220 (and/or with content related thereto, such as student annotations). For example, the repository 250 may store grade reports generated by graders 160 or reports for the instructor 120 based on and/or highlighting questions, comments, and/or annotations generated by the students 140. For example, such reports may include links to annotations stored on the discussion server 245. -
FIG. 3 depicts a method 300 for the automated grading of student annotations in accordance with various embodiments of the present invention. In step 305, an educational resource or a portion thereof (e.g., from storage 220) is electronically distributed to one or more student devices 150 via network 110. During use of the educational resource (e.g., reading of one or more passages in an electronic textbook and/or answering questions related to the resource) by the students 140, the students 140 supply annotations related to the resource via their student devices 150. In step 310, those annotations are transmitted to the server 180 via network 110, and in step 315, the server 180 receives grades from human graders via their grading devices 170 based on the annotations received in step 310. For example, an initial set of annotations generated by a class or other group of students 140 (e.g., annotations taken from the first reading assignment for a class) may be graded at least twice utilizing human graders 160. For example, the initial annotations may be graded by graders 160 that include or consist essentially of (1) staff graders, i.e., teaching assistants hand-grading the set of annotations with a research-based rubric, (2) peer graders who, in the process of learning about the scoring rubric that will be used to evaluate their annotations, score a subset of their peers' annotations through a calibration grading exercise (thus, one or more of the graders 160 may also be a student 140), and/or (3) dedicated human graders not enrolled in the class. - In
step 320, composite grades are determined for each of the initial annotations, by the analysis engine 235, based on the grades assigned by each of the different human graders 160, thereby minimizing the noise inherent in the utilization of grades from only a single evaluator. For example, the different grades for the initial annotations may be averaged together to derive a composite grade for each annotation. Given the well-known variability among even expert human graders, averaging generally increases the statistical reliability of the grade. - In
step 325, the set of initial annotations with the composite grades is then divided into a training set and a testing set by the analysis engine 235 for the development of the machine-learning model utilized in embodiments of the present invention. In step 330, the training set is analyzed by the analysis engine 235 to extract seed features from the graded annotations. Typically, these seed features are simple text attributes heuristically associated with the quality of the annotation—e.g., annotation length and vocabulary sophistication (as measured, for example, by word length or a frequency metric such as term frequency-inverse document frequency (TF-IDF), which indicates the importance of a word within an annotation based on its frequency of use among annotations or in written language with subject-matter relevance to the class). The process of identifying seed features may also utilize conventional natural-language processing techniques such as stemming, stop-word removal, and/or part-of-speech tagging. In general, the seed features include or consist essentially of, e.g., words, phrases, portions of words or phrases, and/or one or more predetermined answer choices supplied to students in response to questions (for example, in a multiple-choice or true/false format) from the training set. - In
step 335, one or more grading features are subsequently produced from combinations and/or permutations of the seed features, and those grading features are utilized in step 340 to predict grades for the annotations within the testing set defined in step 325. This is accomplished by training the machine-learning model on the seed features. For example, the initial, human-graded annotations may be used as a training set for a text-analytic regression procedure (e.g., logistic regression, classification tree, random forest classifier, etc.) that constitutes the machine-learning model. More generally, the machine-learning model may be any suitable analytic framework for analyzing text and making predictions based on a training set, including classification and regression trees (CART), neural networks, or other suitable framework. Machine-learning models are well-characterized in the art and may be implemented without undue experimentation. - For example, in various embodiments, the
analysis engine 235 utilizes a random forest classifier as the basis for the machine-learning model. In one embodiment, human coders grade annotations to form a training set, and the grades they assign to each annotation are averaged. The averaged grades serve as category labels for the machine-learning model. Seed features are extracted from the text, and one classifier or, more preferably, an ensemble of classifiers is used to fit the model to predict the category labels from the seed features. The seed features, along with permutations and combinations thereof, form a set of candidate grading features. - The grading features with sufficient predictive reliability against the training set may be selected for use in the model. The predictive reliability of a feature may be deemed sufficient based on, for example, standard error, t value, p value, or another statistical metric; for example, the minimum p value required for a seed feature to qualify as a grading feature may be set at a standard level of 0.01 or less. (The p value reflects the probability that the feature has no predictive value.) Typically, the training set will have 100 or more entries, each reflecting a grade assigned to an annotation.
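The feature and label construction described in steps 330-335 and above can be sketched compactly. This is a minimal illustration, not the claimed implementation: the stop-word list and tokenizer are simplified stand-ins for the stemming and stop-word removal mentioned earlier, and a "combination" of two seed features is modeled, as one assumption, as their product.

```python
import math
from collections import Counter
from itertools import combinations
from statistics import mean

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "is", "in", "it"}

def tokenize(text):
    # Lowercase, strip surrounding punctuation, drop stop words; simple
    # stand-ins for the NLP preprocessing described in the specification.
    words = [w.strip(".,;:!?") for w in text.lower().split()]
    return [w for w in words if w and w not in STOP_WORDS]

def seed_features(annotation, corpus):
    """Simple seed features: length, mean word length, and a TF-IDF score."""
    tokens = tokenize(annotation)
    df = Counter()                      # document frequency of each token
    for doc in corpus:
        df.update(set(tokenize(doc)))
    tf = Counter(tokens)
    # Mean TF-IDF over distinct tokens; higher values suggest rarer,
    # more "sophisticated" vocabulary relative to the other annotations.
    tfidf = [tf[t] / len(tokens) * math.log(len(corpus) / df[t]) for t in tf]
    return {
        "length": float(len(tokens)),
        "mean_word_length": sum(map(len, tokens)) / len(tokens),
        "mean_tfidf": sum(tfidf) / len(tf),
    }

def composite_labels(grades_by_annotation, cutoff=3.0):
    """Average each annotation's human grades, then binarize into labels."""
    return {aid: int(mean(gs) >= cutoff)
            for aid, gs in grades_by_annotation.items()}

def candidate_grading_features(seed):
    """Candidate grading features: the seeds plus their pairwise products."""
    feats = dict(seed)
    for a, b in combinations(sorted(seed), 2):
        feats[f"{a}*{b}"] = seed[a] * seed[b]
    return feats
```

An ensemble classifier would then be fit to predict the composite labels from the candidate features, retaining only those features that prove statistically reliable.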
- Following creation of the model and feature selection using the training set, the performance of the model may be evaluated using the testing set. For example, the false positive and false negative predictions obtained against the testing set may be used to detect overfitting, identify and prune features exhibiting multicollinearity, and set a classification threshold that produces a desired level of sensitivity (true positive rate) and specificity (true negative rate). Features may also be assessed against the “inter-coder” reliability, i.e., the fraction of times the different human graders agree on a grade for a particular annotation. For example, a set of grading features that produces predicted grades having values different from the composite grades by less than a predetermined threshold amount (e.g., ±10%, ±5%, ±2%, etc.) may be selected for subsequent automated grading of student annotations. In this manner, the grades in the testing set may be predicted more accurately than via training based on grades from a single human grader.
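The testing-set evaluation just described reduces to counting the four confusion-matrix cells at a chosen classification threshold, plus an agreement check of predicted grades against composite grades. A minimal sketch, in which the threshold and tolerance values are illustrative assumptions:

```python
def sensitivity_specificity(y_true, scores, threshold):
    """Sensitivity = true-positive rate; specificity = true-negative rate."""
    tp = fp = tn = fn = 0
    for y, s in zip(y_true, scores):
        predicted_high = s >= threshold
        if y == 1 and predicted_high:
            tp += 1
        elif y == 1:
            fn += 1
        elif predicted_high:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

def agreement_rate(predicted, composite, tol=0.10):
    """Fraction of predicted grades within +/- tol of the composite grade."""
    hits = sum(1 for p, c in zip(predicted, composite)
               if abs(p - c) <= tol * c)
    return hits / len(predicted)
```

Raising the threshold trades sensitivity for specificity; a feature set whose agreement rate clears the chosen tolerance (e.g., predictions within ±10% of the composite grades) would be retained for automated grading.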
- As known to those of skill in the art, random forest classifiers operate via the construction of several decision trees based on the training set, and output the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees, thereby correcting for potential overfitting of the training set. Other text-analytic techniques, as noted above, may be utilized by the
analysis engine 235 to determine grading features that predict grades assigned to the annotations in the testing group. As shown in FIG. 3, grades may also be assigned to one or more of the annotations by an instructor 120 or a grader 160. Such human-assigned grades may override grades previously predicted and/or assigned by the machine-learning model and may be utilized, e.g., within the testing set or training set, to further refine the machine-learning model for one or more students 140. - In
step 345, new student annotations are received by the server 180. For example, such annotations may include annotations related to a subsequent exercise or received from a different class of students 140 than those whose annotations were utilized to define the grading features. Using the grading features, the analysis engine 235 assigns grades to such subsequent sets of student annotations in step 350. The grades thus assigned may be displayed, for example, to the instructor 120 (e.g., via instructor device 130) and/or to the individual students 140 (e.g., via their student devices 150) in optional step 355. - The resource-
management module 230 and analysis engine 235 (and, e.g., a communications module within or corresponding to network interface 205) may be implemented by computer-executable instructions, such as program modules, that are executed by a conventional computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Those skilled in the art will appreciate that embodiments of the invention may be practiced with various computer system configurations, including multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. Embodiments of the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer-storage media including memory storage devices. - Any suitable programming language may be used to implement the analytical functions described above without undue experimentation. Illustratively, the programming language used may include assembly language, Ada, APL, Basic, C, C++, C*, COBOL, dBase, Forth, FORTRAN, Java, Modula-2, Pascal, Prolog, Python, REXX, and/or JavaScript, for example. Regression-based models (e.g., logistic regression, classification trees, and random forests) are readily implemented in the R programming language without undue experimentation (using, e.g., the rpart and randomForest libraries), and neural networks may be implemented in Python or MATLAB. Further, it is not necessary that a single type of instruction or programming language be utilized in conjunction with the operation of embodiments of the invention. Rather, any number of different programming languages may be utilized as is necessary or desirable.
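For concreteness, a toy version of such a regression-based grader in Python (one of the languages listed above): a logistic model fit by stochastic gradient descent over numeric grading features, plus the vote-aggregation rule a random forest applies over the outputs of already-trained trees. The data, learning rate, and epoch count are illustrative assumptions, not values from the specification.

```python
import math
from statistics import mean, mode

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(features, labels, lr=0.5, epochs=2000):
    """Fit P(high grade) = sigmoid(w . x + b) by stochastic gradient descent."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            for i, xi in enumerate(x):      # gradient step on the log-loss
                w[i] += lr * (y - p) * xi
            b += lr * (y - p)
    return w, b

def predict(w, b, x):
    """Predicted probability that an annotation earns a high grade."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def forest_predict(tree_outputs, task="classification"):
    """Aggregate tree predictions: modal class, or mean for regression."""
    return mode(tree_outputs) if task == "classification" else mean(tree_outputs)
```

Here each training example is a feature vector (e.g., a single normalized annotation-length value) paired with a 0/1 composite-grade label; production systems would instead use a library implementation such as R's randomForest, as noted above.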
- The
server 180 may also include other removable/nonremovable, volatile/nonvolatile computer storage media. For example, a hard disk drive may read from or write to nonremovable, nonvolatile magnetic media. A magnetic disk drive may read from or write to a removable, nonvolatile magnetic disk, and an optical disk drive may read from or write to a removable, nonvolatile optical disk such as a CD-ROM or other optical media. Other removable/nonremovable, volatile/nonvolatile computer storage media that may be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The storage media are typically connected to the system bus through a removable or non-removable memory interface. - The processing units that execute commands and instructions may be general-purpose processors, but may utilize any of a wide variety of other technologies, including special-purpose hardware, a microcomputer, mini-computer, mainframe computer, programmed microprocessor, microcontroller, peripheral integrated circuit element, a CSIC (customer-specific integrated circuit), ASIC (application-specific integrated circuit), a logic circuit, a digital signal processor, a programmable logic device such as an FPGA (field-programmable gate array), PLD (programmable logic device), PLA (programmable logic array), RFID processor, smart chip, or any other device or arrangement of devices that is capable of implementing the steps of the processes of the invention.
- Communication may occur over the Internet, as illustrated, and/or over an intranet, extranet, Ethernet, the public telecommunications infrastructure, or any other system that provides communications. Suitable communications protocols may include, for example, TCP/IP, UDP, or OSI. For wireless communications, protocols may include Bluetooth, Zigbee, IrDA, or another suitable protocol. Furthermore, components of the system may communicate through a combination of wired and wireless paths.
- The terms and expressions employed herein are used as terms and expressions of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described or portions thereof. In addition, having described certain embodiments of the invention, it will be apparent to those of ordinary skill in the art that other embodiments incorporating the concepts disclosed herein may be used without departing from the spirit and scope of the invention. Accordingly, the described embodiments are to be considered in all respects as only illustrative and not restrictive.
Claims (19)
1. An automated grading method for student annotations in an interactive learning application, the method comprising:
(a) distributing an interactive educational resource over a network to a plurality of student devices;
(b) receiving, at a server, an initial set of annotations generated at the student devices in response to the educational resource;
(c) receiving, at the server, a plurality of grades for each of the annotations in the initial set of annotations, each of the grades being provided by a different human grader;
(d) averaging the plurality of grades to produce an average grade for each of the annotations in the initial set of annotations, at least a portion of the initial set of annotations constituting a training set;
(e) extracting portions of annotations within the training set, thereby producing a plurality of seed features;
(f) computationally deriving, from the seed features, one or more grading features predictive of the average grades associated with the training set; and
(g) assigning a grade to a new annotation based on the one or more grading features.
2. The method of claim 1 , wherein step (g) comprises using a machine-learning model to predict the grade assigned to the new annotation based on the one or more grading features, the model being predictive in accordance with a prediction algorithm and generated by steps comprising:
dividing the initial set of annotations into the training set and a testing set, each of the training set and testing set comprising a plurality of annotations and average grades associated therewith; and
identifying the one or more grading features based on predictive reliability in accordance with the prediction algorithm.
3. The method of claim 2 , further comprising the steps of:
computationally predicting, based on the one or more grading features, grades for one or more annotations within the testing set; and
adjusting parameters of the model prior to assigning the grade to the new annotation.
4. The method of claim 2 , wherein the prediction algorithm is a classification tree.
5. The method of claim 4 , wherein the prediction algorithm is a random forest comprising a plurality of regression trees.
6. The method of claim 1 , further comprising, before step (c), distributing the initial set of annotations over the network to a plurality of human graders.
7. The method of claim 1 , further comprising controlling access to the educational resource by the student device at which the new annotation was generated based at least in part on the grade assigned to the new annotation.
8. The method of claim 1 , further comprising displaying the grade assigned to the new annotation on the student device at which the new annotation was generated.
9. The method of claim 1 , wherein step (e) comprises applying natural-language processing to annotations within the training set.
10. An educational system comprising:
a plurality of student devices for executing an interactive educational resource received over a network, the student devices being configured to receive student annotations of the educational resource and transmit the annotations to a server; and
a server in electronic communication with the student devices, the server comprising:
a communication module configured to (i) receive annotations from the student devices, and (ii) receive grades associated with annotations from a plurality of human graders, and
an analysis module configured to (i) associate an average of a plurality of grades received from different human graders with each of an initial set of annotations, at least a portion of the initial set of annotations constituting a training set, (ii) computationally derive one or more grading features predictive of the average grades associated with the training set, and (iii) assign grades to ungraded annotations based on the one or more grading features.
11. The system of claim 10 , wherein the analysis module is configured to extract portions of annotations within the training set, thereby producing a plurality of seed features, wherein the one or more grading features are computationally derived from the seed features.
12. The system of claim 10 , wherein the analysis module uses a machine-learning model to predict the grades assigned to the ungraded annotations based on the one or more grading features, the model being predictive in accordance with a prediction algorithm and generated by steps comprising:
dividing the initial set of annotations into the training set and a testing set, each of the training set and testing set comprising a plurality of annotations and average grades associated therewith; and
identifying the one or more grading features based on predictive reliability in accordance with the prediction algorithm.
13. The system of claim 12 , wherein the analysis module is configured to:
computationally predict, based on the one or more grading features, grades for one or more annotations within the testing set; and
adjust parameters of the model based on the predictions prior to assigning grades to ungraded annotations.
14. The system of claim 12 , wherein the prediction algorithm is a classification tree.
15. The system of claim 14 , wherein the prediction algorithm is a random forest comprising a plurality of regression trees.
16. The system of claim 10 , wherein at least one of the student devices comprises a computer or a handheld device.
17. The system of claim 10 , wherein the communication module is configured to transmit grades assigned by the analysis module to the student devices.
18. The system of claim 10 , further comprising a plurality of grading devices configured to display student annotations of the educational resource, receive grades associated with the student annotations from a human grader, and transmit the grades to the server.
19. The system of claim 18 , wherein at least one of the grading devices comprises a computer or a handheld device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/365,014 US20170154542A1 (en) | 2015-12-01 | 2016-11-30 | Automated grading for interactive learning applications |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562261397P | 2015-12-01 | 2015-12-01 | |
US201562261387P | 2015-12-01 | 2015-12-01 | |
US201562261398P | 2015-12-01 | 2015-12-01 | |
US201562261400P | 2015-12-01 | 2015-12-01 | |
US15/365,014 US20170154542A1 (en) | 2015-12-01 | 2016-11-30 | Automated grading for interactive learning applications |
Publications (1)
Publication Number | Publication Date |
---|---|
US20170154542A1 true US20170154542A1 (en) | 2017-06-01 |
Family
ID=58777068
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/365,004 Active 2037-02-10 US10192456B2 (en) | 2015-12-01 | 2016-11-30 | Stimulating online discussion in interactive learning environments |
US15/364,982 Abandoned US20170154539A1 (en) | 2015-12-01 | 2016-11-30 | Automated personalized feedback for interactive learning applications |
US15/365,014 Abandoned US20170154542A1 (en) | 2015-12-01 | 2016-11-30 | Automated grading for interactive learning applications |
US15/365,019 Active 2037-12-09 US10438498B2 (en) | 2015-12-01 | 2016-11-30 | Instructional support platform for interactive learning environments |
US16/551,889 Active US10692391B2 (en) | 2015-12-01 | 2019-08-27 | Instructional support platform for interactive learning environments |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/365,004 Active 2037-02-10 US10192456B2 (en) | 2015-12-01 | 2016-11-30 | Stimulating online discussion in interactive learning environments |
US15/364,982 Abandoned US20170154539A1 (en) | 2015-12-01 | 2016-11-30 | Automated personalized feedback for interactive learning applications |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/365,019 Active 2037-12-09 US10438498B2 (en) | 2015-12-01 | 2016-11-30 | Instructional support platform for interactive learning environments |
US16/551,889 Active US10692391B2 (en) | 2015-12-01 | 2019-08-27 | Instructional support platform for interactive learning environments |
Country Status (1)
Country | Link |
---|---|
US (5) | US10192456B2 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019191677A1 (en) * | 2018-03-30 | 2019-10-03 | Pearson Education, Inc. | Systems and methods for automated and direct network positioning |
US20200051451A1 (en) * | 2018-08-10 | 2020-02-13 | Actively Learn, Inc. | Short answer grade prediction |
US20210192973A1 (en) * | 2019-12-19 | 2021-06-24 | Talaera LLC | Systems and methods for generating personalized assignment assets for foreign languages |
US20220005595A1 (en) * | 2020-07-03 | 2022-01-06 | Abdul Karim Qayumi | System and method for virtual online assessment of medical training and competency |
US20220383767A1 (en) * | 2021-05-27 | 2022-12-01 | International Business Machines Corporation | Semi-automated evaluation of long answer exams |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11170658B2 (en) * | 2011-03-22 | 2021-11-09 | East Carolina University | Methods, systems, and computer program products for normalization and cumulative analysis of cognitive post content |
US11120342B2 (en) | 2015-11-10 | 2021-09-14 | Ricoh Company, Ltd. | Electronic meeting intelligence |
US10192456B2 (en) * | 2015-12-01 | 2019-01-29 | President And Fellows Of Harvard College | Stimulating online discussion in interactive learning environments |
US10860985B2 (en) | 2016-10-11 | 2020-12-08 | Ricoh Company, Ltd. | Post-meeting processing using artificial intelligence |
US11307735B2 (en) | 2016-10-11 | 2022-04-19 | Ricoh Company, Ltd. | Creating agendas for electronic meetings using artificial intelligence |
US20190026473A1 (en) * | 2017-07-21 | 2019-01-24 | Pearson Education, Inc. | System and method for automated feature-based alert triggering |
US10867128B2 (en) | 2017-09-12 | 2020-12-15 | Microsoft Technology Licensing, Llc | Intelligently updating a collaboration site or template |
US20190087832A1 (en) | 2017-09-15 | 2019-03-21 | Pearson Education, Inc. | Digital credential field data mapping |
US20190087391A1 (en) * | 2017-09-18 | 2019-03-21 | Microsoft Technology Licensing, Llc | Human-machine interface for collaborative summarization of group conversations |
US10742500B2 (en) * | 2017-09-20 | 2020-08-11 | Microsoft Technology Licensing, Llc | Iteratively updating a collaboration site or template |
US11062271B2 (en) | 2017-10-09 | 2021-07-13 | Ricoh Company, Ltd. | Interactive whiteboard appliances with learning capabilities |
US10956875B2 (en) | 2017-10-09 | 2021-03-23 | Ricoh Company, Ltd. | Attendance tracking, presentation files, meeting services and agenda extraction for interactive whiteboard appliances |
US11030585B2 (en) | 2017-10-09 | 2021-06-08 | Ricoh Company, Ltd. | Person detection, person identification and meeting start for interactive whiteboard appliances |
US11836683B2 (en) * | 2018-01-05 | 2023-12-05 | Wyn.Net, Llc | Systems and methods for electronic lesson management |
US11533272B1 (en) * | 2018-02-06 | 2022-12-20 | Amesite Inc. | Computer based education methods and apparatus |
US11749131B1 (en) * | 2018-10-01 | 2023-09-05 | Educational Testing Service | E-book reading comprehension analytics |
US11443647B2 (en) * | 2019-02-08 | 2022-09-13 | Pearson Education, Inc. | Systems and methods for assessment item credit assignment based on predictive modelling |
US11263384B2 (en) | 2019-03-15 | 2022-03-01 | Ricoh Company, Ltd. | Generating document edit requests for electronic documents managed by a third-party document management service using artificial intelligence |
US11720741B2 (en) | 2019-03-15 | 2023-08-08 | Ricoh Company, Ltd. | Artificial intelligence assisted review of electronic documents |
US11573993B2 (en) | 2019-03-15 | 2023-02-07 | Ricoh Company, Ltd. | Generating a meeting review document that includes links to the one or more documents reviewed |
US11080466B2 (en) * | 2019-03-15 | 2021-08-03 | Ricoh Company, Ltd. | Updating existing content suggestion to include suggestions from recorded media using artificial intelligence |
US11392754B2 (en) | 2019-03-15 | 2022-07-19 | Ricoh Company, Ltd. | Artificial intelligence assisted review of physical documents |
US11270060B2 (en) * | 2019-03-15 | 2022-03-08 | Ricoh Company, Ltd. | Generating suggested document edits from recorded media using artificial intelligence |
US11482127B2 (en) * | 2019-03-29 | 2022-10-25 | Indiavidual Learning Pvt. Ltd. | System and method for behavioral analysis and recommendations |
US10902190B1 (en) * | 2019-07-03 | 2021-01-26 | Microsoft Technology Licensing Llc | Populating electronic messages with quotes |
CN110727822B (en) * | 2019-11-19 | 2022-02-08 | 北京网聘咨询有限公司 | Online learning system based on personalized recommendation |
CN113537506B (en) * | 2020-04-22 | 2023-08-29 | 百度在线网络技术(北京)有限公司 | Test method, device, equipment and medium for machine learning effect |
CN112163789B (en) * | 2020-10-22 | 2021-04-30 | 上海易教科技股份有限公司 | Teacher workload evaluation system and method for online education |
CN112735212B (en) * | 2020-12-30 | 2022-08-26 | 北京安博盛赢教育科技有限责任公司 | Online classroom information interaction discussion-based method and device |
US11921763B2 (en) | 2021-02-24 | 2024-03-05 | Open Weaver Inc. | Methods and systems to parse a software component search query to enable multi entity search |
US11947530B2 (en) | 2021-02-24 | 2024-04-02 | Open Weaver Inc. | Methods and systems to automatically generate search queries from software documents to validate software component search engines |
US11960492B2 (en) | 2021-02-24 | 2024-04-16 | Open Weaver Inc. | Methods and systems for display of search item scores and related information for easier search result selection |
US20220291921A1 (en) * | 2021-02-26 | 2022-09-15 | Open Weaver Inc. | Methods and systems to classify software components based on multiple information sources |
EP4320608A1 (en) * | 2021-04-08 | 2024-02-14 | Brainpop IP LLC | Systems and methods for learner growth tracking and assessments |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050089834A1 (en) * | 2003-10-23 | 2005-04-28 | Shapiro Jeffrey S. | Educational computer program |
US20070037131A1 (en) * | 2005-08-11 | 2007-02-15 | Anderson Joseph M | Incentivized educational device and method |
US20140234810A1 (en) * | 2013-02-15 | 2014-08-21 | Educational Testing Service | Systems and Methods for Determining Text Complexity |
US20140272910A1 (en) * | 2013-03-01 | 2014-09-18 | Inteo, Llc | System and method for enhanced teaching and learning proficiency assessment and tracking |
US20140370485A1 (en) * | 2013-04-19 | 2014-12-18 | Educational Testing Service | Systems and Methods for Generating Automated Evaluation Models |
US20150243181A1 (en) * | 2014-02-27 | 2015-08-27 | Educational Testing Service | Systems and Methods for Automated Scoring of Textual Responses to Picture-Based Items |
US20150269857A1 (en) * | 2014-03-24 | 2015-09-24 | Educational Testing Service | Systems and Methods for Automated Scoring of a User's Performance |
US20160133147A1 (en) * | 2014-11-10 | 2016-05-12 | Educational Testing Service | Generating Scores and Feedback for Writing Assessment and Instruction Using Electronic Process Logs |
US20170229033A1 (en) * | 2014-08-07 | 2017-08-10 | Amit Saini | Systems and methods for electronic evaluation of candidates |
Family Cites Families (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4996642A (en) | 1987-10-01 | 1991-02-26 | Neonics, Inc. | System and method for recommending items |
US5724567A (en) * | 1994-04-25 | 1998-03-03 | Apple Computer, Inc. | System for directing relevance-ranked data objects to computer users |
US6209100B1 (en) * | 1998-03-27 | 2001-03-27 | International Business Machines Corp. | Moderated forums with anonymous but traceable contributions |
US7197470B1 (en) * | 2000-10-11 | 2007-03-27 | Buzzmetrics, Ltd. | System and method for collection analysis of electronic discussion methods |
US20030014311A1 (en) * | 2000-12-20 | 2003-01-16 | Chua James Chien Liang | Method and apparatus for rewarding contributors |
US6789047B1 (en) | 2001-04-17 | 2004-09-07 | Unext.Com Llc | Method and system for evaluating the performance of an instructor of an electronic course |
US20050160113A1 (en) * | 2001-08-31 | 2005-07-21 | Kent Ridge Digital Labs | Time-based media navigation system |
US7127208B2 (en) * | 2002-01-23 | 2006-10-24 | Educational Testing Service | Automated annotation |
US7904510B2 (en) | 2004-02-23 | 2011-03-08 | Microsoft Corporation | Systems and methods for managing discussion threads based on ratings |
US7437382B2 (en) | 2004-05-14 | 2008-10-14 | Microsoft Corporation | Method and system for ranking messages of discussion threads |
US20060026593A1 (en) * | 2004-07-30 | 2006-02-02 | Microsoft Corporation | Categorizing, voting and rating community threads |
US7945625B2 (en) | 2005-07-12 | 2011-05-17 | Microsoft Corporation | Automated moderation of discussion lists |
CA2615659A1 (en) | 2005-07-22 | 2007-05-10 | Yogesh Chunilal Rathod | Universal knowledge management and desktop search system |
US7779347B2 (en) * | 2005-09-02 | 2010-08-17 | Fourteen40, Inc. | Systems and methods for collaboratively annotating electronic documents |
US20070067405A1 (en) | 2005-09-20 | 2007-03-22 | Eliovson Joshua M | Moderated anonymous forum |
CA2525267A1 (en) | 2005-10-28 | 2007-04-28 | Ibm Canada Limited - Ibm Canada Limitee | Systems, methods and tools for aggregating subsets of opinions from group collaborations |
US20070150802A1 (en) * | 2005-12-12 | 2007-06-28 | Canon Information Systems Research Australia Pty. Ltd. | Document annotation and interface |
US20070160963A1 (en) * | 2006-01-10 | 2007-07-12 | International Business Machines Corporation | Candidate evaluation tool |
NZ592958A (en) * | 2006-03-17 | 2012-03-30 | Sony Corp | Method and media for organising group content viewing and group communications during same |
US8230351B2 (en) * | 2006-04-11 | 2012-07-24 | Sri International | Method and apparatus for collaborative work |
US8540517B2 (en) * | 2006-11-27 | 2013-09-24 | Pharos Innovations, Llc | Calculating a behavioral path based on a statistical profile |
US20090035733A1 (en) | 2007-08-01 | 2009-02-05 | Shmuel Meitar | Device, system, and method of adaptive teaching and learning |
US20090063991A1 (en) | 2007-08-27 | 2009-03-05 | Samuel Pierce Baron | Virtual Discussion Forum |
US7810033B2 (en) * | 2007-10-31 | 2010-10-05 | International Business Machines Corporation | Methods and systems involving text analysis |
US8934832B2 (en) * | 2008-06-17 | 2015-01-13 | Laureate Education, Inc. | System and method for collaborative development of online courses and programs of study |
US8225348B2 (en) * | 2008-09-12 | 2012-07-17 | At&T Intellectual Property I, L.P. | Moderated interactive media sessions |
US8261193B1 (en) | 2009-04-21 | 2012-09-04 | Jackbe Corporation | Method and system for capturing mashup data for collective intelligence and user-generated knowledge |
US20160035230A1 (en) * | 2009-08-07 | 2016-02-04 | Vital Source Technologies, Inc. | Assessing a user's engagement with digital resources |
US20140154657A1 (en) * | 2012-11-02 | 2014-06-05 | Coursesmart Llc | System and method for assessing a user's engagement with digital resources |
US9514435B2 (en) * | 2009-08-17 | 2016-12-06 | Accenture Global Services Limited | System for targeting specific users to discussion threads |
US20120196267A1 (en) | 2009-10-08 | 2012-08-02 | Onsotong Co., Ltd. | Online discussion ability authentication method and system for performing method |
WO2011050495A1 (en) | 2009-10-29 | 2011-05-05 | Google Inc. | Ranking user generated web content |
US8311792B1 (en) | 2009-12-23 | 2012-11-13 | Intuit Inc. | System and method for ranking a posting |
US8628331B1 (en) | 2010-04-06 | 2014-01-14 | Beth Ann Wright | Learning model for competency based performance |
US8756224B2 (en) | 2010-06-16 | 2014-06-17 | Rallyverse, Inc. | Methods, systems, and media for content ranking using real-time data |
US8966569B2 (en) | 2010-07-27 | 2015-02-24 | Globalytica, Llc | Collaborative structured analysis system and method |
US20120141968A1 (en) | 2010-12-07 | 2012-06-07 | Microsoft Corporation | Evaluation Assistant for Online Discussion |
US8694490B2 (en) | 2011-01-28 | 2014-04-08 | Bitvore Corporation | Method and apparatus for collection, display and analysis of disparate data |
US9645986B2 (en) * | 2011-02-24 | 2017-05-09 | Google Inc. | Method, medium, and system for creating an electronic book with an umbrella policy |
US20120244507A1 (en) * | 2011-03-21 | 2012-09-27 | Arthur Tu | Learning Behavior Optimization Protocol (LearnBop) |
US9619483B1 (en) | 2011-03-25 | 2017-04-11 | Amazon Technologies, Inc. | Ranking discussion forum threads |
US9740785B1 (en) | 2011-03-25 | 2017-08-22 | Amazon Technologies, Inc. | Ranking discussion forum threads |
US9330366B2 (en) | 2011-05-06 | 2016-05-03 | David H. Sitrick | System and method for collaboration via team and role designation and control and management of annotations |
US8806352B2 (en) | 2011-05-06 | 2014-08-12 | David H. Sitrick | System for collaboration of a specific image and utilizing selected annotations while viewing and relative to providing a display presentation |
US20130096892A1 (en) * | 2011-10-17 | 2013-04-18 | Alfred H. Essa | Systems and methods for monitoring and predicting user performance |
US10152542B2 (en) | 2012-04-09 | 2018-12-11 | Oath Inc. | Ranking and ordering of user generated content |
WO2013157015A2 (en) | 2012-04-16 | 2013-10-24 | Chunilal Rathod Yogesh | A method and system for display dynamic & accessible actions with unique identifiers and activities. |
EP2856327A4 (en) | 2012-05-24 | 2016-02-17 | Renaissance Learning Inc | Interactive organization of comments on an online social platform |
US9189965B2 (en) | 2012-06-29 | 2015-11-17 | International Business Machines Corporation | Enhancing posted content in discussion forums |
US9805614B2 (en) | 2012-09-17 | 2017-10-31 | Crowdmark Inc. | System and method for enabling crowd-sourced examination marking |
US9542669B1 (en) | 2013-03-14 | 2017-01-10 | Blab, Inc. | Encoding and using information about distributed group discussions |
US20140272849A1 (en) * | 2013-03-15 | 2014-09-18 | Yahoo! Inc. | System and method providing positive social and economic motivators for goal achievement |
US20140272905A1 (en) * | 2013-03-15 | 2014-09-18 | Adapt Courseware | Adaptive learning systems and associated processes |
US20150006425A1 (en) * | 2013-06-28 | 2015-01-01 | Hanan Ayad | SYSTEMS AND METHODS FOR GENERATING VISUALIZATIONS INDICATIVE OF LEARNER PERFORMANCE IN AN eLEARNING SYSTEM |
US20150050637A1 (en) * | 2013-08-16 | 2015-02-19 | Big Brothers Big Sisters of Eastern Missouri | System and method for early warning and recognition for student achievement in schools |
US20150079575A1 (en) | 2013-09-18 | 2015-03-19 | Apollo Group, Inc. | Method and System for Facilitating Discussion of Issues Pertaining to Subject Matter Taught By A Course |
US20150186455A1 (en) * | 2013-12-30 | 2015-07-02 | Google Inc. | Systems and methods for automatic electronic message annotation |
US20150209677A1 (en) | 2014-01-30 | 2015-07-30 | Scott Thomas O'Brien | Method and apparatus for an online comment contest service |
US20160104261A1 (en) * | 2014-10-08 | 2016-04-14 | Zoomi, Inc. | Systems and methods for integrating an e-learning course delivery platform with an enterprise social network |
US9860308B2 (en) | 2014-11-25 | 2018-01-02 | International Business Machines Corporation | Collaborative creation of annotation training data |
US10013890B2 (en) | 2014-12-11 | 2018-07-03 | International Business Machines Corporation | Determining relevant feedback based on alignment of feedback with performance objectives |
US9928556B2 (en) * | 2014-12-31 | 2018-03-27 | Facebook, Inc. | Content quality evaluation and classification |
US10192456B2 (en) * | 2015-12-01 | 2019-01-29 | President And Fellows Of Harvard College | Stimulating online discussion in interactive learning environments |
US9760556B1 (en) * | 2015-12-11 | 2017-09-12 | Palantir Technologies Inc. | Systems and methods for annotating and linking electronic documents |
- 2016
- 2016-11-30 US US15/365,004 patent/US10192456B2/en active Active
- 2016-11-30 US US15/364,982 patent/US20170154539A1/en not_active Abandoned
- 2016-11-30 US US15/365,014 patent/US20170154542A1/en not_active Abandoned
- 2016-11-30 US US15/365,019 patent/US10438498B2/en active Active
- 2019
- 2019-08-27 US US16/551,889 patent/US10692391B2/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050089834A1 (en) * | 2003-10-23 | 2005-04-28 | Shapiro Jeffrey S. | Educational computer program |
US20070037131A1 (en) * | 2005-08-11 | 2007-02-15 | Anderson Joseph M | Incentivized educational device and method |
US20140234810A1 (en) * | 2013-02-15 | 2014-08-21 | Educational Testing Service | Systems and Methods for Determining Text Complexity |
US20140272910A1 (en) * | 2013-03-01 | 2014-09-18 | Inteo, Llc | System and method for enhanced teaching and learning proficiency assessment and tracking |
US20140370485A1 (en) * | 2013-04-19 | 2014-12-18 | Educational Testing Service | Systems and Methods for Generating Automated Evaluation Models |
US20150243181A1 (en) * | 2014-02-27 | 2015-08-27 | Educational Testing Service | Systems and Methods for Automated Scoring of Textual Responses to Picture-Based Items |
US20150269857A1 (en) * | 2014-03-24 | 2015-09-24 | Educational Testing Service | Systems and Methods for Automated Scoring of a User's Performance |
US20170229033A1 (en) * | 2014-08-07 | 2017-08-10 | Amit Saini | Systems and methods for electronic evaluation of candidates |
US20160133147A1 (en) * | 2014-11-10 | 2016-05-12 | Educational Testing Service | Generating Scores and Feedback for Writing Assessment and Instruction Using Electronic Process Logs |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019191677A1 (en) * | 2018-03-30 | 2019-10-03 | Pearson Education, Inc. | Systems and methods for automated and direct network positioning |
US11138897B2 (en) | 2018-03-30 | 2021-10-05 | Pearson Education, Inc. | Systems and methods for automated and direct network positioning |
US11250720B2 (en) | 2018-03-30 | 2022-02-15 | Pearson Education, Inc. | Systems and methods for automated and direct network positioning |
US20200051451A1 (en) * | 2018-08-10 | 2020-02-13 | Actively Learn, Inc. | Short answer grade prediction |
US20210192973A1 (en) * | 2019-12-19 | 2021-06-24 | Talaera LLC | Systems and methods for generating personalized assignment assets for foreign languages |
US20220005595A1 (en) * | 2020-07-03 | 2022-01-06 | Abdul Karim Qayumi | System and method for virtual online assessment of medical training and competency |
US20220383767A1 (en) * | 2021-05-27 | 2022-12-01 | International Business Machines Corporation | Semi-automated evaluation of long answer exams |
US11967253B2 (en) * | 2021-05-27 | 2024-04-23 | International Business Machines Corporation | Semi-automated evaluation of long answer exams |
Also Published As
Publication number | Publication date |
---|---|
US10438498B2 (en) | 2019-10-08 |
US20170154543A1 (en) | 2017-06-01 |
US10692391B2 (en) | 2020-06-23 |
US20170154539A1 (en) | 2017-06-01 |
US20190385467A1 (en) | 2019-12-19 |
US10192456B2 (en) | 2019-01-29 |
US20170154541A1 (en) | 2017-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20170154542A1 (en) | Automated grading for interactive learning applications | |
Alam | Harnessing the Power of AI to Create Intelligent Tutoring Systems for Enhanced Classroom Experience and Improved Learning Outcomes | |
US10191901B2 (en) | Enrollment pairing analytics system and methods | |
Stone | Guiding students to develop an understanding of scientific inquiry: A science skills approach to instruction and assessment | |
Corrigan-Magaldi et al. | Faculty-facilitated remediation: A model to transform at-risk students | |
Deng et al. | Personalized learning in a virtual hands-on lab platform for computer science education | |
Andrews et al. | Exploring the relationship between teacher knowledge and active-learning implementation in large college biology courses | |
US20190114937A1 (en) | Grouping users by problematic objectives | |
Reinders | Learning analytics for language learning and teaching. | |
US10541884B2 (en) | Simulating a user score from input objectives | |
US20190114346A1 (en) | Optimizing user time and resources | |
US9508266B2 (en) | Cross-classroom and cross-institution item validation | |
Yuskovych-Zhukovska et al. | Application of artificial intelligence in education. Problems and opportunities for sustainable development | |
Echiverri et al. | Class discussion and class participation: Determination of their relationship | |
Salehian Kia et al. | Exploring the relationship between personalized feedback models, learning design and assessment outcomes | |
US11803928B2 (en) | Promoting a tutor on a platform | |
Arpaci | Design and development of educational multimedia: the software development process for mobile learning | |
Rajapboyevna et al. | The ADDIE Model | |
Yeung et al. | Exploring Characteristics of Fine-Grained Behaviors of Learning Mathematics in Tablet-Based E-Learning Activities. | |
Voithofer et al. | Data sources for educators: mining meaningful data for course and program decision making | |
Sander | Using learning analytics to predict academic outcomes of first-year students in higher education | |
Voithofer et al. | 5. Data Sources for Educators | |
Poplavska et al. | Application of Artificial Intelligence in Education. Problems and Opportunities for Sustainable Development |
Toolan | Strategies for Improving Retention in Online Learning. | |
Pinto et al. | Deep Learning for Educational Data Science |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: PRESIDENT AND FELLOWS OF HARVARD COLLEGE, MASSACHUSETTS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KING, GARY;LUKOFF, BRIAN;MAZUR, ERIC;AND OTHERS;SIGNING DATES FROM 20170214 TO 20170301;REEL/FRAME:041435/0127 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |