AU2006246317A1 - Comparing text based documents - Google Patents


Info

Publication number
AU2006246317A1
Authority
AU
Australia
Prior art keywords
document
essay
word
root
representation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2006246317A
Inventor
Heinz Dreher
Robert Francis Williams
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Curtin University of Technology
Original Assignee
Curtin University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2005902424A external-priority patent/AU2005902424A0/en
Application filed by Curtin University of Technology filed Critical Curtin University of Technology
Priority to AU2006246317A priority Critical patent/AU2006246317A1/en
Priority claimed from PCT/AU2006/000630 external-priority patent/WO2006119578A1/en
Publication of AU2006246317A1 publication Critical patent/AU2006246317A1/en
Abandoned legal-status Critical Current


Description

WO2006/119578 PCT/AU2006/000630

COMPARING TEXT BASED DOCUMENTS

FIELD OF THE INVENTION

The present invention relates to comparing text based documents using an automated process to obtain an indication of the similarity of the documents. The present invention has application in many areas including, but not limited to, document searching and automated essay grading.

BACKGROUND

In simple terms, internet search engines scan web pages (which are text based documents) for nominated words and return results of web pages that match the nominated words. Internet search engines are not known for finding documents that are based on similar concepts but which do not use the nominated words.

Automated essay grading is more complex. Here the aim is to grade an essay (a text based document) on its content compared to an expected answer, not on a particular set of words.

SUMMARY OF THE PRESENT INVENTION

According to a first aspect of the present invention there is provided a method of comparing text based documents comprising:
lexically normalising each word of the text of a first document to form a first normalised representation;
building a vector representation of the first document from the first normalised representation;
lexically normalising each word of the text of a second document to form a second normalised representation;
building a vector representation of the second document from the second normalised representation;
comparing the alignment of the vector representations to produce a score of the similarity of the second document to the first document.

Preferably the lexical normalisation converts each word in the document into a representation of a root concept as defined in a thesaurus. Each word is used to look up the root concept of the word in the thesaurus. Preferably each root word is allocated a numerical value.
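The lexical normalisation step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the `TOY_THESAURUS` mapping, its word list and its index numbers are invented stand-ins for a real thesaurus that maps each word to the numeric index of its root concept.

```python
# Minimal sketch of lexical normalisation: each word is looked up in a
# thesaurus that maps it to the numeric index of its root concept.
# The thesaurus below is a toy stand-in; index values are illustrative only.
TOY_THESAURUS = {
    "dog": 678, "hound": 678, "canine": 678,   # one root concept
    "walk": 34, "walked": 34, "stroll": 34,    # another root concept
    "street": 234, "road": 234,
}

def normalise(text):
    """Return the list of root-concept indices for the words of a document.

    Words not found in the thesaurus are skipped here (the text later
    tracks these separately as 'non-conceptualised' words).
    """
    indices = []
    for word in text.lower().split():
        root = TOY_THESAURUS.get(word.strip(".,;:!?"))
        if root is not None:
            indices.append(root)
    return indices

print(normalise("The hound walked down the street"))  # three known words
```

Because synonyms share a root index, "hound" and "canine" normalise to the same concept, which is what lets the later vector comparison match documents that express the same ideas in different words.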
Thus the normalisation process in some embodiments produces a numeric representation of the document. Each normalised root concept forms a dimension of the vector representation. Each root concept is counted. The count of each normalised root concept forms the length of the vector in the respective dimension of the vector representation.

Preferably the comparison of the alignment of the vector representations produces the score by determining the cosine of an angle (theta) between the vectors. Typically cos(theta) is calculated from the dot product of the vectors and the lengths of the vectors.

In some embodiments the number of root concepts in the document is counted. In an embodiment each root concept of non-zero count provides a contribution to a count of concepts in each document. Certain root concepts may be excluded from the count of concepts. Preferably the count of concepts of the second document is compared to the count of concepts of the first document to produce a contribution to the score of the similarity of the second document to the first document. Typically the contribution of each root concept of non-zero count is one. Preferably the comparison is a ratio.

In a preferred embodiment the first document is a model answer essay, the second document is an essay to be marked and the score is a mark for the second essay.
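The vector comparison described above — counts of each root concept as dimensions, similarity as the cosine of the angle between the two vectors, plus a ratio of non-zero concept counts — can be sketched as follows. The function names and the concept-count dictionaries are invented for the example.

```python
import math

def cos_theta(counts_a, counts_b):
    """Cosine of the angle between two concept-count vectors.

    Each vector is a dict mapping root-concept index -> count; every
    concept forms a dimension and its count is the length of the vector
    in that dimension.
    """
    dims = set(counts_a) | set(counts_b)
    dot = sum(counts_a.get(d, 0) * counts_b.get(d, 0) for d in dims)
    len_a = math.sqrt(sum(v * v for v in counts_a.values()))
    len_b = math.sqrt(sum(v * v for v in counts_b.values()))
    if len_a == 0 or len_b == 0:
        return 0.0
    return dot / (len_a * len_b)

def var_ratio(counts_student, counts_model):
    """Ratio of the number of concepts of non-zero count in the student
    document to that in the model document (each concept contributes one)."""
    n_student = sum(1 for v in counts_student.values() if v > 0)
    n_model = sum(1 for v in counts_model.values() if v > 0)
    return n_student / n_model if n_model else 0.0

model = {678: 2, 34: 1, 234: 1}   # toy concept counts for the model answer
student = {678: 1, 34: 1}         # toy concept counts for a student essay
print(round(cos_theta(model, student), 3), round(var_ratio(student, model), 3))
```

Here the student essay covers two of the model's three concepts, so the concept-count ratio is 2/3, while the cosine rewards how well the relative emphasis of shared concepts aligns.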
According to a second aspect of the present invention there is provided a system for comparing text based documents comprising:
means for lexically normalising each word of the text of a first document to form a first normalised representation;
means for building a vector representation of the first document from the first normalised representation;
means for lexically normalising each word of the text of a second document to form a second normalised representation;
means for building a vector representation of the second document from the second normalised representation;
means for comparing the alignment of the vector representations to produce a score of the similarity of the second document to the first document.
According to a third aspect of the present invention there is provided a method of comparing text based documents comprising:
partitioning words of a first document into noun phrases and verb clauses;
partitioning words of a second document into noun phrases and verb clauses;
comparing the partitioning of the first document to the second document to produce a score of the similarity of the second document to the first document.

In one embodiment each word in the document is lexically normalised into root concepts.

Preferably the comparison of the partitioning of the documents is conducted by determining a ratio of the number of one or more types of noun phrase components in the second document to the number of corresponding types of noun phrase components in the first document, and a ratio of the number of one or more types of verb clause components in the second document to the number of corresponding types of verb clause components in the first document, wherein the ratios contribute to the score.

Preferably the types of noun phrase components are: noun phrase nouns, noun phrase adjectives, noun phrase prepositions and noun phrase conjunctions. Preferably the types of verb clause components are: verb clause verbs, verb clause adverbs, verb clause auxiliaries, verb clause prepositions and verb clause conjunctions.
In a preferred embodiment the first document is a model answer essay, the second document is an essay to be marked and the score is a mark for the second essay.

According to a fourth aspect of the present invention there is provided a system for comparing text based documents comprising:
means for partitioning words of a first document into noun phrases and verb clauses;
means for partitioning words of a second document into noun phrases and verb clauses;
means for comparing the partitioning of the first document to the second document to produce a score of the similarity of the second document to the first document.

According to a fifth aspect of the present invention there is provided a method of comparing text based documents comprising:
lexically normalising each word of the text of a first document to form a first normalised representation;
determining the number of root concepts in the first document from the first normalised representation;
lexically normalising each word of the text of a second document to form a second normalised representation;
determining the number of root concepts in the second document from the second normalised representation;
comparing the number of root concepts in the first document to the number of root concepts in the second document to produce a score of the similarity of the second document to the first document.
According to a sixth aspect of the present invention there is provided a system for comparing text based documents comprising:
means for lexically normalising each word of the text of a first document to form a first normalised representation;
means for determining the number of root concepts in the first document from the first normalised representation;
means for lexically normalising each word of the text of a second document to form a second normalised representation;
means for determining the number of root concepts in the second document from the second normalised representation;
means for comparing the number of root concepts in the first document to the number of root concepts in the second document to produce a score of the similarity of the second document to the first document.

According to a seventh aspect of the present invention there is provided a method of grading a text based essay document comprising:
providing a model answer;
providing a plurality of hand marked essays;
providing a plurality of essays to be graded;
providing an equation for grading essays, wherein the equation has a plurality of measures with each measure having a coefficient, the equation producing a score for the essay calculated by summing each measure as modified by its respective coefficient, each measure being determined by comparing each essay to be graded with the model essay;
determining the coefficients from the hand marked essays;
applying the equation to each essay to be graded to produce a score for each essay.

Preferably determining the coefficients from the hand marked essays is performed by linear regression.

Preferably the measures include the scores produced by the methods of comparing text based documents described above.
According to an eighth aspect of the present invention there is provided a system for grading a text based essay document comprising:
means for determining coefficients in an equation from a plurality of hand marked essays, wherein the equation is for grading an essay to be marked, the equation comprising a plurality of measures with each measure having one of the coefficients, the equation producing a score for the essay which is calculated by summing each measure as modified by its respective coefficient;
means for determining each measure by comparing each essay to be graded with the model essay;
means for applying the equation to each essay to be graded to produce a score for each essay from the determined coefficients and determined measures.

According to a ninth aspect of the present invention there is provided a method of providing visual feedback on an essay grading comprising:
displaying a count of each root concept in the graded essay and a count of each root concept expected in the answer.

Preferably each root concept corresponds to a root meaning of a word as defined by a thesaurus. In some embodiments the count of each root concept is determined by lexically normalising each word in the graded essay to produce a representation of the root meanings in the graded essay and counting the occurrences of each root meaning. The count of root concepts in the answer is counted in the same way from a model answer.

Preferably the display is graphical. More preferably the display is a bar graph for each root concept.

In an embodiment the method further comprises selecting a concept in the essay and displaying words belonging to that concept in the essay. Preferably words related to other concepts in the answer are also displayed. Preferably this display is by highlighting.
In another embodiment the method further comprises selecting a concept in the expected essay and displaying words belonging to that concept in the essay. Preferably words related to other concepts in the answer are also displayed. Preferably this display is by highlighting.

Preferably the method further comprises displaying synonyms of the selected root concept.

According to a tenth aspect of the present invention there is provided a system for providing visual feedback on an essay grading comprising:
means for displaying a count of each root concept in the graded essay and a count of each root concept expected in the answer.

According to an eleventh aspect of the present invention there is provided a method of numerically representing a document comprising:
lexically normalising each word of the document;
partitioning the normalised words of the document into parts, wherein each part is designated as one of a noun phrase or a verb clause.

Preferably a plurality of words is used to determine whether each part is a noun phrase or a verb clause. In an embodiment the first three words of each part are used to determine whether the part is a noun phrase or a verb clause.

In some embodiments each word in a part is allocated to a column-wise slot of a noun phrase or verb clause table. Each slot of the table is allocated to a grammatical type of word.

Words are allocated sequentially to slots in the appropriate table if they are of the grammatical type of the next slot. In the event that the next word does not belong in the next slot, the slot is left blank and the sequential allocation of slots moves on one position.

In the event that the next word does not belong to the table type of the current part, then this indicates an end to the current part.
In some embodiments the tables have a plurality of rows, such that when the next word does not fit into the rest of the row following placement of the current word in the current part, but the word does not indicate an end to the current part, then it is placed in the next row of the table.

According to a twelfth aspect of the present invention there is provided a system for numerically representing a document comprising:
means for lexically normalising each word of the document;
means for partitioning the normalised words of the document into parts, wherein each part is designated as one of a noun phrase or a verb clause.

According to a thirteenth aspect of the present invention there is provided a computer program configured to control a computer to perform any one of the above defined methods.

According to a fourteenth aspect of the present invention there is provided a computer program configured to control a computer to operate as any one of the above defined systems.

According to a fifteenth aspect of the present invention there is provided a computer readable storage medium comprising a computer program as defined above.
SUMMARY OF DIAGRAMS

In order to provide a better understanding of the present invention, preferred embodiments will now be described in greater detail, by way of example only, with reference to the accompanying drawings, in which:

Figure 1 is a schematic representation of an apparatus for comparing text based documents according to a preferred embodiment of the present invention;
Figure 2 is a schematic flowchart of a method of comparing text based documents according to an embodiment of the present invention, in which the text based documents are a model answer essay and essays for grading;
Figure 3 is a graphical display of a vector representation of 3 documents;
Figure 4 is a screen shot of a window produced by a computer program of an embodiment of the present invention, in which an essay is graded according to a method of an embodiment of the present invention;
Figure 5 is a screen shot of a window produced by the computer program, in which concepts of the graded essay are compared to concepts of a model answer;
Figure 6 is a window showing a list of synonyms;
Figure 7 is a set of flow charts of some embodiments of the present invention; and
Figure 8 is a flow chart of an embodiment of the present invention.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENT OF THE PRESENT INVENTION

Referring to Figure 1, there is a system 10 for comparing text based documents, typically in the form of a computer having a processor and memory loaded with suitable software to control the computer to operate as the system 10 for comparing text based documents.
The system 10 includes an input 12 for receiving input from a user and for receiving electronic text based documents containing at least one word; a processor 14 for performing calculations to compare text based documents; a storage means 16, such as a hard disk drive or memory, for temporarily storing the text based documents for comparison and the computer program for controlling the processor 14; and an output 18, such as a display, for providing the result of the comparison.

The system 10 is operated according to the method shown in Figure 2. Initially a set of answers is prepared according to the process 100. An essay is set at 102, outlining the topic of the essays to be marked. Answers to the essay topic are written at 104. The answers need to be electronic text documents or converted into electronic text documents.

A sample of answers is separated at 106 for hand grading by one or more markers. The sample is preferably at least 10 answers. It has been found that a rule of thumb is that roughly 5 times the number of predictors should be used as the number of documents in the sample. For the equation below at least 50, and preferably 100, documents should be in the sample. Typically a marking key 112 is devised from the essay topic 102. One or, preferably, more markers hand (manually) grade the sample. Where more than one person grades the same paper, which is desirable, an average grade for the hand graded sample is produced.
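The sample-size rule of thumb above can be checked arithmetically: the detailed scoring equation given later has 21 predictors (coefficients A through U), and five hand-graded documents per predictor gives roughly 105, consistent with the stated preference for about 100. A trivial sketch (the function name is invented for illustration):

```python
# Rule of thumb from the text: roughly five hand-graded documents
# per predictor in the scoring equation.
def recommended_sample_size(n_predictors, docs_per_predictor=5):
    return n_predictors * docs_per_predictor

# The detailed scoring equation has 21 predictors (coefficients A - U).
print(recommended_sample_size(21))  # -> 105
```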
The remainder of the answers 104 form the answers for automatic grading 108.

A model answer 110 is required. The model answer can be written at 114 from the marking key, or the best answer 116 of the sample of answers for hand grading 106 can be used as the model answer.

Each of the text based answers, that is, the model answer 110, the sample of hand graded answers 106 and the remainder of the answers for automatic grading 108, is inputted 202 into the system 10 through input 12.

The automatic essay grading technique 200 is then followed. The inputs 202 of the model answer 110, the sample of answers that have been hand graded 106 and the remaining answers for automatic grading 108 are each processed into a required structure, as will be described further below. These steps are 204, 206 and 208 respectively. The processed model answer from 204 is then compared at 210 with each processed hand graded answer from 206 to produce a set of measures, as will be defined in more detail below. The measures are essentially one or more values that compare each of the hand graded answers with the model answer using a plurality of techniques. The measures are then used to find coefficients of a scoring equation, as will be described further below.

Each of the measures for each hand graded answer is compared 212 to the score provided during hand grading, and a model building technique is used to find the coefficients that best produce the hand graded scores from each of the measures. Typically this will be by a linear regression technique, although it will be appreciated that other modelling techniques may be used.

Each of the essay answers requiring automatic grading from 208 is compared 214 with the model answer from 204 to produce measures for each answer. The coefficients determined at 212 are then applied to the measures for each essay at 216 to produce a score for each essay.
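The coefficient-fitting step at 212 can be sketched with ordinary least squares, in line with the linear regression the text names. This is a minimal illustration using NumPy; the measure values and hand-graded scores below are invented, chosen to lie exactly on a linear model so the fit is easy to check.

```python
import numpy as np

# Each row holds the measures for one hand-graded essay, with a leading 1
# for the intercept; y holds the corresponding hand-graded scores.
# Values are invented: they satisfy score = 10 + 60*CosTheta + 30*VarRatio.
X = np.array([
    [1.0, 0.9, 0.9],   # [intercept, CosTheta, VarRatio] for essay 1
    [1.0, 0.7, 0.5],
    [1.0, 0.5, 0.6],
    [1.0, 0.2, 0.3],
])
y = np.array([91.0, 67.0, 58.0, 31.0])

# Least-squares fit: the coefficients that best reproduce the hand-graded
# scores from the measures (the model building step at 212).
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

# Apply the fitted equation to a new essay's measures (steps 214 and 216).
new_essay = np.array([1.0, 0.6, 0.4])
score = float(new_essay @ coeffs)
print(np.round(coeffs, 3), round(score, 1))
```

Because the toy data is exactly linear, the fit recovers the generating coefficients; with real hand-graded scores the regression instead finds the best-fitting compromise.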
A set of scores is then output at 218. The essay answer can then be viewed using the display technique described further below to provide feedback to the essay writer.

Equation For Score

The following equation is used to compute an essay score:

Score = C*CosTheta + D*VarRatio + otherfactors

The term otherfactors is intended to rate the overall merit of the essay rather than the essay's answer to the topic, and takes into account things like style, readability, spelling and grammatical errors. CosTheta and VarRatio assess the extent to which the essay answered the question. C and D are weighting variables.

A more detailed equation to calculate the essay score follows:

Score = Intercept + A*FleschReadingEase + B*FleschKincaidGradeLevel
      + C*CosTheta + D*VarRatio
      + E*RatioNPNouns + F*RatioNPAdjectives
      + G*RatioNPPrepositions + H*RatioNPConjunctions
      + I*RatioVPVerbs + J*RatioVPAdverbs
      + K*RatioVPAuxilliaries + L*RatioVPPrepositions
      + M*RatioVPConjunctions + N*NoParagraphs
      + O*NoPhrases + P*NoWords
      + Q*NoSentencesPerParagraph + R*NoWordsPerSentence
      + S*NoCharactersPerWord + T*NoSpellingErrors
      + U*NoGrammaticalErrors

where A - U are the regression coefficients computed on the corresponding variables in the essay training set. Most of the time, many of these coefficients will be zero.
Intercept is the value of the intercept calculated for the regression equation (this can be thought of as the value of the intersection with the y axis);
FleschReadingEase is the Flesch reading ease computed by Microsoft Word for the student essay (Ease);
FleschKincaidGradeLevel is the Flesch-Kincaid reading level computed by Microsoft Word for the student essay (Level);
CosTheta is computed as per the explanation further below;
VarRatio is computed as per the explanation further below;
RatioNPNouns is the ratio of nouns in noun phrases in the student essay compared to the model essay;
RatioNPAdjectives is the ratio of adjectives in noun phrases in the student essay compared to the model essay;
RatioNPPrepositions is the ratio of prepositions in noun phrases in the student essay compared to the model essay;
RatioNPConjunctions is the ratio of conjunctions in noun phrases in the student essay compared to the model essay;
RatioVPVerbs is the ratio of verbs in verb clauses in the student essay compared to the model essay;
RatioVPAdverbs is the ratio of adverbs in verb clauses in the student essay compared to the model essay;
RatioVPAuxilliaries is the ratio of auxiliaries in verb clauses in the student essay compared to the model essay;
RatioVPPrepositions is the ratio of prepositions in verb clauses in the student essay compared to the model essay;
RatioVPConjunctions is the ratio of conjunctions in verb clauses in the student essay compared to the model essay;
NoParagraphs is the number of paragraphs in the student essay;
NoPhrases is the total number of Noun Phrases and Verb Clauses in the student essay;
NoWords is the number of words in the student essay;
NoSentencesPerParagraph is the average number of sentences in all paragraphs in the student essay;
NoWordsPerSentence is the average number of words in all sentences in the student essay;
NoCharactersPerWord is the average number of characters in all words in the student essay;
NoSpellingErrors is the total number of spelling errors computed by Microsoft Word in the student essay; and
NoGrammaticalErrors is the number of grammatical errors computed by Microsoft Word in the student essay.

The following is an alternative equation which can be used to compute an essay score:

Score = A*FleschReadingEase + B*FleschKincaidGradeLevel
      + C*CosTheta + D*VarRatio
      + E*%SpellingErrors + F*%GrammaticalErrors
      + G*ModelLength + H*StudentLength
      + I*StudentDotProduct + J*NoStudentConcepts
      + K*NoModelConcepts + L*NoSentences
      + M*NoWords + N*NonConceptualisedWordsRatio
      + O*RatioNPNouns + P*RatioNPAdjectives
      + Q*RatioNPPrepositions + R*RatioNPConjunctions
      + S*RatioVPVerbs + T*RatioVPAdverbs
      + U*RatioVPAuxilliaries + V*RatioVPPrepositions
      + W*RatioVPConjunctions

where A - W are the regression coefficients computed on the corresponding variables in the essay training set. Most of the time, many of these coefficients will be zero.

FleschReadingEase is the Flesch reading ease computed by Microsoft Word for the student essay;
FleschKincaidGradeLevel is the Flesch-Kincaid reading level computed by Microsoft Word for the student essay;
CosTheta is computed as per the explanation further below;
VarRatio is computed as per the explanation further below;
%SpellingErrors is computed as the number of spelling errors computed by Microsoft Word expressed as a percentage of total words in the student essay;
%GrammaticalErrors is computed as the number of grammatical errors computed by Microsoft Word expressed as a percentage of total sentences in the student essay;
ModelLength is the vector length of the model answer vector derived as per the explanation further below;
StudentLength is the vector length of the student essay vector derived as per the explanation further below;
StudentDotProduct is the vector dot product of the student and model vectors derived as per the explanation further below;
NoStudentConcepts is the number of concepts for which words appear in the student essay;
NoModelConcepts is the number of concepts for which words appear in the model essay;
NoSentences is the number of sentences in the student essay;
NoWords is the number of words in the student essay;
NonConceptualisedWordsRatio is the number of words in the student essay that could not be found in the thesaurus, expressed as a ratio of the total number of words in the student essay;
RatioNPNouns is the ratio of nouns in noun phrases in the student essay compared to the model essay;
RatioNPAdjectives is the ratio of adjectives in noun phrases in the student essay compared to the model essay;
RatioNPPrepositions is the ratio of prepositions in noun phrases in the student essay compared to the model essay;
RatioNPConjunctions is the ratio of conjunctions in noun phrases in the student essay compared to the model essay;
RatioVPVerbs is the ratio of verbs in verb clauses in the student essay compared to the model essay;
RatioVPAdverbs is the ratio of adverbs in verb clauses in the student essay compared to the model essay;
RatioVPAuxilliaries is the ratio of auxiliaries in verb clauses in the student essay compared to the model essay;
RatioVPPrepositions is the ratio of prepositions in verb clauses in the student essay compared to the model essay; and
RatioVPConjunctions is the ratio of conjunctions in verb clauses in the student essay compared to the model essay.
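Once the coefficients are known, either scoring equation evaluates as a plain weighted sum of the measures. A minimal sketch, with all coefficient and measure values invented and only a few terms shown non-zero, reflecting the note that many coefficients will be zero:

```python
# Weighted-sum evaluation of the scoring equation: each measure is
# multiplied by its regression coefficient and the products are summed.
# All numbers here are invented for illustration.
coefficients = {
    "Intercept": 5.0,
    "CosTheta": 60.0,
    "VarRatio": 25.0,
    "NoSpellingErrors": 0.0,   # a near-zero coefficient set to zero
}

measures = {
    "Intercept": 1.0,          # the intercept multiplies a constant 1
    "CosTheta": 0.8,
    "VarRatio": 0.6,
    "NoSpellingErrors": 3.0,
}

# Zero-coefficient components drop out, as the next paragraph describes.
score = sum(coefficients[name] * measures[name] for name in coefficients)
print(score)  # 5 + 48 + 15 + 0 = 68.0
```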
Where a coefficient is near zero it may be changed to zero to simplify the equation. Where the coefficient is zero, that component of the equation (i.e. the coefficient and the variable to which the coefficient is applied) may be removed from the equation.

To compare the essays to the model essay, they need to be transformed into a structure suitable for comparison. The process of transforming the essays is as follows:
every word in each essay is lexically normalised by looking up the root concept of each word using a thesaurus; and
a conceptual model of the structure of the essay is built.

Conceptual Model

To build the conceptual model, the essay is segmented into noun phrases and verb clauses by a technique hereafter described as "chunking", to get the structure of sentences in terms of subject and predicate, as represented by Noun Phrases (NP) and Verb Phrases (VP). Generally the NP nominates the subject of discussion, and the VP the actions being performed on or by the subject. However, VPs are notoriously complex to deal with in comparison to NPs, because they typically can have many clusters of a Verb Clause (VC) and an NP together. It is far easier to identify VCs instead of the complex VPs. The basis of the technique used is to represent the meaning of the words making up the NPs and VCs in a sequence of structured slots, each containing a numerical value representing the thesaurus index number for the root meaning of the word in the slot. A numerical summary of the meaning of the sentences in the document being considered is thus built up.

The exact structure of the NP and VC slots is discussed further below, but to illustrate the concept and to give a practical example, consider the following. A typical sentence would comprise alternating NPs and VCs as follows.
A typical first NP slot word and numerical contents would be:

DET  ADJ    ADJ    N
The  small  black  dog
100  143    97     678

DET is a determiner, ADJ is an adjective and N is a noun.

A typical first VC slot word and numerical contents would be:

V       ADV     ADV
walked  slowly  down
34      987     67

V is a verb.

A typical concluding NP slot word and numerical contents would be:

DET  N
the  street
100  234

The numbers in these examples are thesaurus index numbers for the corresponding words. The numbers here are fictitious, for illustration purposes only. A sentence generally consists of groups of alternating NPs and VCs, not necessarily in that order, so a sentence summary would be represented by a group of NP slots and VC slots containing numerical thesaurus indices. A document summary would then consist of a collection of these groups. Note that a sentence does not have to start with an NP, but can start equally well with a VP.

NP Structure

Martha Kolln (Kolln, M. (1994) Understanding English Grammar, MacMillan, New York) on page 433 states a rule for defining an NP under transformational grammar as follows:

(1) NP = (DET) + (ADJ) + N + (PREP PHR) + (S)

and on page 429 a Prep Phr as follows:

PREP PHR = PREP + NP

PREP PHR is a preposition phrase and S is a subject.

When considering the slots to be provided for an NP, (1) above can now be rewritten as

(2) NP = DET ADJ N PREP NP S

The basic component of an NP appears to be

(3) NP = DET ADJ N

and some appended structures. It has been found in practice that

(4) NP = DET ADJ ADJ ADJ N

is a better structure. If we take this as a basic core structure in an NP, the complete NP structure can be built in terms of this core structure by linking multiple occurrences of this core structure by PREPs. It has been found in practice that we should also allow linking by CONJs (conjunctions).
So finally we conclude that the basic component should be

(5) NP = CONJ PREP : DET ADJ ADJ ADJ N

where the 2 slots before the colon are the linking slots, and those following it are the content slots. Practice indicates that allowing about 40 occurrences of this basic component in the NP slot template should handle many practical NPs encountered in general English text. In fact the current implementation of the program allows for unlimited occurrences of this basic component. Table 1 shows the first 10 rows of this array.

Table 1. Noun Phrase Semantic Structure

The first core component in the sentence generally will have the CONJ and PREP slots set to blank (in fact the number 0). Any empty slots will likewise be set to 0.

VC Structure

Kolln (1994) on page 428 states a rule for defining a VP under transformational grammar as follows:

(6) VP = AUX + V + (COMP) + (ADV)

AUX is an auxiliary. COMP is explained as an NP or ADJ, so by removing this from the VP we end up with a VC as follows:

(7) VC = AUX + V + ADV

It has been found in practice that if we modify this VC definition by the addition of extra AUXs and ADVs we obtain a more useful structure:

(8) VC = AUX AUX ADV ADV V AUX AUX ADV ADV

VCs can often be introduced with CONJs, and it has been found in practice that we should also allow PREPs in a VC, so a complete VC definition would be

(9) VC = CONJ PREP AUX AUX ADV ADV V AUX AUX ADV ADV

We should allow for 40 occurrences of this basic VC component to handle VCs encountered in practice. In fact the current implementation of the program allows for unlimited occurrences of this basic component. Table 2 shows the first 10 rows of this array.

Table 2. Verb Clause Semantic Structure

If a sentence happens to start with a VC, then the CONJ slot will be set to blank (in fact the number 0). Any empty slots will likewise be set to 0.
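The sequential slot allocation described earlier — a word of the wrong grammatical type for the next slot leaves that slot blank (0) and allocation moves on one position — can be sketched against these templates as follows. The function name, the POS-tag tuples and the thesaurus indices are invented for the example; only one row of each template is shown.

```python
# One row of each slot template from the text (rules (5) and (9)):
# the first two slots are linking slots, the rest are content slots.
NP_SLOTS = ["CONJ", "P", "DET", "ADJ", "ADJ", "ADJ", "N"]
VC_SLOTS = ["CONJ", "P", "AUX", "AUX", "ADV", "ADV", "V",
            "AUX", "AUX", "ADV", "ADV"]

def fill_row(template, tagged_words):
    """Allocate (tag, thesaurus_index) pairs sequentially to the template.

    If the next word does not fit the next slot, the slot is left as 0 and
    allocation moves on one position. Returns the filled row and any words
    left over (which would go to the next row or start a new part).
    """
    row = [0] * len(template)
    i = 0
    for slot, wanted in enumerate(template):
        if i < len(tagged_words) and tagged_words[i][0] == wanted:
            row[slot] = tagged_words[i][1]
            i += 1
    return row, tagged_words[i:]

# "The small black dog" with the invented indices from the earlier example.
words = [("DET", 100), ("ADJ", 143), ("ADJ", 97), ("N", 678)]
row, rest = fill_row(NP_SLOTS, words)
print(row, rest)
```

For this input the CONJ and PREP linking slots and one unused ADJ slot stay 0, matching the note that the first core component generally has its linking slots blank.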
Table 3 shows the parts of speech at three sentence positions that are used to determine the phrase type; Table 4 shows the slot layout of each phrase type over more positions. In these tables P is PREP.
Table 3

POS1   POS2   POS3                  Phrase Type
CONJ   P      DET, ADJ, N or CONJ   NP
CONJ   P      AUX, ADV, V or P      VC
CONJ   CONJ   DET, ADJ, N or CONJ   NP
CONJ   CONJ   AUX, ADV, V or P      VC
P      P      DET, ADJ, N or CONJ   NP
P      P      AUX, ADV, V or P      VC
P      CONJ   DET, ADJ, N or CONJ   NP
P      CONJ   AUX, ADV, V or P      VC
CONJ   DET, ADJ or N                NP
CONJ   AUX, ADV or V                VC
P      DET, ADJ or N                NP
P      AUX, ADV or V                VC
DET, ADJ or N                       NP
AUX, ADV or V                       VC

Table 4

Slot    0     1   2     3     4     5     6   7     8     9     10
NOUN    CONJ  P   DET   ADJ   ADJ   ADJ   N
VERB    CONJ  P   AUX   AUX   ADV   ADV   V   AUX   AUX   ADV   ADV

Figure 8 shows the process 300 of analysing a sentence to partition it into noun phrases and verb clauses. The process 300 commences at the beginning of each sentence which has not been typed into a noun phrase or a verb clause, at 302. The positions (POS) within the document of the first three words are obtained at 304. More or fewer words may be used, but three has been found to be particularly useful.

While there is at least one word left in the sentence, the process continues through to loop stage 318. The words at the three positions are looked up in the Pattern Table (Table 3) at 308 to determine whether they begin an NP or a VC. If the pattern is not recognised it is invalid, and the analysis moves on to the next sentence, or moves on until it recognises another NP or VC. It is then determined whether the current phrase type is different from the current type allocated to the sentence.
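The top-to-bottom lookup of Table 3 reduces to a simple rule: skip up to two leading linking tags (CONJ or P), then classify by the first remaining tag. The following is a minimal sketch of that rule; the function name and encoding are our own, not from the patent.

```python
# Sketch of the Table 3 lookup: up to two leading linking tags
# (CONJ or P) are skipped, then the first remaining POS tag decides
# the phrase type, mirroring the table's most-specific-first rows.
def phrase_type(window):
    """window: POS tags of the next one to three words."""
    pos = list(window)
    links = 0
    while len(pos) > 1 and pos[0] in ("CONJ", "P") and links < 2:
        pos.pop(0)
        links += 1
    head = pos[0]
    if head in ("DET", "ADJ", "N", "CONJ"):
        return "NP"
    if head in ("AUX", "ADV", "V", "P"):
        return "VC"
    return None  # unrecognised pattern: invalid, move on

print(phrase_type(["CONJ", "P", "DET"]))  # NP
print(phrase_type(["P", "P", "V"]))       # VC
print(phrase_type(["CONJ", "P", "P"]))    # VC
```

Returning None for an unrecognised pattern corresponds to the invalid case in the process description, where the analysis moves on.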
If this is the beginning of the sentence then the answer will necessarily be no. If, however, the phrase type does change, then this indicates at 312 that the end of the current phrase and the beginning of a new phrase has been reached. The indexing of the words advances as described further below in relation to 316. In the event that this is the first phrase of the sentence, or that the type determined at 308 remains the same, then at 314 the current word is added to the current phrase type. Then at 316 the process advances: the second word is moved to the first word position, the third word becomes the second word position, and a new word is read into the third word position, if there are any words left in the sentence. The process then loops back to 306 while there is at least one word left. If there are no words left then the process ends.

The following shows an example of the implementation of these structures in practice for the following text:

This essay will discuss why it's a good idea for the Government to raise school leaving age to 17. It will also state why most people in Australia agree with the Government on this particular topic.
Paragraph 1, Sentence 1

Phrase 1 (Noun)
Row 1: This essay | DET N | 5082 1238

Phrase 2 (Verb)
Row 1: will discuss why | AUX V ADV | 2034 1238 99

Phrase 3 (Noun)
Row 1: it's | N | 25
Row 2: a | N | -
Row 3: good idea | ADJ N | 317 317
Row 4: for the Government | P DET N | 705 507 63

Phrase 4 (Verb)
Row 1: to raise | P V | 7 1307

Phrase 5 (Noun)
Row 1: school | N | 307

Phrase 6 (Verb)
Row 1: leaving | V | -

Phrase 7 (Noun)
Row 1: age | N | 553
Row 2: to 17. | P N | 7 -

Paragraph 2, Sentence 1

Phrase 1 (Noun)
Row 1: It | N | 25
Row 2: will | N | 131

Phrase 2 (Verb)
Row 1: also | ADV | 8

Phrase 3 (Noun)
Row 1: state | N | 438

Phrase 4 (Verb)
Row 1: why | ADV | 99

Phrase 5 (Noun)
Row 1: most people | DET N | 5042 373
Row 2: in Australia | P N | 70 502

Phrase 6 (Verb)
Row 1: agree | V | 20

Phrase 7 (Noun)
Row 1: with the Government. | P DET N | 7142 507 63

Sentence 2

Phrase 1 (Noun)
Row 1: on this particular topic. | P DET ADJ N | 70 5082 310 455

This chunking method produces a computationally efficient numerical representation of the document.

Determine Measures

Having processed each essay into the required structure, the following methods are used to determine the respective measures.

Vector Representation

To produce the following measures a vector representation of each essay is built: CosTheta; VarRatio; ModelLength; StudentLength; and StudentDotProduct.

The vector representation of each essay is built as follows. Each possible root concept in the thesaurus is allocated to a dimension in a hyper-dimensional set of axes. A count is made of each word contributing to each root concept, which becomes the length of a vector in the respective dimension of the vector formed in hyper-dimensional space.
Thus counts of each lexically normalised word into root concepts are used for the vector representation.

There is a comprehensive discussion of the construction of an electronic thesaurus and the building of a vector representation of the content of a document for automatic information retrieval in Salton, G. (1968) Automatic Information Organization and Retrieval, McGraw-Hill, New York.

However the following example is illustrative. Consider the following start-of-sentence fragments from successive sentences in 3 separate documents:

Document Number  Document Text
(1)  The little boy... A small male...
(2)  A lazy boy... A funny girl...
(3)  The large boy... Some minor day...

Suppose a thesaurus exists with the following root words (concept numbers) and words:

Concept Number  Words
1.  the, a
2.  little, small, minor
3.  boy, male
4.  large
5.  funny
6.  girl
7.  some
8.  day
9.  lazy

Three dimensional vector representations of the above document fragments on the first 3 concept numbers (1-3) can be constructed by counting the number of times a word in each concept number appears in the document fragments. These vectors are:

Document No  Vector on first 3 concepts  Explanation
(1)  [2, 2, 2]  [The, a; little, small; boy, male]
(2)  [2, 0, 1]  [A, a; ; boy]
(3)  [1, 1, 1]  [The; minor; boy]

The graph in Figure 3 shows these 3 dimensional vectors pictorially.

In general, these ideas are extended to the approximately 812 concepts in the Macquarie Thesaurus, and all words in the documents. This means that the vectors are constructed in approximately 812 dimensions, and the vector theory carries over to these dimensions in exactly the same way, though it is of course hard to visualise the vectors in this hyperspace.

From this vector representation of the essay the ModelLength and StudentLength variables are calculated by determining the length of the vector in the normal manner, ie.
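The counting step above can be sketched directly, using the illustrative thesaurus and document fragments; the dictionary encoding and function name below are our own choices for the sketch.

```python
# Illustrative thesaurus: each word maps to its root concept number.
THESAURUS = {
    "the": 1, "a": 1,
    "little": 2, "small": 2, "minor": 2,
    "boy": 3, "male": 3,
    "large": 4, "funny": 5, "girl": 6,
    "some": 7, "day": 8, "lazy": 9,
}

def concept_vector(text, dims=3):
    """Count words per root concept over the first `dims` concepts."""
    vec = [0] * dims
    for word in text.lower().split():
        concept = THESAURUS.get(word)
        if concept is not None and concept <= dims:
            vec[concept - 1] += 1
    return vec

doc1 = "The little boy A small male"
doc2 = "A lazy boy A funny girl"
doc3 = "The large boy Some minor day"
print(concept_vector(doc1))  # [2, 2, 2]
print(concept_vector(doc2))  # [2, 0, 1]
print(concept_vector(doc3))  # [1, 1, 1]
```

Words whose concepts lie outside the first three dimensions (lazy, funny, girl, large, some, day) simply do not contribute, which is why document 2's second dimension is zero.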
Length = SquareRoot(x*x + y*y + ... + z*z),

where the vector is Vector(x, y, ..., z).

Also the StudentDotProduct variable can be calculated by determining the vector dot product between the model and student essay vectors in the normal manner, ie.

DotProduct = x1*x2 + y1*y2 + ... + z1*z2,

where the vectors are Vector1(x1, y1, ..., z1) and Vector2(x2, y2, ..., z2).
Next the variable CosTheta can be calculated in the normal manner, ie.

Cos(theta) = DotProduct(v1, v2) / (Length(v1) * Length(v2)).

If we assume that document 1 is the model answer, then we can see how close semantically documents 2 and 3 are to the model answer by looking at the closeness of their corresponding vectors. The angle between the vectors varies according to how "close" the vectors are. A small angle indicates that the documents contain similar content, whereas a large angle indicates that they do not have much common content. Angle Theta1 is the angle between the model answer vector and the vector for document 2, and angle Theta2 is the angle between the model answer vector and the vector for document 3.

The cosines of Theta1 and Theta2 can be used as measures of this closeness. If documents 2 and 3 were identical to the model answer, their vectors would be identical to the model answer vector, would be collinear with it, and would have a cosine of 1. If, on the other hand, they were completely different, and therefore orthogonal to the model answer vector, their cosines would be 0. Generally in practice, a document's cosine is between these upper and lower limits.

The variable CosTheta used in the scoring algorithm is this cosine computed for the document being scored.
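Putting the Length, DotProduct and Cos(theta) formulas together for the example vectors gives a straightforward sketch:

```python
import math

def cos_theta(v1, v2):
    """Cosine of the angle between two concept vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    length1 = math.sqrt(sum(a * a for a in v1))
    length2 = math.sqrt(sum(b * b for b in v2))
    return dot / (length1 * length2)

model = [2, 2, 2]                              # document 1, the model answer
print(round(cos_theta(model, [2, 0, 1]), 4))   # 0.7746 (document 2)
print(round(cos_theta(model, [1, 1, 1]), 4))   # 1.0 (document 3, collinear)
```

Document 3's vector points in exactly the same direction as the model answer vector, so its cosine is 1 even though its length differs; this is why the measure captures content similarity rather than document length.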
The variable VarRatio is determined from the number of non-zero dimensions in the student answer divided by the number of non-zero dimensions in the model answer.

For example, the number of concepts that are present in the model answer (document 1) above is 3. This can be determined from the number of non-zero counts in the numerical vector representation.

The number of concepts that are present in document 2 above is 2 - the second vector index is 0. To compute the VarRatio for document 2 we divide the non-zero concept count for document 2 by the non-zero concept count in the model answer, i.e. VarRatio = 2/3 = 0.67. The corresponding VarRatio for document 3 is 3/3 = 1.00.

This simple variable provides a remarkably strong predictor of essay scores, and is generally present as one of the components in the scoring algorithm.

To produce the following measures the conceptual model is used: NoStudentConcepts; NoModelConcepts; NonConceptualisedWordsRatio; RatioNPNouns; RatioNPAdjectives; RatioNPPrepositions; RatioNPConjunctions; RatioVPVerbs; RatioVPAdverbs; RatioVPAuxilliaries; RatioVPPrepositions; and RatioVPConjunctions.

These are determined as described above. The score and calculation of the measures is shown in Figure 4.
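The VarRatio computation over the same example vectors is short enough to sketch in a few lines (the function name is our own):

```python
def var_ratio(student_vec, model_vec):
    """Non-zero dimensions of the student vector divided by
    non-zero dimensions of the model answer vector."""
    student_nz = sum(1 for x in student_vec if x != 0)
    model_nz = sum(1 for x in model_vec if x != 0)
    return student_nz / model_nz

model = [2, 2, 2]                             # document 1, the model answer
print(round(var_ratio([2, 0, 1], model), 2))  # 0.67 (document 2)
print(var_ratio([1, 1, 1], model))            # 1.0  (document 3)
```

Note that VarRatio ignores how often each concept occurs; it measures only concept coverage, which is why it complements CosTheta.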
Once the essay is graded, feedback can be given on where the essay covered the correct concepts and where it did not. As shown in Figure 5, a count of each root concept in the graded essay and a count of each root concept expected in the answer is displayed by the height of a bar for each concept.

Further, a word in the essay can be selected and similar concepts in the essay will be displayed by highlighting them.

Also, by selecting a concept in the model answer essay, similar concepts in the marked essay are displayed by highlighting.

It is also possible to display synonyms of a selected root concept, as shown in Figure 6.

EXAMPLE

A regression equation was developed from about 100 human-graded training essays and an ideal or model answer. The document vectors described above are constructed. Values are then computed for many variables from the relationships between the content and vectors of the model answer and the training essays. Once the training has been performed, and the grading algorithm built, each unmarked essay is processed to obtain the values for the independent variables, and the regression equation is then applied. Generally CosTheta and VarRatio are significant predictors in the scoring equation.
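The training step can be sketched as an ordinary least-squares fit over the measures of hand-marked essays. The measure values and human grades below are invented purely for illustration; the system described here uses about 100 hand-marked essays and many more measures.

```python
import numpy as np

# Invented training data: one row of measures per hand-marked essay
# (columns: CosTheta, VarRatio) and the corresponding human grades.
measures = np.array([
    [0.95, 0.90],
    [0.70, 0.60],
    [0.50, 0.40],
    [0.85, 0.80],
])
human_grades = np.array([45.0, 28.0, 16.0, 39.0])

# Prepend an intercept column and solve for the coefficients.
X = np.hstack([np.ones((len(measures), 1)), measures])
coefs, *_ = np.linalg.lstsq(X, human_grades, rcond=None)

def grade(cos_theta, var_ratio):
    """Apply the fitted regression equation to an unmarked essay."""
    return coefs[0] + coefs[1] * cos_theta + coefs[2] * var_ratio

print(round(grade(0.80, 0.75), 1))  # 36.0 for this invented data
```

The fitted intercept and coefficients play the same roles as the intercept and predictor coefficients in the equations given below; with real data the fit is of course approximate rather than exact.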
In a trial, Year 10 high school students hand wrote essays on paper on the topic of "The School Leaving Age". Three trained human graders then graded these essays against a marking rubric. The essays, 390 in total, were then transcribed to Microsoft Word document format. The essay with the highest average human score was selected as the model answer. It had a score of 48.5 out of a possible 54, or 90%. In one test of the system, 100 essays were used to build the scoring algorithm. The scoring algorithm was built using the first 100 essays in the trial when ordered in ascending order of the identifier. The prediction equation was determined to be:

Grade = -22.35 + 11.00*CosTheta + 15.70*VarRatio + 7.64*Characters Per Word + 0.20*Number of NP Adjectives

This produces a grade out of 54. In this example only 4 independent variables are needed for the predictor equation.

The remaining 290 essays were then graded by the equation. The mean score for the human average grade for these 290 essays was 30.34, while the mean grade given by the computer automated grading was 29.45, a difference of 0.89. The correlation between the human and automated grades was 0.79. The mean absolute difference between the two was 3.90, representing an average error rate of 7.23% when scored out of 54 (the maximum possible human score). The correlations between the three humans amongst themselves were 0.81, 0.78 and 0.81.
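Applying the trial's prediction equation to a new essay is then direct substitution; the measure values passed in below are invented for illustration.

```python
def predict_grade(cos_theta, var_ratio, chars_per_word, n_np_adjectives):
    """The trial's prediction equation; produces a grade out of 54."""
    return (-22.35 + 11.00 * cos_theta + 15.70 * var_ratio
            + 7.64 * chars_per_word + 0.20 * n_np_adjectives)

g = predict_grade(0.80, 0.75, 4.5, 20)   # invented measure values
print(round(g, 1))              # about 36.6 out of 54
print(round(g * 100 / 54, 1))   # about 67.8 as a percentage
```

The second print shows the percentage scaling mentioned below: multiply by 100 and divide by the maximum score of 54.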
The benefits of averaging the scores from the human graders are shown by the fact that the correlation between the automated grading scores and the mean score of the three humans is higher, at 0.79, than the individual correlations at 0.67, 0.75 and 0.75.

Coefficients of the significant predictors, and the intercept, can be positive or negative. For example it would be expected that the coefficient of the CosTheta predictor would be positive, and the coefficient of SpellingErrors would be negative. However, because of mathematical quirks in the data, this may not always occur.

Various transformations of the predictor measures could also be used. They could include square roots and logarithms. These are typical transformations that are often useful in linear regression. The fourth root of the number of words in an essay is commonly found to be a useful predictor.

Other examples of equations that have been calculated in test batches of essays include the following.

Grade = 31.49 + 18.92*CosTheta + 17.07*VarRatio - 0.23*Ease - 1.02*Level
for a score out of 54.

Grade = 27. + 16.07*CosTheta + 19.06*VarRatio - 0.21*Ease - 0.71*Level
for a score out of 54.

Grade = -19.59 + 7.16*CosTheta + 12.64*VarRatio + 0.07*Number of NP Adjectives + 1.82*Level
for a score out of 30.

It is noted that the score can easily be scaled to, for example, be expressed as a percentage. As an example where the score is out of 54, the score can be multiplied by 100 and divided by 54 to get a percentage score.

The coefficients for CosTheta and VarRatio are typically between about 10 and 20 for a score out of about 30 to 50. To obtain a percentage score, coefficients of about 20 to 40 can be used.
While it is possible to devise a generic equation, for example:

score = 20 + 40*CosTheta + 40*VarRatio - 10*SpellingErrors - 10*GrammaticalErrors

better results are obtained by using regression analysis to determine the coefficients rather than fixing them at generic values.

A detailed set of flow charts is contained in Figure 7. A set of pseudo code explaining the flow charts is listed in Appendix 1.

A skilled addressee will realise that modifications and variations may be made to the present invention without departing from the basic inventive concept.

The present invention can be used in applications other than essay grading, such as in the area of document searching, where the "model answer" document is a document containing the search terms. Other applications and the manner of use of the present invention in those other applications will be apparent to those skilled in the art.
The present invention can be used in applications other than essay grading, such as in the area of machine document translation.

Such modifications and variations are intended to fall within the scope of the present invention, the nature of which is to be determined from the foregoing description.
Appendix 1. Pseudo Code for Automated Essay Grading System - an Explanation of the Flow Charts of Figure 7

1.0 MarkIT
* Structure Document (Model Answer) (2.0)
* Structure Document (Student Answer) (2.0)
* Compute Ratios Between Model Answer and Student Answer (10.2)
* Compute Student Mark

2.0 Structure Document (document)
* Chunk document into paragraphs (2.1)
* For each paragraph in the document (3.0)
  o Set all concepts' hit counts to zero (9.2)
  o Chunk paragraph into sentences (3.1)
  o For each sentence in the paragraph (4.0)
    * Word list = Chunk sentence into words (4.1.1)
    * Get a list of non-empty words from word list (4.1.2)
    * Tag each non-empty word with its Part of Speech (POS) [third party]
    * Chunk Sentence Into Phrases (4.1.4)
  o Compute total hit counts for each concept by adding up the concept's hit count and their related concepts' hit counts (9.3, 8.1)
  o Contextualise each word (3.2, 4.2, 5.2, 6.2, 7.2)
* Compute grammatical statistics (10.1)

4.1.4 Chunk sentence into phrases (word list)
* Current phrase type = Untyped
* Get the first three words from word list into word1, word2 and word3
* While word1 <> null
  o New phrase type = Look up phrase type (word1's POS, word2's POS, word3's POS) in table 1, from top to bottom (5.3)
  o If new phrase type <> current phrase type
    * Current phrase type = new phrase type
  o Add word1 to current phrase (5.1)
  o Word1 = word2, word2 = word3, word3 = next word from word list

5.1 Add Word into a phrase (word)
* Successful = Add word into current phrase row (6.1)
* If not successful
  o Current phrase row = new phrase row
  o New phrase row's current slot = 0
  o Add word into current phrase row (6.1)

6.1 Add Word into a phrase row (word)
* If row type > INVALID and word's POS <> NO POS
  o Search for next POS slot from current slot (inclusive) onwards (table 2)
  o If end of the row
    * Return false
  o Else
    * Slot word
    * Current slot = current slot + 1
    * Set word's concept (7.1)
    * Return true
* Else
  o Slot word
  o If word's POS <> NO POS
    * Set word's concept (7.1)
  o Return true

7.1 Set word's concept
* Get concept list (word, POS) (9.4)
* If concept list = null
  o Stemmed word = Stem word using Porter Stemmer [third party]
  o Get concept list (stemmed word, POS) (9.4)

9.4 Get concept list (word, POS)
* Concept list = Look up concepts related to word & POS in the database system
* If concept list <> null
  o For each concept number <= MAX CONCEPT NUMBER
    * Concept[number]'s hit count++
* Return concept list

7.2 Set word's most relevant concept
* If concept list <> null
  o Most relevant concept = one of the concepts with the highest total hit count

Claims

1. A method of comparing text based documents comprising: 5 lexically normalising each word of the text of a first document to form a first normalised representation; building a vector representation of the first document from the first normalised representation; lexically normalising each word of the text of a 10 second document to form a second normalised representation; building a vector representation of the second document from the second normalised representation; comparing the alignment of the vector representations 15 to produce a score of the similarity of the second document to the first document.
2. A method as claimed in claim 1, wherein the lexical normalisation converts each word in the respective 20 document into a representation of a root concept as defined in a thesaurus.
3. A method as claimed in claim 2, wherein each word is used to look up the root concept of the word in the 25 thesaurus.
4. A method as claimed in claim 2 or 3, wherein each root word is allocated a numerical value. 30 5. A method as claimed in any one of claims 1 to 4, wherein the normalisation process produces a numeric representation of the document. WO2006/119578 PCT/AU2006/000630 - 42 6. A method as claimed in any one of claims 2 to 4, wherein each normalised root concept forms a dimension of the vector representation. 5 7. A method as claimed in claim 6, wherein the number of occurrences of each normalised root concept is counted.
8. A method as claimed in claim 7, wherein the count of each normalised root concept forms the length of the 10 vector in the respective dimension of the vector representation.
9. A method as claimed in any one of claims 1 to 8, wherein the comparison of the alignment of the vector 15 representations produces the score by determining the cosine of an angle (theta) between the vectors.
10. A method as claimed in claim 9, wherein the cos(theta) is calculated from the dot product of the 20 vectors and the length of the vectors.
11. A method as claimed in any one of claims 2 to 4 and 6 to 8, wherein the number of root concepts in each document is counted.
12. A method as claimed in claim 11, wherein the count of concepts of the second document is compared to the count of concepts of the first document to produce a contribution to the score of the similarity of the second 30 document to the first document. WO2006/119578 PCT/AU2006/000630 - 43 13. A method as claimed in claim 12, wherein the contribution of each root concept of non-zero count is one. 5 14. A method as claimed in any one of claims 12 or 13, wherein the comparison is a ratio.
15. A method as claimed in any one of claims 1 to 14, wherein the first document is a model answer essay, the 10 second document is an essay to be marked and the score is a mark for the second essay.
16. A method as claimed in any one of claims 1 to 15, further comprising: 15 partitioning words of the first document into noun phrases and verb clauses; partitioning words of the second document into noun phrases and verb clauses; comparing the partitioning of the first document to 20 the second document to produce a contribution to the score of the similarity of the second document to the first document.
17. A system for comparing text based documents comprising:
means for lexically normalising each word of the text of a first document to form a first normalised representation;
means for building a vector representation of the first document from the first normalised representation;
means for lexically normalising each word of the text of a second document to form a second normalised representation;
means for building a vector representation of the second document from the second normalised representation;
means for comparing the alignment of the vector representations to produce a score of the similarity of the second document to the first document.
18. A system as claimed in claim 17, further comprising 10 means for looking up a thesaurus to find a root concept from each word in the respective document and for providing said root concept to the respective means for lexically normalising each word in the respective document, wherein said respective means converts each word 15 into a representation of the corresponding root concept.
19. A system as claimed in claim 18, wherein the respective means for building a vector representation forms a dimension of the vector representation from each 20 normalised root concept.
20. A system as claimed in claim 19, wherein the respective means for building a vector representation counts the number of occurrences of each normalised root 25 concept and said count forms the length of the vector in the respective dimension of the vector representation.
21. A system as claimed in any one of claims 17 to 20, wherein the means for comparing the alignment of the 30 vector representations produces the score by determining the cosine of an angle (theta) between the vectors. WO2006/119578 PCT/AU2006/000630 - 45 22. A system as claimed in claim 21, wherein the means for comparing the alignment of the vector representations is configured to calculate the cos(theta) from the dot product of the vectors and the length of the vectors. 5
23. A system as claimed in claim 20, wherein the respective means for building a vector representation counts the number of non-zero root concepts in the respective document. 10
24. A system as claimed in claim 23, wherein the means for comparing the alignment of the vector representations compares the count of concepts of the second document to the count of concepts of the first document to produce a 15 contribution to the score of the similarity of the second document to the first document.
25. A method of comparing text based documents comprising: 20 partitioning words of a first document into noun phrases and verb clauses; partitioning words of a second document into noun phrases and verb clauses; comparing the partitioning of the first document to 25 the second document to produce a score of the similarity of the second document to the first document.
26. A method as claimed in claim 25, wherein each word in the document is lexically normalised into root concepts. 30
27. A method as claimed in claim 25 or 26, wherein the comparison of the partitioning of the documents is conducted by determining a ratio of the number of one or more types of noun phrase components in the second document to the number of corresponding types of noun phrase components in the first document and a ratio of the number of one or more types of verb clause components in the second document to the number of corresponding types of verb clause components in the first document, wherein the ratios contribute to the score.
28. A method as claimed in claim 27, wherein the types of 10 noun phrase components are: noun phrase nouns, noun phrase adjectives, noun phrase prepositions and noun phrase conjunctions.
29. A method as claimed in claim 27 or 28, wherein the 15 types of clause components are: verb clause verbs, verb clause adverbs, verb clause auxiliaries, verb clause prepositions and verb clause conjunctions.
30. A method as claimed in claim 24, wherein the first 20 document is a model answer essay, the second document is an essay to be marked and the score is a mark for the second essay.
31. A system for comparing text based documents comprising:
means for partitioning words of a first document into noun phrases and verb clauses;
means for partitioning words of a second document into noun phrases and verb clauses;
means for comparing the partitioning of the first document to the second document to produce a score of the similarity of the second document to the first document.

32. A method of comparing text based documents comprising:
lexically normalising each word of the text of a first document to form a first normalised representation;
determining the number of root concepts in the first document from the first normalised representation;
lexically normalising each word of the text of a second document to form a second normalised representation;
determining the number of root concepts in the second document from the second normalised representation;
comparing the number of root concepts in the first document to the number of root concepts in the second document to produce a score of the similarity of the second document to the first document.
33. A method as claimed in claim 32, further comprising: partitioning words of the first document into noun phrases and verb clauses; 20 partitioning words of the second document into noun phrases and verb clauses; comparing the partitioning of the first document to the second document to produce a contribution to the score of the similarity of the second document to the first 25 document.
34. A system for comparing text based documents comprising: means for lexically normalising each word of the text 30 of a first document to form a first normalised representation; means for determining the number of root concepts in the first document from the first normalised WO2006/119578 PCT/AU2006/000630 - 48 representation; means for lexically normalising each word of the text of a second document to form a second normalised representation; 5 means for determining the number of root concepts in the second document from the second normalised representation; means for comparing the number of root concepts in the first document to the number of root concepts in the 10 second document to produce a score of the similarity of the second document to the first document.
35. A method of grading a text based essay document comprising: 15 providing a model answer; providing a plurality of hand marked essays; providing a plurality of essays to be graded; providing an equation for grading essays, wherein the equation has a plurality of measures with each measure 20 having a coefficient, the equation producing a score of the essay being calculated by summing each measure as modified by its respective coefficient, each measure being determined by comparing each essay to be graded with the model essay; 25 determining the coefficients from the hand marked essays; applying the equation to each essay to be graded to produce a score for each essay. 30 36. A method according to claim 35, wherein determining the coefficients from the hand marked essays is performed by linear regression. WO2006/119578 PCT/AU2006/000630 - 49 37. A method according to claim 35 or 36, wherein the measures include the scores produced by any one of the methods of comparing text based documents as claimed in any one of claims 1 to 16, 25 to 30 or 32 to 33. 5
38. A system for grading a text based essay document comprising: means for determining coefficients in an equation from a plurality of hand marked essays, wherein the 10 equation is for grading an essay to be marked, the equation comprising a plurality of measures with each measure having one of the coefficients, the equation producing a score for the essay which is calculated by summing each measure as modified by its respective 15 coefficient, means for determining each measure by comparing each essay to be graded with the model essay; means for applying the equation to each essay to be graded to produce a score for each essay from the 20 determined coefficients and determined measures.
39. A method of providing visual feedback on an essay grade comprising:
displaying a count of each root concept in the graded essay and a count of each root concept expected in the answer.
40. A method as claimed in claim 39, wherein each root concept corresponds to a root meaning of a word as defined 30 by a thesaurus.
41. A method as claimed in claim 39 or 40, wherein the count of each root concept is determined by lexically WO2006/119578 PCT/AU2006/000630 - 50 normalising each word in the graded essay to produce a representation of the root meanings in the graded essay and counting the occurrences of each root meaning in the graded essay. 5
42. A method as claimed in claim 41, wherein the count of each root concept is determined by lexically normalising each word in the model essay to produce a representation of the root meanings in the model essay and counting the 10 occurrences of each root meaning in the model essay.
43. A method as claimed in any one of claims 39 to 42, further comprising selecting a concept in the graded essay and displaying words belonging to that concept in the 15 graded essay.
44. A method as claimed in claim 43, wherein words related to other concepts in the graded essay are also displayed. 20
45. A method as claimed in any one of claims 39 to 44, further comprising selecting a concept in model essay and displaying words belonging to that concept in the model essay. 25
46. A method as claimed in claim 45, wherein words related to other concepts in the model essay are also displayed. 30 47. A method as claimed in any one of claims 39 to 46, further comprising displaying synonyms to a selected root concept. WO2006/119578 PCT/AU2006/000630 - 51 48. A system for providing visual feedback on an essay grading comprising: means for displaying a count of each root concept in the graded essay and a count of each root concepts 5 expected in the answer.
49. A method of numerically representing a document comprising: lexically normalising each word of the document; partitioning the normalised words of the document into parts, with each part designated as one of a noun phrase or a verb clause.
50. A method as claimed in claim 49, wherein a plurality of words are used to determine whether each part is a noun phrase or a verb clause.
51. A method as claimed in claim 49, wherein the first three words of each part are used to determine whether the part is a noun phrase or a verb clause.
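As a rough illustration of claims 50 and 51, the part type could be decided by a majority vote over the grammatical tags of the first three words. The tag names and the voting rule here are assumptions for the sketch, not taken from the specification.

```python
# Hypothetical set of noun-like grammatical tags.
NOUNISH_TAGS = {"determiner", "adjective", "noun", "pronoun"}

def classify_part(tagged_words, lookahead=3):
    # Vote over the first few (word, tag) pairs: if most are
    # noun-like, call the part a noun phrase, else a verb clause.
    window = tagged_words[:lookahead]
    nounish = sum(1 for _, tag in window if tag in NOUNISH_TAGS)
    return "noun phrase" if 2 * nounish > len(window) else "verb clause"
```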
52. A method as claimed in any one of claims 49 to 51, wherein each word in a part is allocated to a column-wise slot of a noun phrase or verb clause table.
53. A method as claimed in claim 52, wherein each slot of the table is allocated to a grammatical type of word.
54. A method as claimed in claim 53, wherein words are allocated sequentially to slots in the appropriate table if they are of the grammatical type of the next slot.

55. A method as claimed in claim 54, wherein in the event that the next word does not belong in the next slot, the slot is left blank and the sequential allocation of slots moves on one position.
56. A method as claimed in claim 55, wherein in the event that the next word does not belong to the table type of the current part then this indicates an end to the current part.
57. A method as claimed in any one of claims 52 to 56, wherein the tables have a plurality of rows such that when the next word does not fit into the rest of the row following placement of the current word in the current part, but the word does not indicate an end to the current part, then it is placed in the next row of the table.
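The column-wise slot allocation of claims 52 to 55 can be sketched for a single table row as follows. The three-column slot layout and the tag names are hypothetical; the specification would define the actual grammatical types of each slot.

```python
# Hypothetical slot layout: each column of the noun phrase table
# is reserved for one grammatical type, in sequence (claim 53).
NP_SLOTS = ["determiner", "adjective", "noun"]

def fill_noun_phrase_row(tagged_words):
    # Allocate (word, tag) pairs sequentially to slots (claim 54).
    # If a word does not match the next slot's type, that slot is
    # left blank and allocation moves on one position (claim 55).
    row = [None] * len(NP_SLOTS)
    position = 0
    for word, tag in tagged_words:
        while position < len(NP_SLOTS) and NP_SLOTS[position] != tag:
            position += 1  # leave the non-matching slot blank
        if position == len(NP_SLOTS):
            break  # word does not fit the rest of this row
        row[position] = word
        position += 1
    return row
```

A fuller implementation would continue onto the next row of the table when a word fits the table type but not the remaining slots (claim 57), and would close the part when the word does not belong to the table type at all (claim 56).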
58. A system for numerically representing a document comprising: means for lexically normalising each word of the document; means for partitioning the normalised words of the document into parts, with each part designated as one of a noun phrase or a verb clause.
59. A computer program configured to control a computer to perform any one of the methods as claimed in any one of claims 1 to 16, 25 to 30, 32 to 33, 35 to 37, 39 to 47 or 49 to 57.
60. A computer program configured to control a computer to operate as any one of the systems as claimed in any one of claims 17 to 24, 31, 34, 38, 48, or 58.

61. A computer readable storage medium comprising a computer program as claimed in claim 59 or 60.
AU2006246317A 2005-05-13 2006-05-12 Comparing text based documents Abandoned AU2006246317A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2006246317A AU2006246317A1 (en) 2005-05-13 2006-05-12 Comparing text based documents

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
AU2005902424 2005-05-13
AU2005902424A AU2005902424A0 (en) 2005-05-13 Formative assessment visual feedback in computer graded essays
AU2005903032 2005-06-10
AU2005903032A AU2005903032A0 (en) 2005-06-10 Comparing text based documents
PCT/AU2006/000630 WO2006119578A1 (en) 2005-05-13 2006-05-12 Comparing text based documents
AU2006246317A AU2006246317A1 (en) 2005-05-13 2006-05-12 Comparing text based documents

Publications (1)

Publication Number Publication Date
AU2006246317A1 true AU2006246317A1 (en) 2006-11-16

Family

ID=38819890

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2006246317A Abandoned AU2006246317A1 (en) 2005-05-13 2006-05-12 Comparing text based documents

Country Status (1)

Country Link
AU (1) AU2006246317A1 (en)

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application