CN113420184A - MongoDB-based English grammar library packaging and reading-writing method - Google Patents

MongoDB-based English grammar library packaging and reading-writing method

Info

Publication number
CN113420184A
Authority
CN
China
Prior art keywords
phenomenon
list
grammar
lexical
packaging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010747656.5A
Other languages
Chinese (zh)
Inventor
戴翰波
李辉
王丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Huiren Information Technology Co ltd
Original Assignee
Wuhan Huiren Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Huiren Information Technology Co ltd filed Critical Wuhan Huiren Information Technology Co ltd
Priority to CN202010747656.5A
Publication of CN113420184A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/80: Information retrieval; Database structures therefor; File system structures therefor of semi-structured data, e.g. markup language structured data such as SGML, XML or HTML
    • G06F 16/81: Indexing, e.g. XML tags; Data structures therefor; Storage structures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/20: Software design
    • G06F 8/24: Object-oriented

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Machine Translation (AREA)

Abstract

The invention provides a MongoDB-based method for storing and calling semi-structured grammar-phenomenon data. It is used for the semi-structured storage of grammar knowledge point data generated from textbooks, extracurricular reading materials and the like used in the education process, together with a corresponding reading and calling method. MongoDB-based packaging of a primary-school English grammar library stores grammar knowledge in a semi-structured manner, and the whole packaging process runs automatically without manual intervention. When a user needs to query a certain knowledge point, the packaged grammar library can be filtered by the corresponding level information; for example, the related knowledge points of the textbook for a given grade can be looked up, so that all results are returned at once with little time consumed. The grammar library can also help teachers automatically generate test questions for the related knowledge points.

Description

MongoDB-based English grammar library packaging and reading-writing method
Technical Field
The invention belongs to MongoDB-based semi-structured data storage and calling, and particularly relates to a construction method for the semi-structured storage of grammar phenomenon results obtained from English text analysis. A companion patent (the first patent), a method for automatically analyzing the grammar phenomena of English text, outputs each phenomenon in the form sentence -> lexical -> ... -> leaf node, or sentence -> syntactic -> ... -> leaf node (the output of each grammar phenomenon is the record of the path from the root node of the grammar tree to a leaf node). These results are stored semi-structured in MongoDB and then packaged according to grammar type, and a related read-write implementation method is provided.
Background
In the current domestic education process, the explanation and learning of English grammar knowledge points mainly depend on what happens to be encountered in courses or practice problems, so systematic learning is not possible; where systematic learning is attempted, it still relies on the summaries compiled by the teachers giving the lessons, whose results are often limited, and operations such as filling in gaps and deleting entries are very cumbersome. The most important problem is that English grammar knowledge is extensive; summarizing, organizing and packaging it purely by hand is an enormous amount of work, and subsequent maintenance (adding new related content) is also very labor-intensive.
Therefore, the summarization, induction and organization of grammar knowledge points is urgently needed in the existing education process, and manpower urgently needs to be freed by existing technology. An existing relational database imposes a strict definition on its relational schema: if the data changes and an attribute needs to be added, the system must be changed substantially, and for semi-structured data many attribute columns of primitive types are left as null values, which wastes storage space and affects system performance. A non-relational database solves these problems well; among them, MongoDB is the most popular open-source NoSQL (non-relational) database, with a key/value storage mode, a loosely structured JSON format, simple read-write operations and good horizontal scalability.
Existing related theoretical research has not provided a method for packaging English grammar knowledge points, only processing procedures for other kinds of data. For example, the patent CN 104021210 A provides a read-write method for a MongoDB cluster that stores geographic data semi-structured in GeoJSON format; this is a storage scheme for geographic data, and apart from it no semi-structured storage method for other types of data has been proposed. Other methods of summarizing and organizing grammar knowledge points mainly rely on manual annotation and sorting, with the teacher explaining the articles or exercises encountered, so there is hardly any systematic guidance of the summarization process.
The prior art lacks a packaging method for English grammar knowledge points and a corresponding read-write calling method, that is, it lacks a suitable semi-structured framework for grammar knowledge. Consequently there is no database that systematically organizes the numerous English grammar knowledge points, and therefore no extended functions built on such a database can be supported, such as automatically generating test questions for related knowledge points from the database, or unified learning of a certain class of knowledge points across the related grades.
Disclosure of Invention
In view of the above situation, the present invention provides a MongoDB-based method for storing and calling semi-structured grammar-phenomenon data, which is used for the semi-structured storage of grammar knowledge point data generated from textbooks or extracurricular reading materials used in the education process, together with a corresponding reading and calling method. The grammar library packaging method uses the first patent, a method for automatically analyzing the grammar phenomena of English text (that patent automatically analyzes the grammar of English text and outputs lexical and syntactic grammar phenomena), to automatically analyze the grammar knowledge points of English textbooks of any grade. The grammar knowledge point results of the textbooks of all grades are then processed semi-structured using the storage mode of the present patent and stored in a MongoDB database, and a corresponding read-write calling method for this grammar database storage scheme is further provided.
The grammar library can be continuously expanded and its content enriched. Whether the grammar phenomenon results are obtained with our first patent, the method for automatically analyzing grammar phenomena of English text, or summarized manually, the data can be arranged by a preprocessing step into the input form required by the packaging process (a chain structure in which grammar knowledge points are continuously refined). The packaging and calling methods of the present invention then allow simple and fast grammar knowledge learning, or functional extensions based on the grammar knowledge.
The MongoDB-based English grammar library packaging and read-write calling method comprises an English grammar library packaging process and a corresponding grammar library read-write calling process, and the specific technical route is as follows.
The packaging of the English grammar library mainly comprises JSON format normalization of the chain input data and storage of the normalized data in a MongoDB database. The JSON storage format is designed as follows: it mainly comprises four large blocks, namely the analyzed sentence text (Sentence), the corresponding difficulty level (Level), the lexical phenomena (Lexical) and the syntactic phenomena (Syntactic). The syntactic phenomenon module is subdivided into 10 types, with key Syntactic; the lexical phenomenon module is also subdivided into 10 types and is then packaged in blocks along four aspects (morphological change, fixed expression/collocation, word function and word part-of-speech category), inserted into the same document as the syntactic phenomena, with key Lexical.
The JSON normalization proceeds through the following steps.
1) The key is Sentence, and the corresponding value is the analyzed English text, of string type.
2) The key is Level, and the corresponding value is the school grade or Lexile level corresponding to the sentence, of string type; if the label is unknown, it is marked DN.
3) The values of the Syntactic key and the Lexical key are stored as lists. Each grammar phenomenon of the sentence is read, and the first node of its chain structure determines whether it is a syntactic or a lexical phenomenon; if syntactic, go to 4), and if lexical, go to 5).
4) Add the syntactic phenomenon to the list whose key is Syntactic; go to 7).
5) Traverse the chain structure, determine from the second node which of the 10 categories it belongs to, package it into the list of the corresponding category, subdivide each list, and go to 6).
6) If the chain structure contains a pattern of the form {add the suffix ..., change to the ... form, morphological change ...}, it is packaged into a list called the morphological change module; similarly, if the chain structure contains a pattern of the form {fixed expression/collocation ..., the structure is ..., a word/structure expressing ...}, it is packaged into a list called the fixed expression/collocation module; if the chain structure contains a pattern of the form {... word function, the ... word acts as ..., preceded by ... and followed by ...}, it is packaged into a list called the word function module; the remaining chain structures are packaged into a list called the word part-of-speech category module; then go to 7).
7) The four modules described above (i.e. four different lists) are added in order to the list whose key is Lexical; go to 8).
8) When all grammar phenomena of the sentence have been added and packaged, the JSON normalization process ends, and the packaged JSON-format data is added to the collection of the corresponding level in MongoDB.
The read-write calling process of the grammar library is implemented by three classes, MongoRLexical, MongoRSyntactic and MongoWrite, which are responsible, respectively, for reading the lexical phenomena in the grammar library, reading the syntactic phenomena, and adding and writing new content. Each of the three classes contains a locating subclass and a result-display or data-writing subclass, responsible for the concrete function implementation.
Drawings
FIG. 1 is a general process flow diagram of the grammar library packaging and reading and writing method of the present invention.
FIG. 2 shows a semi-structured storage manner of the English grammar library in the embodiment of the present invention.
Fig. 3 is a JSON format standardization example in the embodiment of the present invention.
Fig. 4 is a schematic diagram of an input structure in an embodiment of the present invention.
FIG. 5 is a diagram illustrating the read operations of the grammar library calling classes in an embodiment of the present invention.
FIG. 6 is a diagram illustrating the write operation of the grammar library calling classes in an embodiment of the present invention.
Detailed Description
The MongoDB-based English grammar library packaging and reading-writing method mainly comprises an English grammar library packaging process and a corresponding grammar library read-write calling process.
For the first part, the English grammar library packaging process, the design of the storage format is the most important. English grammar data often comes in different patterns; with a relational database a large number of attribute values would be null, wasting data space. Non-relational storage imposes no strict requirements on the data format, needs no fixed schema, and can be scaled horizontally without extra work. The English grammar library packaging and read-write calling method is therefore built on the non-relational database MongoDB.
For the second part, the read-write calling method, a call usually only needs to be made for a level and a sentence according to the user's course requirements; a selection strategy is then applied within the lexical and syntactic parts, and the required lexical and syntactic phenomena are returned.
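As a rough illustration of such a level-and-sentence-restricted call, the sketch below uses pymongo; the database name grammar_db, the per-level collection name grade3, and the sample sentence are hypothetical assumptions for illustration, not the patented implementation.

    # Hypothetical pymongo sketch of a read call restricted to a level and a sentence.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    collection = client["grammar_db"]["grade3"]  # assumed: one collection per difficulty level

    # Return only the Lexical and Syntactic phenomena of the requested sentence.
    doc = collection.find_one(
        {"Level": "Grade 3", "Sentence": "Be quiet!"},
        {"_id": 0, "Lexical": 1, "Syntactic": 1},
    )
    print(doc)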
A first module: packaging of the English grammar library.
The construction process of the English grammar library mainly deals with data in the form of a chain structure. The chaining process continuously refines grammar knowledge points, finally stepping down to a knowledge point that needs no further refinement, i.e. a line-to-point connection state, for example: Lexical -> adjective comparative -> fixed expression -> the "the + adj." structure; or Lexical -> adjective comparative -> morphological change -> the comparative ends in -er; or Syntactic -> sentence type -> imperative -> affirmative -> the "Be + adj.!" structure.
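For illustration only, such chains can be written down as ordered lists of node labels; this Python rendering is an assumption about the input form, with node wording taken from the examples above.

    # Each grammar phenomenon is one chain from the root of the grammar tree to a leaf node,
    # represented here as an ordered list of node labels.
    chains = [
        ["Lexical", "adjective comparative", "fixed expression", "'the + adj.' structure"],
        ["Lexical", "adjective comparative", "morphological change", "the comparative ends in -er"],
        ["Syntactic", "sentence type", "imperative", "affirmative", "'Be + adj.!' structure"],
    ]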
The construction of the grammar library has two parts: first, JSON format normalization of the input data (the set of chain grammar phenomena); second, storage of the normalized data in the MongoDB database. The core lies in the design of the JSON storage format in the first part.
The JSON storage format is mainly divided into four blocks: the analyzed sentence text (Sentence), the corresponding difficulty level (Level), the lexical phenomena (Lexical) and the syntactic phenomena (Syntactic). The difficulty level is recorded as the grade corresponding to the textbook or the level in Lexile graded reading. The syntactic phenomenon module is subdivided into 10 classes; these 10 classes are not subdivided further and are stored together as one document (called a document in MongoDB, equivalent to a data record row in a relational database), recorded without order, with key Syntactic. The lexical phenomenon module is also subdivided into 10 classes, namely nouns, numerals, adverbs, verbs, pronouns, prepositions, adjectives, determiners, articles and conjunctions; under these 10 major categories, block packaging is carried out along four aspects (morphological change, fixed expression/collocation, word function and word part-of-speech category), which are inserted into the same document as the syntactic phenomena, with key Lexical.
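A minimal sketch of one stored document under this format follows; the sentence, level and phenomena are hypothetical, and the exact nesting of the four lexical modules inside the Lexical list is an assumption drawn from steps 5) to 7) below.

    # Hypothetical example of one MongoDB document in the described JSON storage format.
    example_document = {
        "Sentence": "Be quiet!",    # analyzed sentence text
        "Level": "Grade 3",         # textbook grade or Lexile level
        "Syntactic": [
            # each entry is one chain from root node to leaf node
            ["Syntactic", "sentence type", "imperative", "affirmative", "'Be + adj.!' structure"],
        ],
        "Lexical": [
            # four modules in order: morphological change, fixed expression/collocation,
            # word function, word part-of-speech category
            [],
            [],
            [["Lexical", "adjective", "word function", "acts as predicative"]],
            [["Lexical", "adjective", "word part-of-speech category", "descriptive adjective"]],
        ],
    }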
For the JSON normalization of all grammar phenomena of one sentence, the specific processing steps are as follows (a code sketch of these steps is given after the list).
1) The key is Sentence, and the corresponding value is the analyzed English text, of string type.
2) The key is Level, and the corresponding value is the school grade or Lexile level corresponding to the sentence, of string type; if the label is unknown, it is marked DN.
3) The values of the Syntactic key and the Lexical key are stored as lists. Each grammar phenomenon of the sentence is read, and the first node of its chain structure determines whether it is a syntactic or a lexical phenomenon; if syntactic, go to 4), and if lexical, go to 5).
4) Add the syntactic phenomenon to the list whose key is Syntactic; go to 7).
5) Traverse the chain structure, determine from the second node which of the 10 categories it belongs to, package it into the list of the corresponding category, subdivide each list, and go to 6).
6) If the chain structure contains a pattern of the form {add the suffix ..., change to the ... form, morphological change ...}, it is packaged into a list called the morphological change module; similarly, if the chain structure contains a pattern of the form {fixed expression/collocation ..., the structure is ..., a word/structure expressing ...}, it is packaged into a list called the fixed expression/collocation module; if the chain structure contains a pattern of the form {... word function, the ... word acts as ..., preceded by ... and followed by ...}, it is packaged into a list called the word function module; the remaining chain structures are packaged into a list called the word part-of-speech category module; then go to 7).
7) The four modules described above (i.e. four different lists) are added in order to the list whose key is Lexical; go to 8).
8) When all grammar phenomena of the sentence have been added and packaged, the JSON normalization process ends, and the packaged JSON-format data is added to the collection of the corresponding level in MongoDB.
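Steps 1) to 8) can be sketched in Python as below; the function and helper names (normalize, classify_lexical) and the keyword tests used to recognize the four lexical modules are illustrative assumptions rather than the patented implementation.

    # Simplified sketch of JSON normalization steps 1)-8).
    def classify_lexical(chain):
        """Assumed pattern test from step 6): 0 = morphological change module,
        1 = fixed expression/collocation module, 2 = word function module,
        3 = word part-of-speech category module."""
        text = " ".join(chain)
        if "morphological change" in text:
            return 0
        if "fixed expression" in text or "collocation" in text:
            return 1
        if "word function" in text:
            return 2
        return 3

    def normalize(sentence, level, chains):
        doc = {
            "Sentence": sentence,                  # step 1): analyzed English text
            "Level": level if level else "DN",     # step 2): grade/Lexile level, DN if unknown
            "Syntactic": [],                       # step 3): list-valued keys
            "Lexical": [[], [], [], []],           # four lexical modules, steps 6)-7)
        }
        for chain in chains:
            if chain[0] == "Syntactic":            # step 3): first node decides the branch
                doc["Syntactic"].append(chain)     # step 4)
            else:                                  # steps 5)-6): lexical phenomenon
                doc["Lexical"][classify_lexical(chain)].append(chain)
        return doc                                 # step 8): ready to insert into the level's collection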
A second module: read-write calling methods of the grammar library.
For the packaged grammar library, three classes perform the corresponding read-write calls; the read operations are shown in FIG. 5, and a brief description of the write calling method is given in FIG. 6.
The class MongoRLexical is responsible for reading the lexical phenomena in the grammar library. Its subclass RL_Location_Layer locates the position of a specific sub-list within the value list under the Lexical key; its subclass RL_Result_Show returns all data in the sub-list located by RL_Location_Layer.
The class MongoRSyntactic is responsible for reading the syntactic phenomena in the grammar library. Its subclass RS_Location_Layer traverses the list under the Syntactic key, accesses each chain of data, and locates the data the user requires; for example, to locate sentence types, it first traverses the chain data and locates the chains containing a sentence-type node. Its subclass RS_Result_Show packs the data located by RS_Location_Layer into a list and returns it to the user.
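The two read classes might look roughly as follows in Python with pymongo; the locating and result-display subclasses are rendered as methods for brevity, and the query details are assumptions rather than the patented implementation.

    # Hypothetical sketch of the read classes; the matching logic is an assumption.
    class MongoRLexical:
        """Reads lexical phenomena from the grammar library."""
        def __init__(self, collection):
            self.collection = collection

        def rl_location_layer(self, sentence, module_index):
            # Locate a specific sub-list (one of the four lexical modules) under the Lexical key.
            doc = self.collection.find_one({"Sentence": sentence}, {"_id": 0, "Lexical": 1})
            return doc["Lexical"][module_index] if doc else []

        def rl_result_show(self, sentence, module_index):
            # Return all data in the located sub-list.
            return list(self.rl_location_layer(sentence, module_index))

    class MongoRSyntactic:
        """Reads syntactic phenomena from the grammar library."""
        def __init__(self, collection):
            self.collection = collection

        def rs_location_layer(self, node_label):
            # Traverse the lists under the Syntactic key and keep the chains that
            # contain the wanted node, e.g. node_label = "sentence type".
            located = []
            for doc in self.collection.find({}, {"_id": 0, "Syntactic": 1}):
                located += [chain for chain in doc.get("Syntactic", []) if node_label in chain]
            return located

        def rs_result_show(self, node_label):
            # Pack the located chains into a list and return them to the user.
            return list(self.rs_location_layer(node_label))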
The class MongoWrite is responsible for adding and writing new content to the grammar library. Its subclass W_Location_Layer locates the position in the grammar library, first finding the collection corresponding to the content to be added (the collections are organized by level, so only the collection of the corresponding difficulty level needs to be found). Its subclass W_WriteData first performs JSON structure normalization on the data and then inserts the normalized data into the collection located by W_Location_Layer.
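A matching sketch of the write class is given below; it reuses the normalize function sketched above, and the per-level collection naming is an assumption.

    # Hypothetical sketch of the write class.
    class MongoWrite:
        """Adds and writes new content into the grammar library."""
        def __init__(self, database):
            self.database = database

        def w_location_layer(self, level):
            # Collections are organized by level, so locating means picking
            # the collection of the corresponding difficulty level.
            return self.database[level]

        def w_write_data(self, sentence, level, chains):
            # Normalize the data into the JSON structure first, then insert it
            # into the collection located by w_location_layer.
            document = normalize(sentence, level, chains)
            return self.w_location_layer(level).insert_one(document)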

Claims (4)

1. The MongoDB-based English grammar library packaging and reading-writing method is characterized in that: the method comprises two parts, namely an English grammar library packaging process and a corresponding grammar library read-write calling process;
the English grammar library packaging process mainly consists of the design of a JSON storage format, JSON format normalization according to this storage format, and storage of the formatted data in a MongoDB database;
the read-write calling method is usually called only for a level and a sentence according to the user's course requirements; a selection strategy is then applied within the lexical and syntactic parts, and the required lexical and syntactic phenomena are returned.
2. The English grammar library packaging process according to claim 1, characterized in that: JSON format normalization is performed on the chain data, and the normalized data is stored in the MongoDB database.
3. The read-write calling method according to claim 1, characterized in that: three classes are respectively responsible for reading and writing the grammar phenomena; the class MongoRLexical is responsible for reading the lexical phenomena in the grammar library; the class MongoRSyntactic is responsible for reading the syntactic phenomena in the grammar library; the class MongoWrite is responsible for the adding and writing operation of new content of the grammar library.
4. The JSON format normalization according to claim 2, characterized in that the JSON storage format design mainly consists of four parts: the analyzed sentence text (Sentence), the corresponding difficulty level (Level), the lexical phenomena (Lexical) and the syntactic phenomena (Syntactic);
the difficulty level is recorded as the grade corresponding to the textbook or the level in Lexile graded reading; the syntactic phenomenon module is subdivided into 10 classes; these 10 classes are not subdivided further and are stored together as one document (called a document in MongoDB, equivalent to a data record row in a relational database), recorded without order, with key Syntactic; the lexical phenomenon module is also subdivided into 10 classes, namely nouns, numerals, adverbs, verbs, pronouns, prepositions, adjectives, determiners, articles and conjunctions; under these 10 major categories, block packaging is carried out along four aspects (morphological change, fixed expression/collocation, word function and word part-of-speech category), which are inserted into the same document as the syntactic phenomena, with key Lexical;
for the JSON normalization of all grammar phenomena of one sentence, the specific processing steps are as follows:
1) the key is Sentence, and the corresponding value is the analyzed English text, of string type;
2) the key is Level, and the corresponding value is the school grade or Lexile level corresponding to the sentence, of string type; if the label is unknown, it is marked DN;
3) the values of the Syntactic key and the Lexical key are stored as lists; each grammar phenomenon of the sentence is read, and the first node of its chain structure determines whether it is a syntactic or a lexical phenomenon; if syntactic, go to 4), and if lexical, go to 5);
4) add the syntactic phenomenon to the list whose key is Syntactic; go to 7);
5) traverse the chain structure, determine from the second node which of the 10 categories it belongs to, package it into the list of the corresponding category, subdivide each list, and go to 6);
6) if the chain structure contains a pattern of the form {add the suffix ..., change to the ... form, morphological change ...}, it is packaged into a list called the morphological change module; similarly, if the chain structure contains a pattern of the form {fixed expression/collocation ..., the structure is ..., a word/structure expressing ...}, it is packaged into a list called the fixed expression/collocation module; if the chain structure contains a pattern of the form {... word function, the ... word acts as ..., preceded by ... and followed by ...}, it is packaged into a list called the word function module; the remaining chain structures are packaged into a list called the word part-of-speech category module; then go to 7);
7) the four modules described above (i.e. four different lists) are added in order to the list whose key is Lexical; go to 8);
8) when all grammar phenomena of the sentence have been added and packaged, the JSON normalization process ends, and the packaged JSON-format data is added to the collection of the corresponding level in MongoDB.
CN202010747656.5A 2020-07-30 2020-07-30 MongoDB-based English grammar library packaging and reading-writing method Pending CN113420184A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010747656.5A CN113420184A (en) 2020-07-30 2020-07-30 MongoDB-based English grammar library packaging and reading-writing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010747656.5A CN113420184A (en) 2020-07-30 2020-07-30 MongoDB-based English grammar library packaging and reading-writing method

Publications (1)

Publication Number Publication Date
CN113420184A true CN113420184A (en) 2021-09-21

Family

ID=77711547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010747656.5A Pending CN113420184A (en) 2020-07-30 2020-07-30 MongoDB-based English grammar library packaging and reading-writing method

Country Status (1)

Country Link
CN (1) CN113420184A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130124545A1 (en) * 2011-11-15 2013-05-16 Business Objects Software Limited System and method implementing a text analysis repository
CN103902651A (en) * 2014-02-19 2014-07-02 南京大学 Cloud code query method and device based on MongoDB
CN103823794A (en) * 2014-02-25 2014-05-28 浙江大学 Automatic question setting method about query type short answer question of English reading comprehension test
CN104008209A (en) * 2014-06-20 2014-08-27 南京大学 Reading-writing method for MongoDB cluster geographic data stored with GeoJSON format structuring method
CN104021210A (en) * 2014-06-20 2014-09-03 南京大学 Geographic data reading and writing method of MongoDB cluster of geographic data stored in GeoJSON-format semi-structured mode
US20180121496A1 (en) * 2016-11-03 2018-05-03 Pearson Education, Inc. Mapping data resources to requested objectives
CN110009956A (en) * 2019-04-22 2019-07-12 上海乂学教育科技有限公司 English Grammar adaptive learning method and learning device
CN110263331A (en) * 2019-05-24 2019-09-20 南京航空航天大学 A kind of English-Chinese semanteme of word similarity automatic testing method of Knowledge driving
CN110489327A (en) * 2019-07-10 2019-11-22 苏州浪潮智能科技有限公司 A kind of heterogeneous task execution method and system based on MongoDB
CN111046630A (en) * 2019-12-06 2020-04-21 中国科学院计算技术研究所 Syntax tree extraction method of JSON data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Tianyu; HE Jinxin; WANG Yang; FU Youping: "Efficient Storage Method for Geoscience Big Data Based on NoSQL Databases", Journal of Jilin University (Information Science Edition), No. 06


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
AD01: Patent right deemed abandoned (effective date of abandoning: 2024-04-19)