CN112434518A - Text report scoring method and system - Google Patents

Text report scoring method and system

Info

Publication number
CN112434518A
Authority
CN
China
Prior art keywords
word
text
rule
scoring
text report
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011379005.1A
Other languages
Chinese (zh)
Other versions
CN112434518B (en)
Inventor
郑勤华
陈丽
赵宏
徐鹏飞
杜君磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Normal University
Original Assignee
Beijing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Normal University filed Critical Beijing Normal University
Priority to CN202011379005.1A priority Critical patent/CN112434518B/en
Publication of CN112434518A publication Critical patent/CN112434518A/en
Application granted granted Critical
Publication of CN112434518B publication Critical patent/CN112434518B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G06F40/216 Parsing using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G06F40/211 Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295 Named entity recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a text report scoring method and system. The text report is segmented at different granularities and the start position of each granularity is marked; each word of the text report, together with the added marks, is embedded and coded based on the scoring rules applicable to that word to obtain coded data; the coded data are used as input, the evaluation-point scores corresponding to the words of the marked text are used as target outputs, and a neural network model is trained to obtain a text report scoring model; the report to be scored is identified and marked, input into the scoring model, the scores corresponding to the evaluation points are output, and the sum of the scores of all evaluation points is taken as the score of the report. Based on a mechanism that combines text granularity levels with evaluation-point classification, the scoring rules are coded as input data to train the scoring model, which then scores the text; most rule schemes are represented effectively, and distributing, scoring and collecting the different evaluation points during evaluation greatly improves the efficiency of text evaluation.

Description

Text report scoring method and system
Technical Field
The invention relates to the technical field of dynamic evaluation, in particular to a text report scoring method and a text report scoring system.
Background
At present, there are two ways to score a text report: one is rule-based, and the other is based on a machine learning or deep learning model. A scoring system based on a machine learning model must rely on a large amount of labeled data, the validity of which cannot be guaranteed and whose annotation consumes considerable manpower. Whichever approach a rule-based scoring system adopts, there is currently no single-system scheme for long texts containing multiple modules, so long texts cannot be scored accurately and effectively.
Disclosure of Invention
Therefore, the technical problem to be solved by the present invention is to overcome the defect in the prior art that long, multi-module texts cannot be trained on effectively, and to provide a text report scoring method and system that convert the scoring rules into labeled training samples through a special coding scheme, greatly improving training accuracy.
In order to achieve the purpose, the invention provides the following technical scheme:
in a first aspect, an embodiment of the present invention provides a text report scoring method, including the following steps:
identifying the full text, modules, paragraphs and sentences of the text report, and marking the corresponding start positions;
for each word of the text report and each added mark, performing embedded coding based on the preset scoring rules applicable to that word to obtain coded data;
taking the coded data as input and the score of each word of the marked text under the corresponding preset evaluation point as the target output, training a neural network model, and taking the trained neural network model as the text report scoring model;
and identifying and marking the text report to be scored, inputting it into the text report scoring model, outputting the scores corresponding to the evaluation points, and taking the sum of the scores corresponding to all the evaluation points as the score of the text report.
In one embodiment, the coded data comprise: a word vector, a joint word vector, a named entity code, a part-of-speech code, a rule type code, a rule attribute code and a granularity code, wherein:
the word vector is a pre-trained multi-dimensional word vector;
the joint word vector is the concatenation of the distance between the joint word and the current word with the pre-trained word vector of the joint word;
the named entity code is a one-hot code covering a plurality of named entity types;
the part-of-speech code is a one-hot code covering a plurality of parts of speech;
the rule type code has a first preset length, and each position corresponds to an optional preset rule type;
the rule attribute code has a second preset length, and each position corresponds to a preset rule attribute;
the granularity code has a length of 5, and its positions correspond to the full text, module, paragraph, sentence and word position codes respectively.
In one embodiment, the preset rule types include:
a basic keyword rule: the joint word vector is 0, and the other positions of the coded data are coded according to the corresponding contents;
a context rule: the joint word vector is not empty, the distance is greater than 0, and the other positions of the coded data are coded according to the corresponding contents;
a combination phrase rule: the joint word vector is not empty, the distance to the current word is equal to 0, and the other positions of the coded data are coded according to the corresponding contents;
a granularity frequency rule: the word vector, the joint word vector, the part-of-speech code and the named entity code are all 0, while the rule type code, the rule attribute code and the granularity code are not 0 and are coded according to the corresponding contents;
a custom rule: the joint word vector, the rule type code and the rule attribute code are 0, and the other positions are coded according to the corresponding contents.
In one embodiment, the preset rule attributes include:
a level, which represents the priority of different rule types;
a score, which represents the highest score attainable under different rule types;
a first score, which represents the score obtained when a rule of a given type is first activated;
a rate, which represents the score added for each further activation after a rule of a given type has been activated once.
In one embodiment, before identifying and marking the text report to be scored, the method further comprises the following step:
performing data cleaning on the text report to be scored.
In one embodiment, each predetermined evaluation point corresponds to at least one predetermined rule type.
In a second aspect, an embodiment of the present invention provides a text report scoring system, comprising: a marking module for identifying the full text, modules, paragraphs and sentences of the text report and marking the corresponding start positions;
a coding module for performing, for each word of the text report and each added mark, embedded coding based on the preset scoring rules applicable to that word to obtain coded data;
a scoring model training module for taking the coded data as input and the scores of the preset evaluation points corresponding to the words of the marked text as target outputs, training a neural network model, and taking the trained neural network model as the text report scoring model;
and a scoring module for identifying and marking the text report to be scored, inputting it into the text report scoring model, outputting the scores corresponding to the evaluation points, and taking the sum of the scores corresponding to all the evaluation points as the score of the text report.
In a third aspect, an embodiment of the present invention provides a computer-readable storage medium, where computer instructions are stored, and the computer instructions are configured to cause the computer to execute the text report scoring method according to the first aspect of the embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer device, including: the text report scoring system comprises a memory and a processor, wherein the memory and the processor are connected in communication with each other, the memory stores computer instructions, and the processor executes the computer instructions to execute the text report scoring method according to the first aspect of the embodiment of the invention.
The technical scheme of the invention has the following advantages:
according to the text report scoring method and system provided by the invention, different granularity identifications are carried out on the text report, and the text report is marked at the beginning position; carrying out embedded coding on each word and the added mark of the text report based on a scoring rule applicable to each word to obtain coded data; taking the coded data as input, outputting the evaluation point scores corresponding to all words of the marked text as target values, and training a neural network model to obtain a text report scoring model; and after the report to be scored is subjected to identification marking, inputting the report to be scored into a scoring model, outputting scores corresponding to the evaluation points, and taking the sum of the scores of all the evaluation points as the score of the text report. Based on a combination mechanism of text granularity grade and index point classification, rules are used as input data in a coding mode to train the scoring model to obtain a scoring model to score the text, most rule schemes are effectively represented, different evaluation points are distributed, scored and collected during evaluation, and efficiency during text evaluation can be greatly improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a workflow diagram of one particular example of a text report scoring method provided in an embodiment of the invention;
FIG. 2 is a schematic diagram of a text report provided in an embodiment of the present invention being cut into different granularities;
FIG. 3 is a schematic diagram of the structure of encoded data provided in the embodiment of the present invention;
FIG. 4 is a schematic diagram of the process of distributing, scoring and collecting different evaluation points during evaluation in an embodiment of the present invention;
FIG. 5 is a block diagram illustrating the modular components of one particular example of a text report scoring system provided in embodiments of the present invention;
FIG. 6 is a composition diagram of a specific example of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely below with reference to the accompanying drawings. It should be understood that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
The embodiment of the invention provides a text report scoring method which can be applied to scoring and evaluation scenarios for long text reports containing multiple modules. As shown in FIG. 1, the method comprises the following steps:
Step S1: identifying the full text, modules, paragraphs and sentences of the text report, and marking the corresponding start positions.
In the embodiment of the present invention, the text is segmented into different granularities, as shown in FIG. 2: the full text, modules, paragraphs, sentences and words are recognized as the text is read, and the start positions of the full text, modules, paragraphs and sentences are marked with a "#granularity#" tag. A "#full text#" mark is added where the full text begins; a "#module#" mark is added where a module starts; a "#paragraph#" mark is added at the beginning of each paragraph; a "#sentence#" mark is added where each sentence starts; words themselves are not marked. The identification and marking may be done manually or with a relatively mature recognition algorithm, which is not limited here; a minimal sketch of such marking is given below.
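As a minimal sketch (not part of the claimed method), the marking of step S1 could look like the following Python snippet, assuming blank lines separate modules, line breaks separate paragraphs, and Chinese end punctuation delimits sentences; these heuristics and the function name are illustrative only.

```python
import re

# Illustrative granularity marking for step S1: insert "#full text#",
# "#module#", "#paragraph#" and "#sentence#" marks at the start of each
# granularity level. Module/paragraph/sentence detection here is a guess,
# not something the embodiment prescribes.

def mark_granularity(report: str) -> str:
    pieces = ["#full text#"]
    for module in report.split("\n\n"):            # assumed module separator
        pieces.append("#module#")
        for paragraph in module.split("\n"):       # assumed paragraph separator
            paragraph = paragraph.strip()
            if not paragraph:
                continue
            pieces.append("#paragraph#")
            # naive split on Chinese sentence-ending punctuation
            for sentence in re.split(r"(?<=[。！？])", paragraph):
                if sentence:
                    pieces.append("#sentence#")
                    pieces.append(sentence)
    return " ".join(pieces)


print(mark_granularity("研究背景如下。数据来源于平台。\n本段描述方法。\n\n研究结论。"))
```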
Step S2: for each word of the text report and each added mark, performing embedded coding based on the preset scoring rules applicable to that word to obtain coded data.
In the embodiment of the present invention, the coded data include: a word vector, a joint word vector, a named entity code, a part-of-speech code, a rule type code, a rule attribute code and a granularity code, as shown in FIG. 3, where:
The word vector is a pre-trained multi-dimensional word vector, for example a 100-dimensional word vector, in which case this part of the code has length 100.
The joint word vector is the concatenation of the distance between the joint word and the current word with the pre-trained word vector of the joint word; based on the word vector length above, its length is 101.
The named entity code is a one-hot code covering several named entity types; the named entities in this embodiment comprise 5 types (person name, place name, organization name, date, and time), so a one-hot code of length 5 is constructed.
The part-of-speech code is a one-hot code covering several parts of speech. The parts of speech in the embodiment of the invention are: noun, verb, adjective, adverb, pronoun, preposition, conjunction, numeral, quantifier, auxiliary word, interjection and onomatopoeia, 12 in total, so this part is a one-hot code of length 12.
In the embodiment of the invention the rule type code has a first preset length, and each position corresponds to an optional preset rule type; the positions correspond respectively to the basic keyword rule, the context rule, the combination phrase rule, the granularity frequency rule and the custom rule. For the current word, a 1 is filled in at the position of every rule type that applies to it and a 0 otherwise; one word may belong to several rule types at the same time. In one embodiment the rule type code has length 5, and the rule types are as follows:
Basic keyword rule: the joint word vector is 0, and the other positions of the coded data are coded according to the corresponding contents. A basic keyword rule is, for example: search for "KW1", "KW2", "KW3", "KW4"; if a keyword occurs, score a points, and add b points for each additional keyword.
Context rule: the joint word vector is not empty, the distance is greater than 0, and the other positions of the coded data are coded according to the corresponding contents. A context rule is, for example: if any one of the keywords "KW1", "KW2", "KW3" appears in the same sentence as any of the following words "KW4", "KW5", "KW6", score x points.
Combination phrase rule: the joint word vector is not empty, the distance to the current word is equal to 0, and the other positions of the coded data are coded according to the corresponding contents. A combination phrase rule is, for example: among the phrases obtained by combining any one of the keywords "KW1", "KW2", "KW3" with any of the following words "KW4", "KW5", "KW6", score a points when one such phrase appears and b points for each further one.
Granularity frequency rule: the word vector, the joint word vector, the part-of-speech code and the named entity code are all 0, while the rule type code, the rule attribute code and the granularity code are not 0 and are coded according to the corresponding contents. A granularity frequency rule is, for example: when the frequency of the next-lower granularity level within this granularity reaches a certain number, score a points, and b points for each additional occurrence.
Custom rule: the joint word vector, the rule type code and the rule attribute code are 0, and the other positions are coded according to the corresponding contents. A custom rule is, for example, a rule based on a regular expression: score a points when it is activated and b points for each additional activation. A sketch of how the rule type code might be filled is given below.
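A minimal sketch of filling the 5-position rule type code follows; the rule data structure and helper name are hypothetical, since the embodiment only specifies that the position of each applicable rule type is set to 1 and the rest to 0, and that one word may belong to several rule types at once.

```python
# Hypothetical rule representation: each rule has a type and the keywords it
# covers. The code position of every rule type that covers the current word
# is set to 1, otherwise 0.

RULE_TYPES = ["keyword", "context", "phrase", "granularity_freq", "custom"]

def rule_type_code(word, rules):
    code = [0] * len(RULE_TYPES)
    for rule in rules:
        if word in rule.get("keywords", ()):
            code[RULE_TYPES.index(rule["type"])] = 1
    return code

rules = [
    {"type": "keyword", "keywords": {"KW1", "KW2", "KW3", "KW4"}},
    {"type": "context", "keywords": {"KW1", "KW4", "KW5", "KW6"}},
]
print(rule_type_code("KW1", rules))   # [1, 1, 0, 0, 0]
```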
In the embodiment of the present invention, the rule attribute code has a second preset length, with each position corresponding to a preset rule attribute; its length is 4, and the attributes are:
Level, which represents the priority of different rule types. When the basic rules are verified together, a certain order is required, and that order is determined by the level of each rule. Levels range from 1 to 100, and the higher the level, the earlier the rule is verified.
Score, which represents the highest score under different rule types, that is, the maximum score of the basic rule: the total of the points obtained after activation under this rule must not exceed this score, and the sum of the highest scores of all rules must not exceed the total evaluation score.
First score, which represents the score obtained after a rule of a given type is activated for the first time, for example a points for the first activation.
Rate, which represents the score added for each further activation after a rule of a given type has been activated once, for example b points for each additional activation. The sketch below shows how these attributes combine into the score contributed by one rule.
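The way the score (cap), first score and rate attributes combine can be sketched as follows; the function and argument names are illustrative, and only the arithmetic described above is assumed.

```python
def rule_score(activations, first_score, rate, max_score):
    """Score contributed by one rule given its attribute code.

    First activation gives first_score, every further activation adds rate,
    and the total is capped at max_score (the "score" attribute). The level
    attribute only determines the order in which rules are verified, so it
    does not enter the arithmetic.
    """
    if activations <= 0:
        return 0.0
    return min(first_score + rate * (activations - 1), max_score)


# e.g. 2 points on first activation, +1 per extra activation, capped at 5
print([rule_score(n, 2.0, 1.0, 5.0) for n in range(6)])   # [0.0, 2.0, 3.0, 4.0, 5.0, 5.0]
```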
The granularity code has a length of 5, and its positions correspond to the full text, module, paragraph, sentence and word position codes respectively.
In practical application, the number of times a word is coded is determined by the number of rule types applicable to that word; the sketch below assembles one coded row for a single word.
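A sketch of assembling the coded row for one word, using the example lengths above (100 + 101 + 5 + 12 + 5 + 4 + 5 = 232 values), is shown here; numpy and the helper name are assumptions, not part of the embodiment, and the sample inputs are placeholders.

```python
import numpy as np

# Concatenates the seven fields of FIG. 3 into one 232-dimensional row:
# word vector (100), joint word vector (distance + 100-dim embedding = 101),
# named entity one-hot (5), part-of-speech one-hot (12), rule type code (5),
# rule attribute code (4) and granularity code (5).

def encode_word(word_vec, joint_word_vec, joint_distance, ner_onehot,
                pos_onehot, rule_type_code, rule_attr_code, granularity_code):
    joint = np.concatenate(([joint_distance], joint_word_vec))   # length 101
    row = np.concatenate([word_vec, joint, ner_onehot, pos_onehot,
                          rule_type_code, rule_attr_code, granularity_code])
    assert row.shape == (232,)
    return row


row = encode_word(
    word_vec=np.random.rand(100),
    joint_word_vec=np.zeros(100), joint_distance=0.0,   # no joint word for a basic keyword rule
    ner_onehot=np.eye(5)[2],                            # e.g. organization name
    pos_onehot=np.eye(12)[0],                           # e.g. noun
    rule_type_code=np.array([1, 0, 0, 0, 0]),           # basic keyword rule
    rule_attr_code=np.array([50.0, 5.0, 2.0, 1.0]),     # level, score cap, first score, rate
    granularity_code=np.array([0, 0, 0, 0, 1]),         # word-level position
)
print(row.shape)   # (232,)
```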
Step S3: taking the coded data as input and the score of each word of the marked text under the corresponding preset evaluation point as the target output, training a neural network model, and taking the trained neural network model as the text report scoring model. A minimal training sketch follows.
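Since the embodiment does not prescribe a particular network architecture, the following PyTorch snippet is only a minimal illustration of step S3: a small regression network is assumed, and random placeholder tensors stand in for the real coded data and per-word target scores.

```python
import torch
from torch import nn

# Placeholder training loop for step S3. `encoded` represents the 232-dim
# rows produced in step S2 and `targets` the per-word evaluation point
# scores; both are random stand-ins here.

model = nn.Sequential(nn.Linear(232, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

encoded = torch.rand(1000, 232)     # placeholder coded data
targets = torch.rand(1000, 1)       # placeholder per-word evaluation point scores

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(encoded), targets)
    loss.backward()
    optimizer.step()
```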
Step S4: identifying and marking the text report to be scored, inputting it into the text report scoring model, outputting the scores corresponding to the evaluation points, and taking the sum of the scores corresponding to all the evaluation points as the score of the text report.
In a specific embodiment, before the text report to be scored is identified and marked, the method further comprises: performing data cleaning on the text report to be scored, such as removing stop words and repeated words. In practice, an evaluation report may have several evaluation indexes, each composed of several evaluation points, and different scoring models may be set for the different evaluation points. As shown in FIG. 4, the score of each evaluation point is assigned to the corresponding index score, and the sum of all index scores is the score of the text; if the evaluation report has a single evaluation index composed of several evaluation points, the sum of the evaluation point scores is taken as the score of the text. During evaluation the different evaluation points are distributed and scored separately, and the collected scores are added up as the score of the text report, which greatly improves the efficiency of text evaluation. A sketch of this aggregation is given below.
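The aggregation of evaluation point scores into index scores and the final report score can be sketched as follows; the nested dictionary layout is a hypothetical representation of the per-point model outputs.

```python
# Roll up per-point scores into index scores and a total report score.
# point_scores: {evaluation index -> {evaluation point -> predicted score}}

def aggregate(point_scores):
    index_scores = {index: sum(points.values()) for index, points in point_scores.items()}
    return index_scores, sum(index_scores.values())


index_scores, report_score = aggregate({
    "index_1": {"point_a": 3.0, "point_b": 2.5},
    "index_2": {"point_c": 4.0},
})
print(index_scores, report_score)   # {'index_1': 5.5, 'index_2': 4.0} 9.5
```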
Example 2
An embodiment of the present invention provides a text report scoring system, as shown in fig. 5, including:
the marking module 1 is used for identifying full texts, modules, paragraphs and sentences of the text reports and marking the corresponding initial positions; this module executes the method described in step S1 in embodiment 1, which will not be described again here.
The coding module 2 is used for marking each word and the added mark on the text report, and carrying out embedded coding based on a preset scoring rule applicable to each word to obtain coded data; this module executes the method described in step S2 in embodiment 1, which will not be described again here.
The scoring model training module 3 is used for inputting the coded data, outputting the scores of the preset evaluation points corresponding to the words of the marked text as target values, training the neural network model, and taking the trained neural model as a text report scoring model; this module executes the method described in step S3 in embodiment 1, which will not be described again here.
And the scoring module 4 is used for identifying and marking the text report to be scored, inputting the text report to be scored into the text report scoring model, outputting scores corresponding to the evaluation points, and taking the sum of the scores corresponding to all the evaluation points as the score of the text report. This module executes the method described in step S4 in embodiment 1, which will not be described again here.
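For illustration only, the four modules could be wired together as in the following sketch; the class and method names are hypothetical, and the embodiment only specifies which step of embodiment 1 each module performs.

```python
# Hypothetical composition of the scoring system in FIG. 5.

class TextReportScoringSystem:
    def __init__(self, marker, encoder, trainer):
        self.marker = marker                 # marking module 1 (step S1)
        self.encoder = encoder               # coding module 2 (step S2)
        self.model = trainer.train()         # scoring model training module 3 (step S3)

    def score(self, report):
        # scoring module 4 (step S4)
        marked = self.marker.mark(report)
        encoded = self.encoder.encode(marked)
        point_scores = self.model.predict(encoded)   # one score per evaluation point
        return sum(point_scores)
```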
The text report scoring system provided by the embodiment of the invention is based on a mechanism that combines text granularity levels with evaluation-point classification; the rules are coded as input data to train the scoring model, which then scores the text. Most rule schemes are represented effectively, and distributing, scoring and collecting the different evaluation points during evaluation greatly improves the efficiency of text evaluation.
Example 3
An embodiment of the present invention provides a computer device, as shown in fig. 6, the device may include a processor 51 and a memory 52, where the processor 51 and the memory 52 may be connected by a bus or in another manner, and fig. 6 takes the connection by the bus as an example.
The memory 52, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as the corresponding program instructions/modules in the embodiments of the present invention. The processor 51 executes various functional applications and data processing of the processor by running non-transitory software programs, instructions and modules stored in the memory 52, that is, implements the text report scoring method in the above-described method embodiment 1.
The memory 52 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 51, and the like. Further, the memory 52 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 52 may optionally include memory located remotely from the processor 51, and these remote memories may be connected to the processor 51 via a network. Examples of such networks include, but are not limited to, the internet, intranets, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 52 and, when executed by the processor 51, perform the text report scoring method of embodiment 1.
The details of the computer device described above can be understood by referring to the corresponding related description and effects in embodiment 1, and will not be described here.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed, can include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory, a Hard Disk Drive (HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of the above kinds of memory.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications in different forms can be made by those skilled in the art on the basis of the above description; it is neither necessary nor possible to list all embodiments exhaustively here, and obvious variations or modifications derived therefrom remain within the protection scope of the invention.

Claims (10)

1. A method for scoring a text report, comprising the steps of:
identifying the full text, modules, paragraphs and sentences of the text report, and marking the corresponding start positions;
for each word of the text report and each added mark, performing embedded coding based on the preset scoring rules applicable to that word to obtain coded data;
taking the coded data as input and the score of each word of the marked text under the corresponding preset evaluation point as the target output, training a neural network model, and taking the trained neural network model as the text report scoring model;
and identifying and marking the text report to be scored, inputting it into the text report scoring model, outputting the scores corresponding to the evaluation points, and taking the sum of the scores corresponding to all the evaluation points as the score of the text report.
2. The method of claim 1, wherein the coded data comprise: a word vector, a joint word vector, a named entity code, a part-of-speech code, a rule type code, a rule attribute code and a granularity code, wherein:
the word vector is a pre-trained multi-dimensional word vector;
the joint word vector is the concatenation of the distance between the joint word and the current word with the pre-trained word vector of the joint word;
the named entity code is a one-hot code covering a plurality of named entity types;
the part-of-speech code is a one-hot code covering a plurality of parts of speech;
the rule type code has a first preset length, and each position corresponds to an optional preset rule type;
the rule attribute code has a second preset length, and each position corresponds to a preset rule attribute;
the granularity code has a length of 5, and its positions correspond to the full text, module, paragraph, sentence and word position codes respectively.
3. The text report scoring method according to claim 2, wherein the preset rule types include:
a basic keyword rule: the joint word vector is 0, and the other positions of the coded data are coded according to the corresponding contents;
a context rule: the joint word vector is not empty, the distance is greater than 0, and the other positions of the coded data are coded according to the corresponding contents;
a combination phrase rule: the joint word vector is not empty, the distance to the current word is equal to 0, and the other positions of the coded data are coded according to the corresponding contents;
a granularity frequency rule: the word vector, the joint word vector, the part-of-speech code and the named entity code are all 0, while the rule type code, the rule attribute code and the granularity code are not 0 and are coded according to the corresponding contents;
a custom rule: the joint word vector, the rule type code and the rule attribute code are 0, and the other positions are coded according to the corresponding contents.
4. The text report scoring method according to claim 2, wherein the preset rule attributes comprise:
a level, which represents the priority of different rule types;
a score, which represents the highest score attainable under different rule types;
a first score, which represents the score obtained when a rule of a given type is first activated;
a rate, which represents the score added for each further activation after a rule of a given type has been activated once.
5. The method of claim 1, wherein before identifying and marking the text report to be scored, the method further comprises:
performing data cleaning on the text report to be scored.
6. The method of claim 2, wherein at least one predetermined rule type is associated with each predetermined evaluation point.
7. The method of claim 6, wherein the number of times each word is coded is determined according to the number of preset rule types applicable to that word.
8. A text report scoring system, comprising:
a marking module for identifying the full text, modules, paragraphs and sentences of the text report and marking the corresponding start positions;
a coding module for performing, for each word of the text report and each added mark, embedded coding based on the preset scoring rules applicable to that word to obtain coded data;
a scoring model training module for taking the coded data as input and the scores of the preset evaluation points corresponding to the words of the marked text as target outputs, training a neural network model, and taking the trained neural network model as the text report scoring model;
and a scoring module for identifying and marking the text report to be scored, inputting it into the text report scoring model, outputting the scores corresponding to the evaluation points, and taking the sum of the scores corresponding to all the evaluation points as the score of the text report.
9. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of scoring a textual report according to any of claims 1-7.
10. A computer device, comprising: a memory and a processor communicatively coupled to each other, the memory storing computer instructions, the processor executing the computer instructions to perform the text report scoring method of any one of claims 1-7.
CN202011379005.1A 2020-11-30 2020-11-30 Text report scoring method and system Active CN112434518B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011379005.1A CN112434518B (en) 2020-11-30 2020-11-30 Text report scoring method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011379005.1A CN112434518B (en) 2020-11-30 2020-11-30 Text report scoring method and system

Publications (2)

Publication Number Publication Date
CN112434518A true CN112434518A (en) 2021-03-02
CN112434518B CN112434518B (en) 2023-08-15

Family

ID=74698443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011379005.1A Active CN112434518B (en) 2020-11-30 2020-11-30 Text report scoring method and system

Country Status (1)

Country Link
CN (1) CN112434518B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10170107B1 (en) * 2016-12-29 2019-01-01 Amazon Technologies, Inc. Extendable label recognition of linguistic input
CN109446513A (en) * 2018-09-18 2019-03-08 中国电子科技集团公司第二十八研究所 The abstracting method of event in a kind of text based on natural language understanding
CN109657947A (en) * 2018-12-06 2019-04-19 西安交通大学 A kind of method for detecting abnormality towards enterprises ' industry classification
US20200183900A1 (en) * 2018-12-11 2020-06-11 SafeGraph, Inc. Deduplication of Metadata for Places
CN110298038A (en) * 2019-06-14 2019-10-01 北京奇艺世纪科技有限公司 A kind of text scoring method and device
CN110928764A (en) * 2019-10-10 2020-03-27 中国人民解放军陆军工程大学 Automated mobile application crowdsourcing test report evaluation method and computer storage medium
CN111324692A (en) * 2020-01-16 2020-06-23 深圳市芥菜种科技有限公司 Automatic subjective question scoring method and device based on artificial intelligence

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
舒琴: "Research and Implementation of a Rule Engine Applicable to Salary Calculation", China Master's Theses Full-text Database, Information Science and Technology, no. 10, pages 1-62 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113609860A (en) * 2021-08-05 2021-11-05 湖南特能博世科技有限公司 Text segmentation method and device and computer equipment
CN113609860B (en) * 2021-08-05 2023-09-19 湖南特能博世科技有限公司 Text segmentation method and device and computer equipment

Also Published As

Publication number Publication date
CN112434518B (en) 2023-08-15

Similar Documents

Publication Publication Date Title
CN108416058B (en) Bi-LSTM input information enhancement-based relation extraction method
CN108763510B (en) Intention recognition method, device, equipment and storage medium
CN115048944B (en) Open domain dialogue reply method and system based on theme enhancement
CN112711950A (en) Address information extraction method, device, equipment and storage medium
CN114118065A (en) Chinese text error correction method and device in electric power field, storage medium and computing equipment
CN114822812A (en) Character dialogue simulation method, device, equipment and storage medium
CN111401065A (en) Entity identification method, device, equipment and storage medium
CN112287100A (en) Text recognition method, spelling error correction method and voice recognition method
CN109299470B (en) Method and system for extracting trigger words in text bulletin
CN113705196A (en) Chinese open information extraction method and device based on graph neural network
CN116151132A (en) Intelligent code completion method, system and storage medium for programming learning scene
CN111401012A (en) Text error correction method, electronic device and computer readable storage medium
CN114239589A (en) Robustness evaluation method and device of semantic understanding model and computer equipment
CN112434518B (en) Text report scoring method and system
Wang et al. A statistical constraint dependency grammar (CDG) parser
CN112818693A (en) Automatic extraction method and system for electronic component model words
CN115757775B (en) Text inclusion-based trigger word-free text event detection method and system
CN110598205A (en) Splicing method and device of truncated text and computer storage medium
CN102945231B (en) Construction method and system of incremental-translation-oriented structured language model
CN113553853B (en) Named entity recognition method and device, computer equipment and storage medium
CN115129843A (en) Dialog text abstract extraction method and device
CN115879669A (en) Comment score prediction method and device, electronic equipment and storage medium
CN115169370A (en) Corpus data enhancement method and device, computer equipment and medium
CN110955768B (en) Question-answering system answer generation method based on syntactic analysis
CN110852112B (en) Word vector embedding method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant