CN112434518B - Text report scoring method and system - Google Patents

Info

Publication number: CN112434518B
Application number: CN202011379005.1A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN112434518A
Prior art keywords: text, rule, word, text report, scoring
Inventors: 郑勤华, 陈丽, 赵宏, 徐鹏飞, 杜君磊
Current assignee: Beijing Normal University
Original assignee: Beijing Normal University
Application filed by Beijing Normal University
Priority to CN202011379005.1A
Publication of CN112434518A
Application granted
Publication of CN112434518B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/20: Natural language analysis
    • G06F40/205: Parsing
    • G06F40/216: Parsing using statistical methods
    • G06F40/211: Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G06F40/279: Recognition of textual entities
    • G06F40/289: Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295: Named entity recognition

Abstract

The invention discloses a text report scoring method and system. The text report is identified at different granularities and a marker is inserted at the beginning of each unit; each word and each added marker is then embedded-encoded according to the scoring rules applicable to that word, producing encoded data. A neural network model is trained with the encoded data as input and, as the target output, the score of each marked word for its evaluation point; the trained model is the text report scoring model. A report to be scored is identified and marked in the same way, fed into the scoring model, and the model outputs the score corresponding to each evaluation point; the sum of the scores of all evaluation points is taken as the score of the report. Based on a mechanism that combines the granularity levels of the text with the classification of evaluation points, the method encodes the scoring rules as input data for training the scoring model, represents most rule schemes effectively, and scores the different evaluation points separately before aggregating them during evaluation, which greatly improves the efficiency of text evaluation.

Description

Text report scoring method and system
Technical Field
The invention relates to the technical field of dynamic evaluation, and in particular to a text report scoring method and system.
Background
At present, there are two approaches to scoring text reports: one is rule-based, the other is based on a machine learning or deep learning model. A scoring system based on a machine learning model relies on a large amount of labeled data, the validity of the labeled data cannot be guaranteed, and labeling consumes a great deal of manpower. Existing rule-based scoring systems, in turn, lack a systematic scheme for long texts containing multiple modules, and therefore cannot score such reports accurately and effectively.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to overcome the defect in the prior art that long, multi-module texts cannot be trained effectively, and accordingly to provide a text report scoring method and system that convert the scoring rules into labeled training samples through a dedicated encoding scheme, greatly improving training accuracy.
In order to achieve the above purpose, the present invention provides the following technical solutions:
in a first aspect, an embodiment of the present invention provides a text report scoring method, including the steps of:
identifying the full text, modules, paragraphs and sentences of the text report, and inserting a marker at the corresponding beginning position;
performing embedded encoding of each word of the text report and of each added marker, based on the preset scoring rules applicable to that word, to obtain encoded data;
training a neural network model with the encoded data as input and, as the target output, the score of each word of the marked text with respect to a preset evaluation point, and taking the trained network as the text report scoring model;
identifying and marking the text report to be scored, inputting it into the text report scoring model, and outputting the score corresponding to each evaluation point, the sum of the scores of all evaluation points being taken as the score of the text report.
In one embodiment, the encoded data comprises: word vector, joint word vector, named entity code, part-of-speech code, rule type code, rule attribute code and granularity code, wherein:
the word vector is a pre-trained multidimensional word vector;
the joint word vector is the concatenation of the distance between the joint word and the current word with the pre-trained word vector of the joint word;
the named entity code is a one-hot code covering a plurality of named entity types;
the part-of-speech code is a one-hot code covering a plurality of parts of speech;
the rule type code has a first preset length, and each position corresponds to one optional preset rule type;
the rule attribute code has a second preset length, and each position corresponds to one preset rule attribute;
the granularity code has a length of 5, and each position corresponds to one of full text, module, paragraph, sentence and word.
In an embodiment, the preset rule types include:
basic keyword rule: the joint word vector is 0, and the other positions of the encoded data are encoded according to the corresponding content;
context rule: the joint word vector is not null, the distance is greater than 0, and the other positions of the encoded data are encoded according to the corresponding content;
combined phrase rule: the joint word vector is not null, the distance from the current word is equal to 0, and the other positions of the encoded data are encoded according to the corresponding content;
granularity frequency rule: the word vector, joint word vector, part-of-speech code and named entity code are all 0, while the rule type code, rule attribute code and granularity code are not 0 and are encoded according to the corresponding content;
custom rule: the joint word vector, rule type code and rule attribute code are 0, and the other fields are encoded according to the corresponding content.
In an embodiment, the preset rule attributes include:
a level, characterizing the priority of different rule types;
a score, characterizing the highest score obtainable under different rule types;
a first score, characterizing the score obtained when a rule of a given type is activated for the first time;
a rate, characterizing the score obtained for each further activation after a rule of a given type has been activated once.
In an embodiment, before identifying and marking the text report to be scored, the method further comprises:
performing data cleaning on the text report to be scored.
In an embodiment, each preset evaluation point corresponds to at least one preset rule type.
In a second aspect, an embodiment of the present invention provides a text report scoring system, comprising: a marking module, used for identifying the full text, paragraphs and sentences of the text report and inserting a marker at the corresponding beginning position;
a coding module, used for performing embedded encoding of each word of the text report and of each added marker, based on the preset scoring rules applicable to that word, to obtain encoded data;
a scoring model training module, used for training a neural network model with the encoded data as input and, as the target output, the score of each word of the marked text with respect to a preset evaluation point, and taking the trained network as the text report scoring model;
a scoring module, used for identifying and marking the text report to be scored, inputting it into the text report scoring model and outputting the score corresponding to each evaluation point, the sum of the scores of all evaluation points being taken as the score of the text report.
In a third aspect, embodiments of the present invention provide a computer-readable storage medium storing computer instructions for causing a computer to perform the text report scoring method of the first aspect of embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer device, comprising a memory and a processor in communication with each other, the memory storing computer instructions and the processor executing the computer instructions to perform the text report scoring method according to the first aspect of the embodiment of the invention.
The technical scheme of the invention has the following advantages:
the text report scoring method and the text report scoring system provided by the invention are used for identifying different granularity of the text report and marking at the beginning position; reporting each word and the added mark to the text, and performing embedded coding based on a scoring rule applicable to each word to obtain coded data; taking the coded data as input, taking the score of each word of the marked text corresponding to the evaluation point as a target value to output, and training the neural network model to obtain a text report scoring model; and (3) identifying and marking the report to be scored, inputting the report to be scored into a scoring model, outputting the score corresponding to the evaluation point, and taking the sum of the scores of all the evaluation points as the score of the text report. Based on a combination mechanism of the granularity level of the text and the classification of the index points, the rule is used as input data in a coding mode to train the scoring model to obtain the scoring model to score the text, most rule schemes are effectively represented, and different evaluation points are distributed and scored and collected during evaluation, so that the efficiency of text evaluation can be greatly improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a workflow diagram of one specific example of a text report scoring method provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of text report segmentation into different granularities provided in an embodiment of the invention;
FIG. 3 is a schematic diagram of the structural composition of encoded data provided in an embodiment of the present invention;
FIG. 4 is a schematic diagram, provided in an embodiment of the present invention, of the process of scoring the different evaluation points separately and aggregating the results during evaluation;
FIG. 5 is a block diagram of one specific example of a text report scoring system provided in an embodiment of the present invention;
FIG. 6 is a composition diagram of a specific example of a computer device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In addition, the technical features of the different embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
Example 1
The embodiment of the invention provides a text report scoring method that can be applied to scoring and evaluating long text reports containing multiple modules; as shown in fig. 1, the method comprises the following steps:
step S1: and identifying the text report in full text, module, paragraph and sentence, and marking at the corresponding beginning position.
In the embodiment of the present invention, the text may be cut into different granularities, as shown in fig. 2: full text, module, paragraph and sentence. When the text is read in, the full text, modules, paragraphs and sentences are identified and a marker of the form "#granularity#" is inserted at each beginning position: a "#full text#" marker at the start of the full text, a "#module#" marker at the start of each module, a "#paragraph#" marker at the start of each paragraph, and a "#sentence#" marker at the start of each sentence. Individual words are not marked. The identification and marking can be done manually or with a relatively mature recognition algorithm, which is not limited here.
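A minimal sketch of this marking step is given below; the split of the report into modules, the use of blank lines as paragraph boundaries, the punctuation-based sentence splitting and the function name are assumptions for illustration, while the marker strings follow the description above.

```python
import re

def mark_text_report(module_texts):
    """Insert granularity markers at the beginning of the full text, each module,
    each paragraph and each sentence. `module_texts` is assumed to be the report
    already split into its modules."""
    marked = ["#full text#"]
    for module in module_texts:
        marked.append("#module#")
        # Paragraph boundaries are assumed to be blank lines.
        for paragraph in re.split(r"\n\s*\n", module.strip()):
            marked.append("#paragraph#")
            # Sentences are assumed to end with Chinese or Western sentence-final punctuation.
            for sentence in re.split(r"(?<=[。！？.!?])\s*", paragraph):
                if sentence.strip():
                    marked.append("#sentence# " + sentence.strip())
    return "\n".join(marked)
```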
Step S2: performing embedded encoding of each word of the text report and of each added marker, based on the preset scoring rules applicable to that word, to obtain encoded data.
In an embodiment of the present invention, the encoded data includes word vectors, joint word vectors, named entity encoding, part-of-speech encoding, rule type encoding, rule attribute encoding and granularity encoding, as shown in fig. 3, wherein:
The word vector is a pre-trained multi-dimensional word vector, for example a 100-dimensional word vector, so its length is 100.
The joint word vector is the concatenation of the distance between the joint word and the current word with the pre-trained word vector of the joint word; given the word vector length above, its length is 101.
The named entity code is a one-hot code covering several named entity types; in this embodiment the named entities comprise five types (person name, place name, organization name, date and time), so a one-hot code of length 5 is constructed.
The part-of-speech code is a one-hot code covering several parts of speech; in the embodiment of the invention these are noun, verb, adjective, adverb, pronoun, preposition, conjunction, numeral, measure word, auxiliary word, interjection and onomatopoeia, twelve in total, so this part is a one-hot code of length 12.
In the embodiment of the invention, the rule type code has a first preset length, and each position corresponds to one optional preset rule type; the positions correspond respectively to the basic keyword rule, the context rule, the combined phrase rule, the granularity frequency rule and the custom rule. For each rule that applies to the current word, a 1 is filled in at that rule's position, otherwise a 0 is filled in; one word can belong to several rules at the same time. In one embodiment, the rule type code has a length of 5 and the rule types are as follows:
basic keyword rules: the joint word vector is 0, and other positions of the encoded data are encoded according to the corresponding content; basic keyword rules are, for example, searching "KW1", "KW2", "KW3", "KW4", if a keyword appears, a score a is obtained; every time the keywords appear, add b points.
Context rules: the joint word vector is not null, the distance is greater than 0, other positions of the encoded data are encoded according to the corresponding content, and the context rules are as follows: any one of the search keywords KW1, KW2 and KW3 is found, and the words of KW4, KW5 and KW6 in the following words are simultaneously appeared in a sentence, so that the x score is obtained.
Combining phrase rules: the joint word vector is not null, the distance from the current word is equal to 0, other positions of the coded data are coded according to corresponding contents, and the combined phrase rule is as follows: any one of the search keywords "KW1", "KW2", "KW3" and a score of a appears in a plurality of phrases which are arranged and combined with "KW4", "KW5", "KW6" in the following words, and a score of b appears.
Particle size frequency rule: the word vector, the joint word vector part-of-speech code and the named entity code are all 0, the rule type code, the rule attribute code and the granularity code are not 0, and according to the corresponding content code, the granularity frequency rule is as follows: the frequency of the next level of granularity under granularity reaches a certain frequency to be divided into a and b for every frequency increase.
Custom rules: the joint word vector, rule type code and rule attribute code are 0, other modules encode according to the corresponding contents, and the custom rules are rules based on regular expressions, for example, the rule is activated to divide a, and the rule is activated to divide b once more.
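As an illustration of how two of these rule types could be evaluated against a piece of text, the sketch below scores a basic keyword rule and a regular-expression custom rule; the function names, the plain substring and regex matching, and the optional cap (the "score" attribute described below) are assumptions, while the first-activation score and the per-repetition increment follow the examples above.

```python
import re

def keyword_rule_score(text, keywords, first_score, repeat_score, cap=None):
    """Basic keyword rule: the first occurrence of any keyword earns first_score,
    every further occurrence adds repeat_score, optionally capped at cap."""
    hits = sum(text.count(kw) for kw in keywords)
    if hits == 0:
        return 0.0
    score = first_score + (hits - 1) * repeat_score
    return min(score, cap) if cap is not None else score

def custom_rule_score(text, pattern, first_score, repeat_score, cap=None):
    """Custom rule: the same scoring scheme, but matches are counted
    with a regular expression instead of plain keywords."""
    hits = len(re.findall(pattern, text))
    if hits == 0:
        return 0.0
    score = first_score + (hits - 1) * repeat_score
    return min(score, cap) if cap is not None else score
```

For instance, `keyword_rule_score(text, ["KW1", "KW2", "KW3", "KW4"], first_score=2, repeat_score=1)` would mirror the basic keyword example above with a = 2 and b = 1.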
In the embodiment of the invention, the rule attribute code has a second preset length, and each position corresponds to one preset rule attribute. In a specific embodiment, the rule attribute code has a length of 4, and the rule attributes are:
The level, which characterizes the priority of different rule types: when several basic rules take effect together, an order is needed, and that order is determined by the level of each basic rule. Levels range from 1 to 100; the higher the level, the earlier the rule is checked.
The score, which characterizes the highest score under different rule types: it is the highest score of the basic rule, and the total score accumulated from activations of that rule must not exceed it. The sum of the highest scores of all rules must not exceed the total evaluation score.
The first score, which characterizes the score obtained when a rule is activated for the first time; for example, the first activation of a rule scores a.
The rate, which characterizes the score obtained for each further activation after a rule has been activated once; for example, after a rule has been activated once, every further activation scores b.
The granularity code has a length of 5; each position corresponds to one of full text, module, paragraph, sentence and word.
In practical application, the number of codes is determined according to the number of rule types applicable to each word.
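Using the example dimensions above (a 100-dimensional word vector, a 101-dimensional joint-word field, 5 named-entity positions, 12 part-of-speech positions, 5 rule-type positions, 4 rule-attribute positions and 5 granularity positions), one word's encoded row could be assembled as in the sketch below; the helper name, the NumPy representation and the default arguments are assumptions for illustration.

```python
import numpy as np

WORD_DIM = 100            # pre-trained word vector
JOINT_DIM = 1 + WORD_DIM  # distance to the joint word + the joint word's vector
NE_DIM = 5                # person, place, organization, date, time
POS_DIM = 12              # the twelve parts of speech listed above
RULE_TYPE_DIM = 5         # keyword, context, phrase, granularity frequency, custom
RULE_ATTR_DIM = 4         # level, score (cap), first score, rate
GRAN_DIM = 5              # full text, module, paragraph, sentence, word

def encode_word(word_vec, joint_vec=None, joint_distance=0.0,
                ne_index=None, pos_index=None,
                rule_type_flags=(0,) * RULE_TYPE_DIM,
                rule_attrs=(0,) * RULE_ATTR_DIM,
                gran_index=4):
    """Concatenate the seven fields into one encoding row for a word
    (or for a granularity marker such as '#sentence#')."""
    joint = np.zeros(JOINT_DIM)
    if joint_vec is not None:
        joint[0] = joint_distance
        joint[1:] = joint_vec
    ne = np.zeros(NE_DIM)
    if ne_index is not None:
        ne[ne_index] = 1.0
    pos = np.zeros(POS_DIM)
    if pos_index is not None:
        pos[pos_index] = 1.0
    gran = np.zeros(GRAN_DIM)
    gran[gran_index] = 1.0
    return np.concatenate([np.asarray(word_vec, dtype=float), joint, ne, pos,
                           np.asarray(rule_type_flags, dtype=float),
                           np.asarray(rule_attrs, dtype=float), gran])
```

Under these assumptions each encoded row has length 100 + 101 + 5 + 12 + 5 + 4 + 5 = 232.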
Step S3: training a neural network model with the encoded data as input and, as the target output, the score of each word of the marked text with respect to a preset evaluation point, and taking the trained network as the text report scoring model.
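The patent does not fix a network architecture. As one minimal sketch under the dimensions assumed above, a small PyTorch regression network could map each encoded row to the score of the corresponding word for a single evaluation point; the layer sizes, optimizer, loss and training loop are all assumptions.

```python
import torch
import torch.nn as nn

INPUT_DIM = 232  # 100 + 101 + 5 + 12 + 5 + 4 + 5, using the example dimensions above

class ScoringModel(nn.Module):
    """Per-word score regressor for one evaluation point (architecture assumed)."""
    def __init__(self, input_dim=INPUT_DIM, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_scoring_model(encoded_rows, word_scores, epochs=100, lr=1e-3):
    """encoded_rows: (N, INPUT_DIM) float tensor; word_scores: (N,) target scores."""
    model = ScoringModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(encoded_rows), word_scores)
        loss.backward()
        optimizer.step()
    return model
```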
Step S4: identifying and marking the text report to be scored, inputting it into the text report scoring model, and outputting the score corresponding to each evaluation point; the sum of the scores of all evaluation points is taken as the score of the text report.
In a specific embodiment, before the text report to be scored is identified and marked, the method further comprises performing data cleaning on it, such as removing stop words and repeated words. In practice, an evaluation report may have several evaluation indexes, each composed of several evaluation points, and different scoring models may be set for different evaluation points. As shown in fig. 4, the scores of the evaluation points under an evaluation index together give that index's score, and the sum of all index scores is the score of the text; if the evaluation report has a single evaluation index composed of several evaluation points, the sum of the evaluation-point scores is taken as the score of the text. When a text report is evaluated, the different evaluation points are scored separately, and the scores are then collected and summed as the score of the text report, which greatly improves the efficiency of text evaluation.
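A sketch of this scoring stage is given below; the whitespace tokenization, the stop-word handling, the per-evaluation-point model interface and the summation of per-word outputs into an evaluation-point score are assumptions consistent with the description above.

```python
def clean_report(text, stop_words):
    """Simple data cleaning before scoring: drop stop words and collapse
    immediately repeated words (one reading of 'removing stop words and
    repeated words' above). Whitespace-tokenized text is assumed."""
    cleaned = []
    for word in text.split():
        if word in stop_words:
            continue
        if cleaned and cleaned[-1] == word:
            continue
        cleaned.append(word)
    return " ".join(cleaned)

def score_report(encoded_rows_by_point, models):
    """`models` maps each evaluation point to its trained scoring model;
    the report score is the sum of all evaluation-point scores."""
    point_scores = {}
    for point, model in models.items():
        rows = encoded_rows_by_point[point]   # (N, INPUT_DIM) tensor for this point
        point_scores[point] = float(model(rows).sum())
    return point_scores, sum(point_scores.values())
```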
Example 2
An embodiment of the present invention provides a text report scoring system, as shown in fig. 5, including:
the marking module 1 is used for carrying out full text, module, paragraph and sentence recognition on the text report and marking at the corresponding beginning positions; this module performs the method described in step S1 of example 1, which is not described here.
The coding module 2 is used for carrying out embedded coding on each word and the added mark on the text report based on a preset scoring rule applicable to each word to obtain coded data; this module performs the method described in step S2 in example 1, which is not described here.
The scoring model training module 3 is used for taking the coded data as input, taking the score of each word of the marked text corresponding to a preset evaluation point as a target value to output, training the neural network model, and taking the trained neural model as a text report scoring model; this module performs the method described in step S3 in example 1, which is not described here.
And the scoring module 4 is used for identifying and marking the text report to be scored, inputting the text report to the text report scoring model, outputting the scores corresponding to the evaluation points, and taking the sum of the scores corresponding to all the evaluation points as the score of the text report. This module performs the method described in step S4 in example 1, which is not described here.
According to the text report scoring system provided by the embodiment of the invention, based on a mechanism that combines the granularity levels of the text with the classification of evaluation points, the scoring rules are encoded as input data for training the scoring model, and the trained model is used to score texts; most rule schemes are represented effectively, and scoring the different evaluation points separately and then aggregating them during evaluation greatly improves the efficiency of text evaluation.
Example 3
Embodiments of the present invention provide a computer device, as shown in fig. 6, which may include a processor 51 and a memory 52, where the processor 51 and the memory 52 may be connected by a bus or otherwise, fig. 6 being an example of a connection via a bus.
The memory 52, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the embodiments of the present invention. The processor 51 performs various functional applications and data processing by running the non-transitory software programs, instructions and modules stored in the memory 52, i.e., implements the text report scoring method of method embodiment 1 described above.
Memory 52 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created by the processor 51, etc. In addition, memory 52 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 52 may optionally include memory located remotely from processor 51, which may be connected to processor 51 via a network. Examples of such networks include, but are not limited to, the internet, intranets, mobile communication networks, and combinations thereof.
One or more modules are stored in memory 52 that, when executed by processor 51, perform the text report scoring method of embodiment 1.
Details of the above computer device can be understood by referring to the corresponding descriptions and effects in embodiment 1, and are not repeated here.
It will be appreciated by those skilled in the art that all or part of the method of the above embodiments may be implemented by a computer program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the flow of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD) or a solid state drive (SSD); the storage medium may also comprise a combination of the above kinds of memories.
It is apparent that the above examples are given by way of illustration only and are not limiting of the embodiments. Other variations or modifications will be apparent to those of ordinary skill in the art in light of the above description; it is neither necessary nor possible to list all embodiments exhaustively here, and obvious variations or modifications derived therefrom remain within the protection scope of the present invention.

Claims (9)

1. A method of scoring a text report, comprising the steps of:
identifying the full text, modules, paragraphs and sentences of the text report, and inserting a marker at the corresponding beginning position;
performing embedded encoding of each word of the text report and of each added marker, based on the preset scoring rules applicable to that word, to obtain encoded data, the encoded data comprising: word vector, joint word vector, named entity code, part-of-speech code, rule type code, rule attribute code and granularity code, wherein:
the word vector is a pre-trained multidimensional word vector;
the joint word vector is the concatenation of the distance between the joint word and the current word with the pre-trained word vector of the joint word;
the named entity code is a one-hot code covering a plurality of named entity types;
the part-of-speech code is a one-hot code covering a plurality of parts of speech;
the rule type code has a first preset length, and each position corresponds to one optional preset rule type;
the rule attribute code has a second preset length, and each position corresponds to one preset rule attribute;
the granularity code has a length of 5, and each position corresponds to one of full text, module, paragraph, sentence and word;
training a neural network model with the encoded data as input and, as the target output, the score of each word of the marked text with respect to a preset evaluation point, and taking the trained network as the text report scoring model;
identifying and marking the text report to be scored, inputting it into the text report scoring model, and outputting the score corresponding to each evaluation point, the sum of the scores of all evaluation points being taken as the score of the text report.
2. The text report scoring method of claim 1, wherein the predetermined rule types include:
basic keyword rule: the joint word vector is 0, and the other positions of the encoded data are encoded according to the corresponding content;
context rule: the joint word vector is not null, the distance is greater than 0, and the other positions of the encoded data are encoded according to the corresponding content;
combined phrase rule: the joint word vector is not null, the distance from the current word is equal to 0, and the other positions of the encoded data are encoded according to the corresponding content;
granularity frequency rule: the word vector, joint word vector, part-of-speech code and named entity code are all 0, while the rule type code, rule attribute code and granularity code are not 0 and are encoded according to the corresponding content;
custom rule: the joint word vector, rule type code and rule attribute code are 0, and the other fields are encoded according to the corresponding content.
3. The text report scoring method of claim 1, wherein the predetermined rule attributes comprise:
a level, characterizing the priority of different rule types;
a score, characterizing the highest score obtainable under different rule types;
a first score, characterizing the score obtained when a rule of a given type is activated for the first time;
a rate, characterizing the score obtained for each further activation after a rule of a given type has been activated once.
4. The text report scoring method of claim 1, further comprising, prior to identifying the text report to be scored:
performing data cleaning on the text report to be scored.
5. The text report scoring method of claim 1, wherein each preset evaluation point corresponds to at least one preset rule type.
6. The text report scoring method of claim 5 wherein the number of codes is determined based on the number of preset rule types applied to each word.
7. A text report scoring system, comprising:
the marking module is used for identifying the full text, paragraphs and sentences of the text report and inserting a marker at the corresponding beginning position;
the coding module is used for performing embedded encoding of each word of the text report and of each added marker, based on the preset scoring rules applicable to that word, to obtain encoded data, the encoded data comprising: word vector, joint word vector, named entity code, part-of-speech code, rule type code, rule attribute code and granularity code, wherein:
the word vector is a pre-trained multidimensional word vector;
the joint word vector is the concatenation of the distance between the joint word and the current word with the pre-trained word vector of the joint word;
the named entity code is a one-hot code covering a plurality of named entity types;
the part-of-speech code is a one-hot code covering a plurality of parts of speech;
the rule type code has a first preset length, and each position corresponds to one optional preset rule type;
the rule attribute code has a second preset length, and each position corresponds to one preset rule attribute;
the granularity code has a length of 5, and each position corresponds to one of full text, module, paragraph, sentence and word;
the scoring model training module is used for training a neural network model with the encoded data as input and, as the target output, the score of each word of the marked text with respect to a preset evaluation point, and taking the trained network as the text report scoring model;
the scoring module is used for identifying and marking the text report to be scored, inputting it into the text report scoring model and outputting the score corresponding to each evaluation point, the sum of the scores of all evaluation points being taken as the score of the text report.
8. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the text report scoring method of any one of claims 1-6.
9. A computer device, comprising: a memory and a processor in communication with each other, the memory storing computer instructions, the processor executing the computer instructions to perform the text report scoring method of any one of claims 1-6.
CN202011379005.1A (filed 2020-11-30, priority date 2020-11-30): Text report scoring method and system. Status: Active. Granted as CN112434518B.

Priority Applications (1)

CN202011379005.1A (priority date 2020-11-30, filing date 2020-11-30): Text report scoring method and system

Applications Claiming Priority (1)

CN202011379005.1A (priority date 2020-11-30, filing date 2020-11-30): Text report scoring method and system

Publications (2)

Publication number  Publication date
CN112434518A       2021-03-02
CN112434518B       2023-08-15

Family

ID=74698443

Family Applications (1)

CN202011379005.1A (priority date 2020-11-30, filing date 2020-11-30): Text report scoring method and system; granted as CN112434518B (Active)

Country Status (1)

CN: CN112434518B

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113609860B (en) * 2021-08-05 2023-09-19 湖南特能博世科技有限公司 Text segmentation method and device and computer equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10170107B1 (en) * 2016-12-29 2019-01-01 Amazon Technologies, Inc. Extendable label recognition of linguistic input
CN109446513A (en) * 2018-09-18 2019-03-08 中国电子科技集团公司第二十八研究所 The abstracting method of event in a kind of text based on natural language understanding
CN109657947A (en) * 2018-12-06 2019-04-19 西安交通大学 A kind of method for detecting abnormality towards enterprises ' industry classification
CN110298038A (en) * 2019-06-14 2019-10-01 北京奇艺世纪科技有限公司 A kind of text scoring method and device
CN110928764A (en) * 2019-10-10 2020-03-27 中国人民解放军陆军工程大学 Automated mobile application crowdsourcing test report evaluation method and computer storage medium
CN111324692A (en) * 2020-01-16 2020-06-23 深圳市芥菜种科技有限公司 Automatic subjective question scoring method and device based on artificial intelligence

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10877947B2 (en) * 2018-12-11 2020-12-29 SafeGraph, Inc. Deduplication of metadata for places

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and implementation of a rule engine suitable for salary calculation; 舒琴; China Master's Theses Full-text Database, Information Science and Technology Series, No. 10; pp. 1-62 *

Also Published As

Publication number Publication date
CN112434518A (en) 2021-03-02

Similar Documents

Publication Publication Date Title
CN108763510B (en) Intention recognition method, device, equipment and storage medium
CN110851596B (en) Text classification method, apparatus and computer readable storage medium
CN108416058B (en) Bi-LSTM input information enhancement-based relation extraction method
CN108304468B (en) Text classification method and text classification device
US11501082B2 (en) Sentence generation method, sentence generation apparatus, and smart device
CN112101041B (en) Entity relationship extraction method, device, equipment and medium based on semantic similarity
CN111222305A (en) Information structuring method and device
CN112395395B (en) Text keyword extraction method, device, equipment and storage medium
CN108491389B (en) Method and device for training click bait title corpus recognition model
CN115048944B (en) Open domain dialogue reply method and system based on theme enhancement
CN110502742B (en) Complex entity extraction method, device, medium and system
CN111401065A (en) Entity identification method, device, equipment and storage medium
CN116805001A (en) Intelligent question-answering system and method suitable for vertical field and application of intelligent question-answering system and method
CN108846138A (en) A kind of the problem of fusion answer information disaggregated model construction method, device and medium
CN112287100A (en) Text recognition method, spelling error correction method and voice recognition method
CN111611393A (en) Text classification method, device and equipment
CN113722483A (en) Topic classification method, device, equipment and storage medium
CN112434518B (en) Text report scoring method and system
CN112036179B (en) Electric power plan information extraction method based on text classification and semantic frame
CN113672731A (en) Emotion analysis method, device and equipment based on domain information and storage medium
CN112818693A (en) Automatic extraction method and system for electronic component model words
CN111881264A (en) Method and electronic equipment for searching long text in question-answering task in open field
Wachsmuth et al. Back to the roots of genres: Text classification by language function
CN115169370A (en) Corpus data enhancement method and device, computer equipment and medium
CN110955768B (en) Question-answering system answer generation method based on syntactic analysis

Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant