CN111275091A - Intelligent text conclusion recommendation method and device and computer readable storage medium - Google Patents

Intelligent text conclusion recommendation method and device and computer readable storage medium

Info

Publication number
CN111275091A
CN111275091A (application CN202010051191.XA)
Authority
CN
China
Prior art keywords
text
conclusion
target
historical
speech
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010051191.XA
Other languages
Chinese (zh)
Other versions
CN111275091B (en)
Inventor
李海翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202010051191.XA priority Critical patent/CN111275091B/en
Publication of CN111275091A publication Critical patent/CN111275091A/en
Priority to PCT/CN2020/098979 priority patent/WO2021143056A1/en
Application granted granted Critical
Publication of CN111275091B publication Critical patent/CN111275091B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/22 — Matching criteria, e.g. proximity measures
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 — Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 — Complex mathematical operations
    • G06F 17/18 — Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 — Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08 — Insurance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Finance (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Algebra (AREA)
  • Probability & Statistics with Applications (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Operations Research (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to artificial intelligence technology and discloses an intelligent text conclusion recommendation method, which comprises the following steps: acquiring a target text and a historical text set, and performing a word segmentation operation on the target text to obtain the text attributes of the target text; carrying out similarity calculation and correlation coefficient calculation on the historical text set in sequence to obtain the text features of the historical text set; screening out a preset number of historical texts from the historical text set to serve as a comparison text set, carrying out numerical calculation on the text features of each comparison text in the comparison text set and the corresponding text attributes of the target text to obtain difference values, and selecting the text conclusion of the comparison text whose text features differ least from the text attributes of the target text as the text conclusion of the target text, thereby completing the text conclusion recommendation of the target text. The invention also provides an intelligent text conclusion recommendation device and a computer readable storage medium. The invention realizes intelligent recommendation of text conclusions.

Description

Intelligent text conclusion recommendation method and device and computer readable storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an intelligent text conclusion recommendation method and device and a computer readable storage medium.
Background
At present, in traditional manual underwriting work, underwriting personnel are required to accurately identify a client's risks, then collect the client's relevant information in light of those risks to determine the degree of risk, and finally make an appropriate underwriting decision on the policy according to the underwriting rules, judging whether the client's policy can be accepted for underwriting. This work must be accurate and requires not only rich insurance, medical and financial knowledge but also extensive case evaluation experience. Therefore, for a newcomer who has just entered the business, it is unavoidable that a skilled practitioner must spend a great deal of time on guidance, which consumes considerable labor cost, and an intelligent guidance mode is urgently needed to free up this guidance labor.
Disclosure of Invention
The invention provides an intelligent text conclusion recommendation method and device and a computer readable storage medium, with the main aim of intelligently judging a text conclusion from historical data and reducing manual work.
In order to achieve the above object, the invention provides an intelligent text conclusion recommendation method, which comprises the following steps:
acquiring a target text, performing word segmentation operation on the target text, and acquiring text attributes of the target text based on the word segmentation operation;
acquiring a historical text set from a preset historical text library, calculating the similarity between the target text after the word segmentation operation and the historical text set, and screening out a preset number of historical texts as a similar text set according to the similarity;
acquiring the text attribute of each similar text in the similar text set, calculating the correlation coefficient between the text attribute of each similar text and the known text conclusion of the similar text, and selecting a preset number of text attributes as text features according to the correlation coefficient;
training a linear regression model by using the similar text set, verifying an output value of the linear regression model by using a known text conclusion of the similar text set to obtain a deviation value between the known text conclusion and the output value, and screening out a preset number of similar texts as a comparison text set according to the deviation value;
and performing numerical calculation on the text features of each contrast text in the contrast text set and the corresponding text attributes of the target text to obtain difference values, and selecting the text conclusion of the contrast text whose text features differ least from the text attributes of the target text as the text conclusion of the target text, thereby completing the text conclusion recommendation of the target text.
Optionally, the text attributes include: text length, part-of-speech proportion, word tendency state, person type, degree-word frequency, sentence-pattern proportion and overall emotion category.
Optionally, the calculating the similarity between the target text after the word segmentation operation and the historical text set includes:
performing word screening in the target text according to the part of speech, and generating a target part of speech statistical list according to the screened words;
performing word screening in the historical texts in the historical text set according to the parts of speech to generate a historical part of speech statistical list;
and calculating the similarity of the target part-of-speech statistical list and the historical part-of-speech statistical list by using a similarity algorithm.
Optionally, the similarity algorithm comprises:
$$\cos(u,w)_j = \frac{\sum_{i=1}^{n} a_i b_i}{\sqrt{\sum_{i=1}^{n} a_i^2}\,\sqrt{\sum_{i=1}^{n} b_i^2}}$$

wherein u represents the target text, w represents one of the historical texts, j indexes the part-of-speech category, n represents the number of words of a given part of speech, and a_i and b_i represent the word frequencies of the words of that part of speech in u and w respectively.
Optionally, the method for calculating the correlation coefficient between the text attribute in each similar text and the known text conclusion of the similar text includes:
$$\mathrm{Jaccard}(O_A, O_B) = \frac{|O_A \cap O_B|}{|O_A \cup O_B|}$$

$$|O_A \cup O_B| = |O_A| + |O_B| - |O_A \cap O_B|$$

wherein O_A and O_B represent the text attribute and the text conclusion respectively, |O_A| and |O_B| represent the number of words in the text attribute and in the text conclusion respectively, Jaccard(O_A, O_B) represents the similarity coefficient of the text attribute and the text conclusion, |O_A ∩ O_B| represents the number of words that the text attribute O_A and the text conclusion O_B have in common, and |O_A ∪ O_B| represents the total number of words after the identical words in O_A and O_B are merged.
In addition, to achieve the above object, the present invention further provides a text conclusion intelligent recommendation apparatus, which includes a memory and a processor, wherein the memory stores a text conclusion intelligent recommendation program operable on the processor, and the text conclusion intelligent recommendation program, when executed by the processor, implements the following steps:
acquiring a target text, performing word segmentation operation on the target text, and acquiring text attributes of the target text based on the word segmentation operation;
acquiring a historical text set from a preset historical text library, calculating the similarity between the target text after the word segmentation operation and the historical text set, and screening out a preset number of historical texts as a similar text set according to the similarity;
acquiring the text attribute of each similar text in the similar text set, calculating the correlation coefficient between the text attribute of each similar text and the known text conclusion of the similar text, and selecting a preset number of text attributes as text features according to the correlation coefficient;
training a linear regression model by using the similar text set, verifying an output value of the linear regression model by using a known text conclusion of the similar text set to obtain a deviation value between the known text conclusion and the output value, and screening out a preset number of similar texts as a comparison text set according to the deviation value;
and performing numerical calculation on the text features of each contrast text in the contrast text set and the corresponding text attributes of the target text to obtain difference values, and selecting the text conclusion of the contrast text whose text features differ least from the text attributes of the target text as the text conclusion of the target text, thereby completing the text conclusion recommendation of the target text.
Optionally, the text attributes include: text length, part-of-speech proportion, word tendency state, person type, degree-word frequency, sentence-pattern proportion and overall emotion category.
Optionally, the calculating the similarity between the target text after the word segmentation operation and the historical text set includes:
performing word screening in the target text according to the part of speech, and generating a target part of speech statistical list according to the screened words;
performing word screening in the historical texts in the historical text set according to the parts of speech to generate a historical part of speech statistical list;
and calculating the similarity of the target part-of-speech statistical list and the historical part-of-speech statistical list by using a similarity algorithm.
Optionally, the similarity algorithm comprises:
$$\cos(u,w)_j = \frac{\sum_{i=1}^{n} a_i b_i}{\sqrt{\sum_{i=1}^{n} a_i^2}\,\sqrt{\sum_{i=1}^{n} b_i^2}}$$

wherein u represents the target text, w represents one of the historical texts, j indexes the part-of-speech category, n represents the number of words of a given part of speech, and a_i and b_i represent the word frequencies of the words of that part of speech in u and w respectively.
In addition, to achieve the above object, the present invention also provides a computer readable storage medium, which stores a text conclusion intelligent recommendation program, where the text conclusion intelligent recommendation program is executable by one or more processors to implement the steps of the text conclusion intelligent recommendation method as described above.
According to the intelligent text conclusion recommendation method and device and the computer readable storage medium provided by the invention, when a user needs to make a conclusion judgment on a target text, a historical text set is obtained, a similar text set of the target text is screened out of the historical text set by a similarity judgment method, and a suitable conclusion for the target text is found by training a linear regression model on the text attributes and known text conclusions of the similar text set, so that manual judgment is no longer required and manual work is reduced.
Drawings
Fig. 1 is a schematic flowchart of a text conclusion intelligent recommendation method according to an embodiment of the present invention;
fig. 2 is a schematic internal structural diagram of an intelligent text conclusion recommending apparatus according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of a text conclusion intelligent recommendation program in the text conclusion intelligent recommendation device according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides an intelligent text conclusion recommending method. Referring to fig. 1, a flowchart of a text conclusion intelligent recommendation method according to an embodiment of the present invention is shown. The method may be performed by an apparatus, which may be implemented by software and/or hardware.
In this embodiment, the intelligent text conclusion recommendation method includes:
s1, obtaining a target text, performing word segmentation operation on the target text, and obtaining text attributes of the target text based on the word segmentation operation.
In a preferred embodiment of the present invention, the target text includes a case report to be diagnosed, a subjective answer sheet to be scored, an insurance policy whose insured amount is to be assessed, and other texts for which an evaluation or conclusion needs to be made on the text content.
In the preferred embodiment of the present invention, word segmentation is performed on the target text according to part-of-speech classification, and the target text is decomposed into a set of individual words. The part-of-speech classifications include, but are not limited to: noun (n), verb (v), adjective (a), adverb (d), preposition (p), conjunction (c), pronoun (r), quantifier (q) and punctuation (w). For example, the word segmentation operation is performed on the following sentence:
The constitution is the fundamental law that regulates the form of implementation and mode of operation of state power and adjusts the relationship between state power and citizens' rights, and it generally specifies the state system and the form of organization of political power.
The result is the sentence broken into word/part-of-speech pairs, for example: constitution/n, is/v, fundamental law/n, regulate/v, state/n, power/n, implementation/v, form/n, and/c, operation/v, mode/n, adjust/v, state/n, power/n, and/c, citizen/n, rights/n, relationship/n, ./w, usually/d, specify/v, state/n, system/n, and/c, political/n, power/n, organization/n, form/n, ./w
The text attribute refers to a symbolic feature capable of describing the property of the target text. In the preferred embodiment of the invention, the text attribute of the target text is obtained by traversing the target text after the word segmentation operation.
Preferably, the text attributes of the present invention include, but are not limited to: text length, part-of-speech proportion, word tendency state, person type, degree-word frequency, sentence-pattern proportion and overall emotion category.
The text length refers to the number of words in the target text after the word segmentation operation, excluding punctuation (w). The part-of-speech proportion refers to the proportion of each of the 9 part-of-speech classifications in the total number of words; for example, if a target text contains 20 nouns after word segmentation and 352 words in total, its noun proportion is 20/352 ≈ 5.7%. The word tendency state refers to analyzing the emotionally colored words in the target text after the word segmentation operation; in a preferred embodiment of the present invention the emotional colors are of 3 types, namely positive, negative and neutral, and the ratio of the numbers of the 3 types of words is the word tendency state. For example, if the emotionally colored words after segmentation of a certain target text number 18 positive, 6 negative and 45 neutral, the word tendency state is 18:6:45, i.e. 6:2:15. The person type refers to counting the pronouns in the target text after the word segmentation operation and taking the person with the most pronouns as the person type of the target text; for example, if the count finds 16 first-person pronouns, 7 second-person pronouns and 10 third-person pronouns, the person type of the segmented target text is the first person. The degree-word frequency refers to the number of words expressing intensity, such as "very", "extremely" and "most". The sentence-pattern proportion refers to the ratio of the numbers of the sentence-pattern types, which are usually declarative, exclamatory, interrogative and imperative sentences. The overall emotion category is determined by considering the sentences in the text: a sentence expressing a positive emotion, for example "the lungs show no abnormal shadow on the case diagnosis report and their function is good", is counted as a positive-emotion sentence; a sentence expressing a negative emotion, for example the accident description in an insurance claim "the engine exploded because of a violent lateral impact to the vehicle", is counted as a negative-emotion sentence; the remaining declarative sentences are mostly neutral-emotion sentences. The emotion with the largest number of sentences is taken as the overall emotion category of the target text, with positive emotion recorded as 1, negative emotion as -1 and neutral emotion as 0.
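A minimal sketch of how these attributes could be computed from part-of-speech-tagged tokens is given below; the (word, pos) input format, the miniature sentiment and degree-word lists, and all function names are assumptions for illustration rather than the patented implementation.

```python
from collections import Counter

# Illustrative sketch only: the (word, pos) token format and the tiny
# lexicons below are assumptions, not the patented implementation.
POSITIVE_WORDS = {"good", "healthy"}          # assumed miniature lexicons
NEGATIVE_WORDS = {"explodes", "abnormal"}
DEGREE_WORDS = {"very", "extremely", "most"}

def text_attributes(tagged_tokens):
    """tagged_tokens: list of (word, pos) pairs, pos in {n,v,a,d,p,c,r,q,w}."""
    words = [(w, p) for w, p in tagged_tokens if p != "w"]   # drop punctuation
    if not words:
        raise ValueError("no non-punctuation tokens in the text")
    total = len(words)

    pos_counts = Counter(p for _, p in words)
    pos_ratio = {p: pos_counts[p] / total for p in "nvadpcrq"}   # POS proportion

    pos_n = sum(w in POSITIVE_WORDS for w, _ in words)
    neg_n = sum(w in NEGATIVE_WORDS for w, _ in words)
    tendency = (pos_n, neg_n, total - pos_n - neg_n)             # word tendency state

    degree_freq = sum(w in DEGREE_WORDS for w, _ in words)       # degree-word frequency

    return {
        "text_length": total,
        "pos_ratio": pos_ratio,
        "word_tendency": tendency,
        "degree_word_freq": degree_freq,
        # person type, sentence-pattern proportion and overall emotion category
        # would be derived analogously from pronouns, sentence-final punctuation
        # and sentence-level sentiment.
    }

# Example: a tagged text with 20 'n' tokens out of 352 non-punctuation words
# yields pos_ratio["n"] == 20 / 352 (about 5.7%), matching the example above.
```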
S2, obtaining a historical text set from a preset historical text library, calculating the similarity between the target text after the word segmentation operation and the historical text set, and screening out a preset number of historical texts as a similar text set according to the similarity.
In a preferred embodiment of the present invention, the preset historical text library stores a set of historical texts which have the same service type as the target text and for which text conclusions have already been made. For example, if the target text is the case description of a policy to be underwritten, the historical text library is the set of case texts of previously underwritten policies; if the target text is a case report awaiting a doctor's diagnosis, the historical text library is the set of previously diagnosed case reports.
Further, the preferred embodiment of the present invention calculates the similarity between the target text and the historical text set on the basis of parts of speech of predetermined categories, for example the first 4 of the above 9 part-of-speech categories, namely nouns, verbs, adjectives and adverbs (words of these 4 parts of speech occur frequently and carry most of the weight in determining a text, while the remaining 5 parts of speech have little influence on the text conclusion).
In a preferred embodiment of the present invention, the method for calculating the similarity includes: screening out the words of the preset parts of speech (such as nouns, verbs, adjectives and adverbs) in the target text, and generating a target part-of-speech statistical list for the words of each part of speech. The target part-of-speech statistical list comprises the words themselves and their frequency of occurrence, i.e. the word frequency. Similarly, the words of the preset parts of speech in a given historical text are screened out, and a historical part-of-speech statistical list is generated for the words of each part of speech. Furthermore, similarity calculation is carried out between the target part-of-speech statistical list and the historical part-of-speech statistical list part of speech by part of speech, i.e. vector similarity is calculated separately for nouns against nouns, verbs against verbs, adjectives against adjectives and adverbs against adverbs, with the following calculation formula:
$$\cos(u,w)_j = \frac{\sum_{i=1}^{n} a_i b_i}{\sqrt{\sum_{i=1}^{n} a_i^2}\,\sqrt{\sum_{i=1}^{n} b_i^2}}$$

wherein u represents the target text, w represents one of the historical texts, j indexes the part-of-speech category and takes values from 1 to 4 (one similarity matching calculation for each of the 4 parts of speech), n represents the number of words of the given part of speech (noun, verb, adjective or adverb) and is determined empirically (for example, if similarity matching is performed on 10 nouns in u and w, n is 10 for the noun matching; if it is performed on 5 verbs, n is 5 for the verb matching), and a_i and b_i represent the word frequencies of the words of that part of speech in u and w respectively.
Therefore, according to the above method, 4 similarity values between one historical text and the target text are obtained, namely the noun vector similarity cos(u,w)_1, the verb vector similarity cos(u,w)_2, the adjective vector similarity cos(u,w)_3 and the adverb vector similarity cos(u,w)_4. The similarity between the target text and the historical text is then obtained by taking the average value:

$$\mathrm{sim}(u,w) = \frac{1}{4}\sum_{j=1}^{4}\cos(u,w)_j$$
Further, according to the similarity between the target text and each historical text, a preset number of historical texts are screened out of the historical text set in descending order of similarity. Preferably, the preset number of historical texts is 60.
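The per-part-of-speech cosine similarity, its averaging over the four parts of speech, and the top-60 screening can be sketched as follows; the aligned word-frequency vectors and the data layout are assumptions for illustration only.

```python
import math

# Minimal sketch of the per-part-of-speech cosine similarity and averaging
# described above; names and data layout are assumptions for illustration.
def pos_cosine(a, b):
    """a, b: aligned word-frequency vectors for one part of speech in u and w."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def text_similarity(u_vectors, w_vectors):
    """u_vectors/w_vectors: dicts mapping 'n', 'v', 'a', 'd' to frequency vectors."""
    scores = [pos_cosine(u_vectors[p], w_vectors[p]) for p in ("n", "v", "a", "d")]
    return sum(scores) / 4.0            # average of the four cos(u, w)_j values

def top_similar(target_vectors, history, k=60):
    """history: list of dicts, each assumed to carry a 'pos_vectors' entry."""
    ranked = sorted(history,
                    key=lambda h: text_similarity(target_vectors, h["pos_vectors"]),
                    reverse=True)
    return ranked[:k]
```

Building the aligned per-part-of-speech frequency vectors from the statistical lists (for example over a fixed number n of words per part of speech) is assumed to have been done beforehand.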
S3, obtaining the text attribute of each similar text in the similar text set, calculating the correlation coefficient between the text attribute of each similar text and the known text conclusion of the similar text, and selecting a preset number of text attributes as text features according to the correlation coefficient.
The text conclusion refers to a processing result or a judgment result obtained by judging the text according to the state of the text description, for example: if the text is a case description of the policy to be underwritten, the text conclusion is an amount of indemnity derived from the case description of the policy; if the target text is a case report diagnosed by a doctor, the text conclusion is a diagnosis result according to the case report; if the target text is the subjective answer sheet of the student, the text conclusion is the score obtained according to the answer sheet.
In the preferred embodiment of the present invention, the text attributes of each of the 60 similar texts are obtained according to the method in S1, that is, the 7 attributes of each similar text are obtained: text length, part-of-speech proportion, word tendency state, person type, degree-word frequency, sentence-pattern proportion and overall emotion category. Further, the invention calculates the correlation coefficient between each of the 7 text attributes of each similar text and the text conclusion of that similar text, and selects a preset number of text attributes in descending order of correlation coefficient. Preferably, in the preferred embodiment of the present invention, the preset number of text attributes is 3, which may include, for example, the overall emotion category, the degree-word frequency and the text length. It should be appreciated that the 3 most relevant text attributes may differ for different types of similar texts.
The calculation method of the correlation coefficient comprises the following steps:
$$\mathrm{Jaccard}(O_A, O_B) = \frac{|O_A \cap O_B|}{|O_A \cup O_B|}$$

$$|O_A \cup O_B| = |O_A| + |O_B| - |O_A \cap O_B|$$

wherein O_A and O_B represent the text attribute and the text conclusion respectively, |O_A| and |O_B| represent the number of words in the text attribute and in the text conclusion respectively, Jaccard(O_A, O_B) represents the similarity coefficient of the text attribute and the text conclusion, |O_A ∩ O_B| represents the number of words that the text attribute O_A and the text conclusion O_B have in common, and |O_A ∪ O_B| represents the total number of words after the identical words in O_A and O_B are merged.
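A sketch of the Jaccard-based ranking of text attributes is given below; how an attribute value is expressed as a word set, and the field names used, are assumptions for illustration, not the disclosed implementation.

```python
# Illustrative sketch of the Jaccard-style correlation used to rank text
# attributes; expressing a numeric attribute as a word set (e.g. by
# discretising it into descriptive tokens) is an assumption made here.
def jaccard(set_a, set_b):
    union = set_a | set_b
    return len(set_a & set_b) / len(union) if union else 0.0

def select_text_features(similar_texts, attribute_names, k=3):
    """similar_texts: dicts with 'attr_words' (per-attribute word sets) and
    'conclusion_words' (word set of the known text conclusion)."""
    scores = {}
    for name in attribute_names:
        coeffs = [jaccard(t["attr_words"][name], t["conclusion_words"])
                  for t in similar_texts]
        scores[name] = sum(coeffs) / len(coeffs)    # mean coefficient per attribute
    # keep the k attributes with the highest correlation coefficients
    return sorted(scores, key=scores.get, reverse=True)[:k]
```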
S4, training a linear regression model by using the similar text set, verifying the output values of the linear regression model by using the known text conclusions of the similar text set to obtain the deviation values between the known text conclusions and the output values, screening out a preset number of similar texts as a contrast text set according to the deviation values, carrying out numerical calculation on the text features of each contrast text in the contrast text set and the corresponding text attributes of the target text to obtain difference values, and selecting the text conclusion of the contrast text whose text features differ least from the text attributes of the target text as the text conclusion of the target text, thereby completing the text conclusion recommendation of the target text.
The preferred embodiment randomly divides the 60 similar texts into 3 groups A, B and C, each containing 20 similar texts. Further, the present invention trains a linear regression model by alternately taking 2 of the 3 groups as the training set, with the text features of the training set as input and the corresponding text conclusions as output, and keeping the remaining group as the verification set, that is: when A and B are the training set, C is the verification set; when B and C are the training set, A is the verification set; and when A and C are the training set, B is the verification set. In this way 3 groups of verification results are obtained, where each group of verification results consists of the deviation values of the 20 similar texts in that verification set. For example, when A and B are used as the training set and C as the verification set, the verification result of that group is the 20 deviation values obtained by comparing the 20 known text conclusion values of the 20 similar texts in group C with the 20 predicted text conclusions obtained by inputting the text features of those 20 similar texts into the linear regression model trained on groups A and B.
Further, the invention selects the verification set of the group with the minimum deviation. For example, if the combination with B and C as the training set and A as the verification set has the minimum average deviation, group A is selected; the text features of each of the 20 similar texts in group A are then compared one by one with the corresponding text attributes of the target text, and the text conclusion of the similar text with the minimum comparison result value is taken as the text conclusion of the target text, thereby completing the text conclusion recommendation of the target text.
For example, suppose the text attributes of the target text are: overall emotion category 1, degree-word frequency 17 and text length 400, and the text features of a similar text to be compared are: overall emotion category -1, degree-word frequency 28 and text length 496. The comparison is performed by taking the difference of each corresponding pair of values to obtain three difference values, averaging the three difference values and taking the absolute value of the average, which is the comparison result value. In this example the comparison result is |[(-1-1) + (28-17) + (496-400)]/3| = 35. The 20 similar texts are compared with the target text one by one to obtain 20 comparison result values, and the text conclusion of the similar text with the minimum comparison result value is taken as the text conclusion of the target text.
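The three-fold regression screening and the final comparison can be sketched as follows, using scikit-learn's LinearRegression for the regression step; the data layout (a numeric "features" vector of the 3 selected attributes and a numeric "conclusion" per similar text) is an assumption made for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative sketch of the three-fold screening and final comparison
# described above; field names and data layout are assumptions.
def screen_contrast_set(texts, n_groups=3, seed=0):
    rng = np.random.default_rng(seed)
    groups = np.array_split(rng.permutation(len(texts)), n_groups)
    best_group, best_dev = None, float("inf")
    for g in range(n_groups):
        train_idx = np.concatenate([groups[i] for i in range(n_groups) if i != g])
        val_idx = groups[g]
        X_tr = np.array([texts[i]["features"] for i in train_idx])
        y_tr = np.array([texts[i]["conclusion"] for i in train_idx])
        X_va = np.array([texts[i]["features"] for i in val_idx])
        y_va = np.array([texts[i]["conclusion"] for i in val_idx])
        pred = LinearRegression().fit(X_tr, y_tr).predict(X_va)
        dev = np.mean(np.abs(pred - y_va))          # mean deviation of this fold
        if dev < best_dev:                          # keep the fold with least deviation
            best_dev = dev
            best_group = [texts[i] for i in val_idx]
    return best_group

def recommend(target_features, contrast_set):
    def diff(t):   # absolute value of the mean of the per-feature differences
        return abs(np.mean(np.array(t["features"]) - np.array(target_features)))
    return min(contrast_set, key=diff)["conclusion"]
```

Calling `recommend([1, 17, 400], contrast_set)` against a contrast text whose features are (-1, 28, 496) gives a comparison value of |(-2 + 11 + 96)/3| = 35, matching the worked example above.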
The invention further provides an intelligent text conclusion recommending device. Fig. 2 is a schematic diagram illustrating an internal structure of an intelligent text conclusion recommending apparatus according to an embodiment of the present invention.
In the present embodiment, the intelligent text conclusion recommendation apparatus 1 may be a PC (Personal Computer), a terminal device such as a smart phone, a tablet Computer, or a mobile Computer, or may be a server. The intelligent text conclusion recommendation device 1 at least comprises a memory 11, a processor 12, a communication bus 13 and a network interface 14.
The memory 11 includes at least one type of readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may be an internal storage unit of the intelligent recommendation device for textual conclusions 1 in some embodiments, for example, a hard disk of the intelligent recommendation device for textual conclusions 1. The memory 11 may also be an external storage device of the intelligent recommendation device for textual conclusion 1 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the intelligent recommendation device for textual conclusion 1. Further, the memory 11 may also include both an internal storage unit of the intelligent recommendation device for textual conclusions 1 and an external storage device. The memory 11 may be used to store not only the application software installed in the intelligent text conclusion recommending apparatus 1 and various types of data, such as the code of the intelligent text conclusion recommending program 01, but also temporarily store data that has been output or will be output.
The processor 12 may be, in some embodiments, a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data Processing chip for executing program codes stored in the memory 11 or Processing data, such as executing the intelligent text conclusion recommendation program 01.
The communication bus 13 is used to realize connection communication between these components.
The network interface 14 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), typically used to establish a communication link between the apparatus 1 and other electronic devices.
Optionally, the apparatus 1 may further comprise a user interface, which may comprise a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display may also be referred to as a display screen or a display unit, where appropriate, for displaying information processed in the intelligent recommendation device for textual conclusions 1 and for displaying a visual user interface.
While FIG. 2 shows only the intelligent recommendation device 1 with components 11-14 and the intelligent recommendation program 01 for textual conclusions, those skilled in the art will appreciate that the structure shown in FIG. 1 does not constitute a limitation of the intelligent recommendation device 1 for textual conclusions, and may include fewer or more components than those shown, or some components in combination, or a different arrangement of components.
In the embodiment of the apparatus 1 shown in fig. 2, the memory 11 stores a text conclusion intelligent recommendation program 01; the processor 12, when executing the text conclusion intelligent recommendation program 01 stored in the memory 11, implements the following steps:
step one, a target text is obtained, word segmentation operation is carried out on the target text, and text attributes of the target text are obtained based on the word segmentation operation.
In a preferred embodiment of the present invention, the target text includes a case report to be diagnosed, a subjective answer sheet to be scored, an insurance policy whose insured amount is to be assessed, and other texts for which an evaluation or conclusion needs to be made on the text content.
In the preferred embodiment of the present invention, word segmentation is performed on the target text according to part-of-speech classification, and the target text is decomposed into a set of individual words. The part-of-speech classifications include, but are not limited to: noun (n), verb (v), adjective (a), adverb (d), preposition (p), conjunction (c), pronoun (r), quantifier (q) and punctuation (w). For example, the word segmentation operation is performed on the following sentence:
The constitution is the fundamental law that regulates the form of implementation and mode of operation of state power and adjusts the relationship between state power and citizens' rights, and it generally specifies the state system and the form of organization of political power.
The result is the sentence broken into word/part-of-speech pairs, for example: constitution/n, is/v, fundamental law/n, regulate/v, state/n, power/n, implementation/v, form/n, and/c, operation/v, mode/n, adjust/v, state/n, power/n, and/c, citizen/n, rights/n, relationship/n, ./w, usually/d, specify/v, state/n, system/n, and/c, political/n, power/n, organization/n, form/n, ./w
The text attribute refers to a symbolic feature capable of describing the property of the target text. In the preferred embodiment of the invention, the text attribute of the target text is obtained by traversing the target text after the word segmentation operation.
Preferably, the text attributes of the present invention include, but are not limited to: text length, part-of-speech proportion, word tendency state, person type, degree-word frequency, sentence-pattern proportion and overall emotion category.
The text length refers to the number of words in the target text after the word segmentation operation, excluding punctuation (w). The part-of-speech proportion refers to the proportion of each of the 9 part-of-speech classifications in the total number of words; for example, if a target text contains 20 nouns after word segmentation and 352 words in total, its noun proportion is 20/352 ≈ 5.7%. The word tendency state refers to analyzing the emotionally colored words in the target text after the word segmentation operation; in a preferred embodiment of the present invention the emotional colors are of 3 types, namely positive, negative and neutral, and the ratio of the numbers of the 3 types of words is the word tendency state. For example, if the emotionally colored words after segmentation of a certain target text number 18 positive, 6 negative and 45 neutral, the word tendency state is 18:6:45, i.e. 6:2:15. The person type refers to counting the pronouns in the target text after the word segmentation operation and taking the person with the most pronouns as the person type of the target text; for example, if the count finds 16 first-person pronouns, 7 second-person pronouns and 10 third-person pronouns, the person type of the segmented target text is the first person. The degree-word frequency refers to the number of words expressing intensity, such as "very", "extremely" and "most". The sentence-pattern proportion refers to the ratio of the numbers of the sentence-pattern types, which are usually declarative, exclamatory, interrogative and imperative sentences. The overall emotion category is determined by considering the sentences in the text: a sentence expressing a positive emotion, for example "the lungs show no abnormal shadow on the case diagnosis report and their function is good", is counted as a positive-emotion sentence; a sentence expressing a negative emotion, for example the accident description in an insurance claim "the engine exploded because of a violent lateral impact to the vehicle", is counted as a negative-emotion sentence; the remaining declarative sentences are mostly neutral-emotion sentences. The emotion with the largest number of sentences is taken as the overall emotion category of the target text, with positive emotion recorded as 1, negative emotion as -1 and neutral emotion as 0.
And secondly, acquiring a historical text set from a preset historical text library, calculating the similarity between the target text after the word segmentation operation and the historical text set, and screening out a preset number of historical texts as a similar text set according to the similarity.
In a preferred embodiment of the present invention, the preset historical text library stores a set of historical texts which have the same service type as the target text and for which text conclusions have already been made. For example, if the target text is the case description of a policy to be underwritten, the historical text library is the set of case texts of previously underwritten policies; if the target text is a case report awaiting a doctor's diagnosis, the historical text library is the set of previously diagnosed case reports.
Further, the preferred embodiment of the present invention calculates the similarity between the target text and the historical text set on the basis of parts of speech of predetermined categories, for example the first 4 of the above 9 part-of-speech categories, namely nouns, verbs, adjectives and adverbs (words of these 4 parts of speech occur frequently and carry most of the weight in determining a text, while the remaining 5 parts of speech have little influence on the text conclusion).
In a preferred embodiment of the present invention, the method for calculating the similarity includes: screening out the words of the preset parts of speech (such as nouns, verbs, adjectives and adverbs) in the target text, and generating a target part-of-speech statistical list for the words of each part of speech. The target part-of-speech statistical list comprises the words themselves and their frequency of occurrence, i.e. the word frequency. Similarly, the words of the preset parts of speech in a given historical text are screened out, and a historical part-of-speech statistical list is generated for the words of each part of speech. Furthermore, similarity calculation is carried out between the target part-of-speech statistical list and the historical part-of-speech statistical list part of speech by part of speech, i.e. vector similarity is calculated separately for nouns against nouns, verbs against verbs, adjectives against adjectives and adverbs against adverbs, with the following calculation formula:
$$\cos(u,w)_j = \frac{\sum_{i=1}^{n} a_i b_i}{\sqrt{\sum_{i=1}^{n} a_i^2}\,\sqrt{\sum_{i=1}^{n} b_i^2}}$$

wherein u represents the target text, w represents one of the historical texts, j indexes the part-of-speech category and takes values from 1 to 4 (one similarity matching calculation for each of the 4 parts of speech), n represents the number of words of the given part of speech (noun, verb, adjective or adverb) and is determined empirically (for example, if similarity matching is performed on 10 nouns in u and w, n is 10 for the noun matching; if it is performed on 5 verbs, n is 5 for the verb matching), and a_i and b_i represent the word frequencies of the words of that part of speech in u and w respectively.
Therefore, according to the above method, 4 similarity values between one historical text and the target text are obtained, namely the noun vector similarity cos(u,w)_1, the verb vector similarity cos(u,w)_2, the adjective vector similarity cos(u,w)_3 and the adverb vector similarity cos(u,w)_4. The similarity between the target text and the historical text is then obtained by taking the average value:

$$\mathrm{sim}(u,w) = \frac{1}{4}\sum_{j=1}^{4}\cos(u,w)_j$$
Further, according to the similarity between the target text and each historical text, a preset number of historical texts are screened out of the historical text set in descending order of similarity. Preferably, the preset number of historical texts is 60.
And step three, acquiring the text attribute of each similar text in the similar text set, calculating the correlation coefficient between the text attribute of each similar text and the known text conclusion of the similar text, and selecting a preset number of text attributes as text features according to the correlation coefficient.
The text conclusion refers to a processing result or a judgment result obtained by judging the text according to the state of the text description, for example: if the text is a case description of the policy to be underwritten, the text conclusion is an amount of indemnity derived from the case description of the policy; if the target text is a case report diagnosed by a doctor, the text conclusion is a diagnosis result according to the case report; if the target text is the subjective answer sheet of the student, the text conclusion is the score obtained according to the answer sheet.
In the preferred embodiment of the present invention, the text attributes of each of the 60 similar texts are obtained according to the method in the above step one, that is, the 7 attributes of each similar text are obtained: text length, part-of-speech proportion, word tendency state, person type, degree-word frequency, sentence-pattern proportion and overall emotion category. Further, the invention calculates the correlation coefficient between each of the 7 text attributes of each similar text and the text conclusion of that similar text, and selects a preset number of text attributes in descending order of correlation coefficient. Preferably, in the preferred embodiment of the present invention, the preset number of text attributes is 3, which may include, for example, the overall emotion category, the degree-word frequency and the text length. It should be appreciated that the 3 most relevant text attributes may differ for different types of similar texts.
The calculation method of the correlation coefficient comprises the following steps:
$$\mathrm{Jaccard}(O_A, O_B) = \frac{|O_A \cap O_B|}{|O_A \cup O_B|}$$

$$|O_A \cup O_B| = |O_A| + |O_B| - |O_A \cap O_B|$$

wherein O_A and O_B represent the text attribute and the text conclusion respectively, |O_A| and |O_B| represent the number of words in the text attribute and in the text conclusion respectively, Jaccard(O_A, O_B) represents the similarity coefficient of the text attribute and the text conclusion, |O_A ∩ O_B| represents the number of words that the text attribute O_A and the text conclusion O_B have in common, and |O_A ∪ O_B| represents the total number of words after the identical words in O_A and O_B are merged.
And fourthly, training a linear regression model by using the similar text set, verifying the output values of the linear regression model by using the known text conclusions of the similar text set to obtain the deviation values between the known text conclusions and the output values, screening out a preset number of similar texts as a contrast text set according to the deviation values, carrying out numerical calculation on the text features of each contrast text in the contrast text set and the corresponding text attributes of the target text to obtain difference values, and selecting the text conclusion of the contrast text whose text features differ least from the text attributes of the target text as the text conclusion of the target text, thereby completing the text conclusion recommendation of the target text.
The preferred embodiment randomly divides the 60 similar texts into 3 groups A, B and C, each containing 20 similar texts. Further, the present invention trains a linear regression model by alternately taking 2 of the 3 groups as the training set, with the text features of the training set as input and the corresponding text conclusions as output, and keeping the remaining group as the verification set, that is: when A and B are the training set, C is the verification set; when B and C are the training set, A is the verification set; and when A and C are the training set, B is the verification set. In this way 3 groups of verification results are obtained, where each group of verification results consists of the deviation values of the 20 similar texts in that verification set. For example, when A and B are used as the training set and C as the verification set, the verification result of that group is the 20 deviation values obtained by comparing the 20 known text conclusion values of the 20 similar texts in group C with the 20 predicted text conclusions obtained by inputting the text features of those 20 similar texts into the linear regression model trained on groups A and B.
Further, the invention selects the verification set of the group with the minimum deviation. For example, if the combination with B and C as the training set and A as the verification set has the minimum average deviation, group A is selected; the text features of each of the 20 similar texts in group A are then compared one by one with the corresponding text attributes of the target text, and the text conclusion of the similar text with the minimum comparison result value is taken as the text conclusion of the target text, thereby completing the text conclusion recommendation of the target text.
For example, suppose the text attributes of the target text are: overall emotion category 1, degree-word frequency 17 and text length 400, and the text features of a similar text to be compared are: overall emotion category -1, degree-word frequency 28 and text length 496. The comparison is performed by taking the difference of each corresponding pair of values to obtain three difference values, averaging the three difference values and taking the absolute value of the average, which is the comparison result value. In this example the comparison result is |[(-1-1) + (28-17) + (496-400)]/3| = 35. The 20 similar texts are compared with the target text one by one to obtain 20 comparison result values, and the text conclusion of the similar text with the minimum comparison result value is taken as the text conclusion of the target text.
Alternatively, in other embodiments, the intelligent text conclusion recommending program may be divided into one or more modules, and the one or more modules are stored in the memory 11 and executed by one or more processors (in this embodiment, the processor 12) to implement the present invention, where the module referred to in the present invention refers to a series of computer program instruction segments capable of performing a specific function for describing the execution process of the intelligent text conclusion recommending program in the intelligent text conclusion recommending apparatus.
For example, referring to fig. 3, a schematic diagram of program modules of a text conclusion intelligent recommendation program in an embodiment of the intelligent text conclusion recommendation apparatus according to the present invention is shown, in this embodiment, the text conclusion intelligent recommendation program may be divided into a word segmentation module 10, a similarity calculation module 20, a correlation coefficient calculation module 30, and a text conclusion recommendation module 40, and exemplarily:
the word segmentation module 10 is configured to: the method comprises the steps of obtaining a target text, carrying out word segmentation operation on the target text, and obtaining text attributes of the target text based on the word segmentation operation.
The similarity calculation module 20 is configured to: acquiring a historical text set from a preset historical text library, calculating the similarity between the target text based on word segmentation operation and the historical text set, and screening a preset number of historical text sets as similar text sets according to the similarity.
The correlation coefficient calculation module 30 is configured to: acquiring the text attribute of each similar text in the similar text set, calculating the correlation coefficient between the text attribute of each similar text and the known text conclusion of the similar text, and selecting a preset number of text attributes as text features according to the sequence from high to low of the correlation coefficient.
The text conclusion recommendation module 40 is configured to: training a linear regression model by using the similar text set, verifying an output value of the linear regression model by using a known text conclusion of the similar text set to obtain a deviation value between the known text conclusion and the output value, screening out a preset number of similar texts as a comparison text set according to the deviation value, carrying out numerical calculation on text characteristics of each comparison text in the comparison text set and text attributes corresponding to the target text to obtain a difference value, and selecting the text characteristic with the minimum text attribute difference value corresponding to the target text as the text conclusion of the target text, thereby completing text conclusion recommendation of the target text. The functions or operation steps implemented by the above-mentioned segmentation module 10, similarity calculation module 20, correlation coefficient calculation module 30, and text conclusion recommendation module 40 when executed are substantially the same as those of the above-mentioned embodiments, and are not described herein again.
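For illustration only, the four program modules described above could be composed as in the following skeleton; the class, method names and call signatures are assumptions, not the program's actual interfaces.

```python
# Hypothetical skeleton mirroring the module division above (word segmentation,
# similarity calculation, correlation coefficient calculation and text
# conclusion recommendation); all names here are assumptions only.
class TextConclusionRecommender:
    def __init__(self, segmenter, similarity, correlation, recommender):
        self.segmenter = segmenter        # word segmentation module 10
        self.similarity = similarity      # similarity calculation module 20
        self.correlation = correlation    # correlation coefficient calculation module 30
        self.recommender = recommender    # text conclusion recommendation module 40

    def run(self, target_text, history_library):
        tokens, target_attrs = self.segmenter.process(target_text)
        similar_texts = self.similarity.top_similar(tokens, history_library)
        features = self.correlation.select_features(similar_texts)
        return self.recommender.recommend(target_attrs, similar_texts, features)
```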
Furthermore, an embodiment of the present invention provides a computer-readable storage medium, where a text conclusion intelligent recommendation program is stored on the computer-readable storage medium, where the text conclusion intelligent recommendation program is executable by one or more processors to implement the following operations:
acquiring a target text, performing word segmentation operation on the target text, and acquiring text attributes of the target text based on the word segmentation operation;
acquiring a historical text set from a preset historical text library, calculating the similarity between the target text after the word segmentation operation and the historical text set, and screening out a preset number of historical texts as a similar text set according to the similarity;
acquiring the text attribute of each similar text in the similar text set, calculating the correlation coefficient between the text attribute of each similar text and the known text conclusion of the similar text, and selecting a preset number of text attributes as text features in descending order of correlation coefficient;
training a linear regression model by using the similar text set, verifying an output value of the linear regression model by using a known text conclusion of the similar text set to obtain a deviation value between the known text conclusion and the output value, and screening out a preset number of similar texts as a comparison text set according to the deviation value;
and performing numerical calculation on the text features of each contrast text in the contrast text set and the corresponding text attributes of the target text to obtain difference values, and selecting the text conclusion of the contrast text whose text features differ least from the text attributes of the target text as the text conclusion of the target text, thereby completing the text conclusion recommendation of the target text. The specific implementation of the computer-readable storage medium of the present invention is substantially the same as the above embodiments of the intelligent text conclusion recommendation apparatus and method, and will not be described here again.
It should be noted that the above-mentioned numbers of the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments. And the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium as described above (e.g., ROM/RAM, magnetic disk, or optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention; all equivalent structural and process modifications made using the contents of the present specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, are likewise included in the scope of the present invention.

Claims (10)

1. A text conclusion intelligent recommendation method is characterized by comprising the following steps:
acquiring a target text, performing a word segmentation operation on the target text, and acquiring text attributes of the target text based on the word segmentation operation;
acquiring a historical text set from a preset historical text library, calculating the similarity between the word-segmented target text and each historical text in the historical text set, and screening out a preset number of historical texts as a similar text set according to the similarity;
acquiring the text attributes of each similar text in the similar text set, calculating the correlation coefficient between each text attribute of a similar text and the known text conclusion of that similar text, and selecting a preset number of text attributes as text features according to the correlation coefficients;
training a linear regression model by using the similar text set, verifying an output value of the linear regression model by using a known text conclusion of the similar text set to obtain a deviation value between the known text conclusion and the output value, and screening out a preset number of similar texts as a comparison text set according to the deviation value;
and performing a numerical calculation between the text features of each comparison text in the comparison text set and the corresponding text attributes of the target text to obtain difference values, and selecting the known text conclusion of the comparison text whose text features differ least from the corresponding text attributes of the target text as the text conclusion of the target text, thereby completing the text conclusion recommendation for the target text.
2. The text conclusion intelligent recommendation method of claim 1, wherein the text attributes include: text length, part-of-speech proportion, word tendency state, person name type, degree word frequency, sentence proportion, and overall emotion category.
3. The text conclusion intelligent recommendation method of claim 1, wherein calculating the similarity between the word-segmented target text and the historical text set comprises:
screening words in the target text according to their parts of speech, and generating a target part-of-speech statistical list from the screened words;
screening words in each historical text of the historical text set according to their parts of speech to generate a historical part-of-speech statistical list; and
calculating the similarity between the target part-of-speech statistical list and the historical part-of-speech statistical list by using a similarity algorithm.
4. The text conclusion intelligent recommendation method of claim 3, wherein the similarity algorithm comprises:
[Formula image FDA0002369461340000021: the similarity formula between the target part-of-speech statistical list and a historical part-of-speech statistical list, with symbols as defined below]
wherein u represents the target text, w represents one of the historical texts, j indexes the part-of-speech types, n represents the number of words of a given part of speech, and a_i and b_i respectively represent the word frequencies of the words of that part of speech in u and w.
5. The text conclusion intelligent recommendation method according to any one of claims 1 to 4, wherein the correlation coefficient between a text attribute of each similar text and the known text conclusion of that similar text is calculated as follows:
Jaccard(O_A, O_B) = |O_A ∩ O_B| / |O_A ∪ O_B|
Jaccard(O_A, O_B) = |O_A ∩ O_B| / (|O_A| + |O_B| - |O_A ∩ O_B|)
wherein O_A and O_B respectively represent the text attribute and the text conclusion, |O_A| and |O_B| respectively denote the number of words in the text attribute and in the text conclusion, Jaccard(O_A, O_B) denotes the similarity coefficient between the text attribute and the text conclusion, |O_A ∩ O_B| denotes the number of words of the text attribute O_A that also appear in the text conclusion O_B, and |O_A ∪ O_B| denotes the total number of distinct words after the words of O_A and O_B are merged and duplicates are combined (an illustrative sketch of this calculation is given after the claims).
6. An intelligent text conclusion recommendation device, comprising a memory and a processor, wherein the memory stores an intelligent text conclusion recommendation program operable on the processor, and the intelligent text conclusion recommendation program, when executed by the processor, implements the following steps:
acquiring a target text, performing a word segmentation operation on the target text, and acquiring text attributes of the target text based on the word segmentation operation;
acquiring a historical text set from a preset historical text library, calculating the similarity between the word-segmented target text and each historical text in the historical text set, and screening out a preset number of historical texts as a similar text set according to the similarity;
acquiring the text attributes of each similar text in the similar text set, calculating the correlation coefficient between each text attribute of a similar text and the known text conclusion of that similar text, and selecting a preset number of text attributes as text features according to the correlation coefficients;
training a linear regression model by using the similar text set, verifying an output value of the linear regression model by using a known text conclusion of the similar text set to obtain a deviation value between the known text conclusion and the output value, and screening out a preset number of similar texts as a comparison text set according to the deviation value;
and performing a numerical calculation between the text features of each comparison text in the comparison text set and the corresponding text attributes of the target text to obtain difference values, and selecting the known text conclusion of the comparison text whose text features differ least from the corresponding text attributes of the target text as the text conclusion of the target text, thereby completing the text conclusion recommendation for the target text.
7. The intelligent text conclusion recommendation device according to claim 6, wherein the text attributes include: text length, part-of-speech proportion, word tendency state, person name type, degree word frequency, sentence proportion, and overall emotion category.
8. The intelligent text conclusion recommendation device according to claim 6, wherein calculating the similarity between the word-segmented target text and the historical text set comprises:
screening words in the target text according to their parts of speech, and generating a target part-of-speech statistical list from the screened words;
screening words in each historical text of the historical text set according to their parts of speech to generate a historical part-of-speech statistical list; and
calculating the similarity between the target part-of-speech statistical list and the historical part-of-speech statistical list by using a similarity algorithm.
9. The intelligent text conclusion recommendation device according to claim 8, wherein the similarity algorithm comprises:
[Formula image FDA0002369461340000041: the similarity formula between the target part-of-speech statistical list and a historical part-of-speech statistical list, with symbols as defined below]
wherein u represents the target text, w represents one of the historical texts, j indexes the part-of-speech types, n represents the number of words of a given part of speech, and a_i and b_i respectively represent the word frequencies of the words of that part of speech in u and w.
10. A computer-readable storage medium having a text conclusion intelligent recommendation program stored thereon, the text conclusion intelligent recommendation program being executable by one or more processors to implement the steps of the text conclusion intelligent recommendation method according to any one of claims 1 to 5.
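To illustrate the correlation coefficient of claim 5 and the feature selection of claim 1, the sketch below computes a Jaccard-style coefficient between the words of a text attribute and the words of a known text conclusion and ranks attributes by it. Treating each side as a set of words, and every name used here, are assumptions of this sketch rather than requirements of the claims.

```python
# Minimal sketch of the Jaccard-style correlation coefficient of claim 5 and of
# selecting the top-ranked text attributes as text features. Treating both sides
# as word sets is an assumption of this sketch; all names are illustrative.
def jaccard(attribute_words, conclusion_words):
    a, b = set(attribute_words), set(conclusion_words)
    if not a or not b:
        return 0.0
    intersection = len(a & b)               # |O_A ∩ O_B|
    union = len(a) + len(b) - intersection  # |O_A ∪ O_B|
    return intersection / union

def select_text_features(attribute_word_lists, conclusion_words, preset_number=3):
    """attribute_word_lists: dict mapping attribute name -> list of words.
    Returns the preset number of attribute names with the highest coefficients."""
    ranked = sorted(
        attribute_word_lists,
        key=lambda name: jaccard(attribute_word_lists[name], conclusion_words),
        reverse=True,
    )
    return ranked[:preset_number]
```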
CN202010051191.XA 2020-01-16 2020-01-16 Text conclusion intelligent recommendation method and device and computer readable storage medium Active CN111275091B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010051191.XA CN111275091B (en) 2020-01-16 2020-01-16 Text conclusion intelligent recommendation method and device and computer readable storage medium
PCT/CN2020/098979 WO2021143056A1 (en) 2020-01-16 2020-06-29 Text conclusion intelligent recommendation method and apparatus, computer device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010051191.XA CN111275091B (en) 2020-01-16 2020-01-16 Text conclusion intelligent recommendation method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111275091A true CN111275091A (en) 2020-06-12
CN111275091B CN111275091B (en) 2024-05-10

Family

ID=71002262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010051191.XA Active CN111275091B (en) 2020-01-16 2020-01-16 Text conclusion intelligent recommendation method and device and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN111275091B (en)
WO (1) WO2021143056A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560477A (en) * 2020-12-09 2021-03-26 中科讯飞互联(北京)信息科技有限公司 Text completion method, electronic device and storage device
WO2021143056A1 (en) * 2020-01-16 2021-07-22 平安科技(深圳)有限公司 Text conclusion intelligent recommendation method and apparatus, computer device and computer-readable storage medium
CN114493904A (en) * 2022-04-18 2022-05-13 北京合理至臻科技有限公司 Intelligent core protection wind control method, system, equipment and medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114579706B (en) * 2022-03-07 2023-09-29 桂林旅游学院 Automatic subjective question review method based on BERT neural network and multi-task learning
CN116578673B (en) * 2023-07-03 2024-02-09 北京凌霄文苑教育科技有限公司 Text feature retrieval method based on linguistic logics in digital economy field

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107944911A (en) * 2017-11-18 2018-04-20 电子科技大学 A kind of recommendation method of the commending system based on text analyzing
CN108197137A (en) * 2017-11-20 2018-06-22 广州视源电子科技股份有限公司 Text processing method and device, storage medium, processor and terminal
CN109063147A (en) * 2018-08-06 2018-12-21 北京航空航天大学 Online course forum content recommendation method and system based on text similarity
CN109472008A (en) * 2018-11-20 2019-03-15 武汉斗鱼网络科技有限公司 A kind of Text similarity computing method, apparatus and electronic equipment
CN109614484A (en) * 2018-11-09 2019-04-12 华南理工大学 A kind of Text Clustering Method and its system based on classification effectiveness
CN110163476A (en) * 2019-04-15 2019-08-23 重庆金融资产交易所有限责任公司 Project intelligent recommendation method, electronic device and storage medium
CN110413773A (en) * 2019-06-20 2019-11-05 平安科技(深圳)有限公司 Intelligent text classification method, device and computer readable storage medium
CN110413728A (en) * 2019-06-20 2019-11-05 平安科技(深圳)有限公司 Exercise recommended method, device, equipment and storage medium
CN110427480A (en) * 2019-06-28 2019-11-08 平安科技(深圳)有限公司 Personalized text intelligent recommendation method, apparatus and computer readable storage medium
CN110442684A (en) * 2019-08-14 2019-11-12 山东大学 A kind of class case recommended method based on content of text
CN110457574A (en) * 2019-07-05 2019-11-15 深圳壹账通智能科技有限公司 Information recommendation method, device and the storage medium compared based on data
US20190354583A1 (en) * 2018-05-21 2019-11-21 State Street Corporation Techniques for determining categorized text
CN110489751A (en) * 2019-08-13 2019-11-22 腾讯科技(深圳)有限公司 Text similarity computing method and device, storage medium, electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107122863A (en) * 2017-04-28 2017-09-01 厦门大学 Overstock inquiry and the Forecasting Methodology of material correlative attribute
CN107818138B (en) * 2017-09-28 2020-05-19 银江股份有限公司 Case law regulation recommendation method and system
CN109299007A (en) * 2018-09-18 2019-02-01 哈尔滨工程大学 A kind of defect repair person's auto recommending method
CN109446416B (en) * 2018-09-26 2021-09-28 南京大学 Law recommendation method based on word vector model
CN111275091B (en) * 2020-01-16 2024-05-10 平安科技(深圳)有限公司 Text conclusion intelligent recommendation method and device and computer readable storage medium

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107944911A (en) * 2017-11-18 2018-04-20 电子科技大学 A kind of recommendation method of the commending system based on text analyzing
CN108197137A (en) * 2017-11-20 2018-06-22 广州视源电子科技股份有限公司 Text processing method and device, storage medium, processor and terminal
US20190354583A1 (en) * 2018-05-21 2019-11-21 State Street Corporation Techniques for determining categorized text
CN109063147A (en) * 2018-08-06 2018-12-21 北京航空航天大学 Online course forum content recommendation method and system based on text similarity
CN109614484A (en) * 2018-11-09 2019-04-12 华南理工大学 A kind of Text Clustering Method and its system based on classification effectiveness
CN109472008A (en) * 2018-11-20 2019-03-15 武汉斗鱼网络科技有限公司 A kind of Text similarity computing method, apparatus and electronic equipment
CN110163476A (en) * 2019-04-15 2019-08-23 重庆金融资产交易所有限责任公司 Project intelligent recommendation method, electronic device and storage medium
CN110413773A (en) * 2019-06-20 2019-11-05 平安科技(深圳)有限公司 Intelligent text classification method, device and computer readable storage medium
CN110413728A (en) * 2019-06-20 2019-11-05 平安科技(深圳)有限公司 Exercise recommended method, device, equipment and storage medium
CN110427480A (en) * 2019-06-28 2019-11-08 平安科技(深圳)有限公司 Personalized text intelligent recommendation method, apparatus and computer readable storage medium
CN110457574A (en) * 2019-07-05 2019-11-15 深圳壹账通智能科技有限公司 Information recommendation method, device and the storage medium compared based on data
CN110489751A (en) * 2019-08-13 2019-11-22 腾讯科技(深圳)有限公司 Text similarity computing method and device, storage medium, electronic equipment
CN110442684A (en) * 2019-08-14 2019-11-12 山东大学 A kind of class case recommended method based on content of text

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张均胜 et al.: "An automatic scoring method for subjective questions based on short-text similarity calculation", 图书情报工作 (Library and Information Service), vol. 58, no. 19, 5 October 2014 (2014-10-05), page 31 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021143056A1 (en) * 2020-01-16 2021-07-22 平安科技(深圳)有限公司 Text conclusion intelligent recommendation method and apparatus, computer device and computer-readable storage medium
CN112560477A (en) * 2020-12-09 2021-03-26 中科讯飞互联(北京)信息科技有限公司 Text completion method, electronic device and storage device
CN112560477B (en) * 2020-12-09 2024-04-16 科大讯飞(北京)有限公司 Text completion method, electronic equipment and storage device
CN114493904A (en) * 2022-04-18 2022-05-13 北京合理至臻科技有限公司 Intelligent core protection wind control method, system, equipment and medium
CN114493904B (en) * 2022-04-18 2022-06-28 北京合理至臻科技有限公司 Intelligent core protection wind control method, system, equipment and medium

Also Published As

Publication number Publication date
CN111275091B (en) 2024-05-10
WO2021143056A1 (en) 2021-07-22

Similar Documents

Publication Publication Date Title
CN111275091A (en) Intelligent text conclusion recommendation method and device and computer readable storage medium
US10818397B2 (en) Clinical content analytics engine
US20210232762A1 (en) Architectures for natural language processing
Hunzaker Cultural sentiments and schema-consistency bias in information transmission
Reiter et al. An investigation into the validity of some metrics for automatically evaluating natural language generation systems
US9678949B2 (en) Vital text analytics system for the enhancement of requirements engineering documents and other documents
US9092789B2 (en) Method and system for semantic analysis of unstructured data
AU2022227854A1 (en) Automated classification of emotio-cogniton
Cavalcanti et al. Detection and evaluation of cheating on college exams using supervised classification
CN110309279A (en) Based on language model, method, apparatus and computer equipment are practiced in speech therapy
CN110502622A (en) Common medical question and answer data creation method, device and computer equipment
WO2022051436A1 (en) Personalized learning system
US20140101259A1 (en) System and Method for Threat Assessment
Amirhosseini et al. Automating the process of identifying the preferred representational system in Neuro Linguistic Programming using Natural Language Processing
Tolston et al. Beyond frequency counts: Novel conceptual recurrence analysis metrics to index semantic coordination in team communications
Trinh Ha et al. Identification of intimate partner violence from free text descriptions in social media
Uddin et al. Evaluation of Google’s voice recognition and sentence classification for health care applications
CA3207044A1 (en) Automated classification of emotio-cogniton
Bader The German bekommen passive: A case study on frequency and grammaticality
Francisco et al. Emotag: An approach to automated markup of emotions in texts
CN111144512A (en) Occupation guidance method and device based on EMLo pre-training model and storage medium
Kim et al. Developing information quality assessment framework of presentation slides
JP2004054732A (en) Human resource utilization support system and human resource utilization support program
JP2003345785A (en) System and program for evaluating ability
CN108804627B (en) Information acquisition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant