WO2021143056A1 - Intelligent text conclusion recommendation method, apparatus, computer device, and computer-readable storage medium - Google Patents

Info

Publication number
WO2021143056A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2020/098979
Other languages
English (en)
French (fr)
Inventor
李海翔
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2021143056A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10: Complex mathematical operations
    • G06F 17/18: Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08: Insurance

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to an intelligent text conclusion recommendation method and device, computer device, and computer-readable storage medium.
  • In insurance underwriting, operators need to accurately locate customer risks, collect customer-related information in light of those risks to determine the degree of risk, and finally make an appropriate underwriting decision on the insurance policy according to the underwriting rules, judging whether the customer's policy can be underwritten.
  • the inventor realized that doing this work accurately requires not only rich insurance, medical, and financial knowledge, but also extensive experience in case evaluation. For a newcomer just entering the industry, senior practitioners therefore inevitably spend a great deal of time on guidance, which consumes considerable labor cost, and an intelligent coaching method is urgently needed to free up this guidance manpower.
  • This application provides a method, device, computer device, and computer-readable storage medium for intelligently recommending text conclusions, the main purpose of which is to make intelligent judgments of text conclusions based on historical data and free up manual operations.
  • a text conclusion intelligent recommendation method includes:
  • this application also provides a text conclusion intelligent recommendation device, including:
  • the word segmentation module is used to obtain the target text, perform a word segmentation operation on the target text, and obtain the text attributes of the target text based on the word segmentation operation;
  • the similarity calculation module is used to obtain a historical text set from a preset historical text library, calculate the similarity between the target text and the historical text set based on the word segmentation operation, and screen out a preset number of historical texts as the similar text set according to the similarity;
  • the correlation coefficient calculation module is used to obtain the text attributes of each similar text in the similar text set, calculate the correlation coefficient between each text attribute of each similar text and the known text conclusion of that similar text, and select a preset number of text attributes as text features according to the correlation coefficients;
  • the text conclusion recommendation module is used to train a linear regression model with the similar text set, verify the output values of the linear regression model against the known text conclusions of the similar texts to obtain the deviation between each known text conclusion and the corresponding output value, screen out a preset number of similar texts as a comparative text set according to the deviation values, numerically compare the text features of each comparative text with the corresponding text attributes of the target text to obtain difference values, and select the text conclusion of the comparative text whose difference value from the target text is smallest as the text conclusion of the target text, so as to complete the text conclusion recommendation for the target text.
  • the present application also provides a computer device including a memory and a processor. The memory stores a text conclusion intelligent recommendation program that can be run on the processor, and when the text conclusion intelligent recommendation program is executed by the processor, the following steps are implemented:
  • the present application also provides a computer-readable storage medium. The computer-readable storage medium stores a text conclusion intelligent recommendation program, and when the text conclusion intelligent recommendation program is executed by one or more processors, the following steps are implemented:
  • the text conclusion intelligent recommendation method, device, computer device, and computer-readable storage medium proposed in this application obtain a historical text set when the user makes a conclusion judgment on the target text, screen similar texts out of the historical text set by similarity, and train a linear regression model to find a suitable conclusion for the target text, so that no human judgment is required and manual operation is freed up.
  • FIG. 1 is a schematic flowchart of a method for intelligently recommending text conclusions according to an embodiment of the application
  • FIG. 2 is a schematic diagram of the internal structure of a computer device provided by an embodiment of the application.
  • FIG. 3 is a schematic diagram of modules of a text conclusion intelligent recommendation device provided by an embodiment of the application.
  • Referring to FIG. 1, it is a schematic flowchart of a method for intelligently recommending text conclusions according to an embodiment of this application.
  • the method can be executed by a computer device, and the computer device can be implemented by software and/or hardware.
  • the method for intelligently recommending text conclusions includes:
  • the target text includes: a case report to be diagnosed, a subjective question answer sheet to be scored, an insurance policy to be approved for an insurance amount, and other texts that require an evaluation or conclusion on the text content.
  • the preferred embodiment of the present application performs word segmentation operations on the target text according to part-of-speech classification, and disassembles the target text into a collection with a single word as a unit.
  • the part-of-speech classification includes, but is not limited to: noun (n), verb (v), adjective (a), adverb (d), preposition (p), conjunction (c), pronoun (r), quantifier (q), and Punctuation (w).
  • the constitution is the fundamental law that regulates the realization and operation of state power, and adjusts the relationship between state power and civil rights. It usually stipulates the state system and the form of power organization.
  • constitution_n is_v norm_v country_n power_n realization_v form_n and_c operation_v method_n ,_w adjustment_v country_n power_n and_c citizen_n rights_n between_f relationship_n fundamental law_n ,_w it_r usually_d stipulates_v country_n system_n ,_w regime_n organization_n form_n ._w
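As an illustrative sketch (not part of the application itself), the segmentation output above can be represented as (word, tag) pairs. A real system would use a trained part-of-speech tagger (for example jieba's posseg mode for Chinese); here a tiny hand-written lexicon stands in, and its entries are purely hypothetical:

```python
# Minimal stand-in for the part-of-speech segmentation step.
# POS_LEXICON is a toy, hand-written lexicon (illustrative only);
# tags follow the scheme above: n=noun, v=verb, a=adjective,
# d=adverb, r=pronoun, w=punctuation.
POS_LEXICON = {
    "constitution": "n", "is": "v", "fundamental": "a", "law": "n",
    "usually": "d", "it": "r", ",": "w", ".": "w",
}

def segment_with_pos(tokens):
    """Attach a part-of-speech tag to each token; unknown words get 'x'."""
    return [(t, POS_LEXICON.get(t, "x")) for t in tokens]

tagged = segment_with_pos(["it", "usually", "is", "."])
```

The (word, tag) pair list is the input assumed by the attribute and similarity sketches that follow.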
  • the text attribute refers to a landmark feature that can describe the nature of the target text.
  • the preferred embodiment of the present application obtains the text attributes of the target text by traversing the target text after the word segmentation operation.
  • the text attributes described in this application include, but are not limited to: text length, part of speech weight, word tendency state, person type, degree of word frequency, sentence pattern ratio, and overall emotion category.
  • the text length refers to the number of words in the target text after the word segmentation operation, counted excluding the _w (punctuation mark) tokens.
  • the word tendency state refers to the emotional tendency of the words, obtained by analyzing the emotional words in the target text after the word segmentation operation.
  • the person type refers to counting the pronouns in the target text after the word segmentation operation and taking the person with the largest number of pronouns as the person type of the target text. For example: the pronouns in the target text after the word segmentation operation are counted, and there are 16 first-person pronouns, 7 second-person pronouns, and 10 third-person pronouns; the person type of the target text after the word segmentation operation is therefore the first person.
  • the degree-word frequency refers to the number of words expressing an intensity of degree, such as "rather", "very", "extremely", and "most".
  • the sentence pattern ratio refers to the proportion of each of the usual sentence pattern types (declarative, exclamatory, interrogative, and imperative sentences) among the sentences of the target text.
  • the overall emotion category refers to the emotion category with the largest number of sentences in the target text. A sentence expressing a positive emotion, for example "the lungs have no abnormal shadows and are functioning well" in a case diagnosis, is counted as a positive emotion sentence; a sentence expressing a negative emotion, for example the accident description in an insurance claim "the engine exploded due to a violent lateral impact of the vehicle", is counted as a negative emotion sentence; the remaining declarative sentences are mostly general emotion sentences. The emotion category with the most sentences is taken as the overall emotion category of the target text, where a positive emotion is recorded as 1, a negative emotion as -1, and a general emotion as 0.
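To make the attribute definitions concrete, here is a hedged sketch of computing two of them (person type and degree-word frequency) from the (word, tag) pairs produced by segmentation. The pronoun and degree-word lists below are illustrative placeholders, not taken from the application:

```python
# Sketch of two text attributes over (word, pos_tag) pairs.
# The word lists are illustrative placeholders.
FIRST_PERSON = {"I", "we", "me", "us"}
SECOND_PERSON = {"you"}
THIRD_PERSON = {"he", "she", "it", "they"}
DEGREE_WORDS = {"rather", "very", "extremely", "most"}

def person_type(tagged):
    """Person whose pronouns occur most often among tokens tagged 'r'."""
    counts = {
        "first": sum(1 for w, t in tagged if t == "r" and w in FIRST_PERSON),
        "second": sum(1 for w, t in tagged if t == "r" and w in SECOND_PERSON),
        "third": sum(1 for w, t in tagged if t == "r" and w in THIRD_PERSON),
    }
    return max(counts, key=counts.get)

def degree_word_frequency(tagged):
    """Number of intensity-degree words in the text."""
    return sum(1 for w, _ in tagged if w in DEGREE_WORDS)
```

With 16 first-person, 7 second-person, and 10 third-person pronouns, as in the worked example above, `person_type` returns "first".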
  • the preset historical text library stores a collection of historical texts that are of the same business type as the target text and whose text conclusions have been evaluated. For example: if the target text is a case description of an insurance policy to be underwritten, the historical text database is a collection of case texts of an insurance policy that has been underwritten before; if the target text is a case report that urgently needs to be diagnosed by a doctor, then The historical text database is a collection of case reports that have been previously diagnosed.
  • the preferred embodiment of the present application considers preset parts of speech, such as the first 4 of the above 9 parts of speech, namely nouns, verbs, adjectives, and adverbs (words of these four parts of speech appear frequently and carry most of the weight in determining the text, while the remaining 5 parts of speech have little effect on the text conclusion), and calculates the similarity between the target text and the historical text set.
  • the method for calculating the similarity includes: filtering out the words of the preset parts of speech (such as nouns, verbs, adjectives, and adverbs) in the target text, and generating a target part-of-speech statistics list for each part of speech.
  • the target part-of-speech statistics list includes: the word itself and the frequency of occurrence, that is, the word frequency.
  • similarly, this application filters out the words of the preset parts of speech in a given historical text, and generates a historical part-of-speech statistics list for each part of speech.
  • this application uses a similarity algorithm to calculate the similarity of the target part-of-speech statistics list and the historical part-of-speech statistics list one by one, that is, a two-way text LSTM single-vector similarity matching calculation is performed for noun vs. noun, verb vs. verb, adjective vs. adjective, and adverb vs. adverb. The calculation formula is: cos(u,w)_j = (Σ_{i=1}^{n} a_i·b_i) / (√(Σ_{i=1}^{n} a_i²) · √(Σ_{i=1}^{n} b_i²))
  • u represents the target text
  • w represents one of the historical texts
  • j represents the value range of the part of speech type, where j is 1 to 4, which means that the similarity matching calculation of 4 types of parts of speech is performed
  • n represents the number of words of a given part of speech (nouns, verbs, adjectives, or adverbs); the value of n is determined empirically. For example, if 10 nouns of u and w are matched for similarity, then n takes 10 when performing noun similarity matching
  • a_i and b_i denote the word frequency of the i-th word of the given part of speech in u and w, respectively.
  • four similarity calculation results between a historical text and the target text can thus be obtained, namely: the single-vector similarity matching value of nouns cos(u,w)_1, of verbs cos(u,w)_2, of adjectives cos(u,w)_3, and of adverbs cos(u,w)_4.
  • the similarity between the target text and the historical text, obtained by taking the average value, is: cos(u,w) = (cos(u,w)_1 + cos(u,w)_2 + cos(u,w)_3 + cos(u,w)_4) / 4
  • the present application screens out a preset number of historical texts as the similar text set, in descending order of similarity to the target text.
  • the preset number of historical texts in this application is 60.
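The per-part-of-speech similarity described above can be sketched as follows. Each text is assumed to be represented as one word-frequency dictionary per part of speech (n, v, a, d); this representation is an assumption made for illustration, not the application's own data structure:

```python
from math import sqrt

def cosine(freq_u, freq_w):
    """Single-vector cosine similarity of two word-frequency dicts."""
    words = set(freq_u) | set(freq_w)
    dot = sum(freq_u.get(x, 0) * freq_w.get(x, 0) for x in words)
    norm_u = sqrt(sum(f * f for f in freq_u.values()))
    norm_w = sqrt(sum(f * f for f in freq_w.values()))
    return dot / (norm_u * norm_w) if norm_u and norm_w else 0.0

def text_similarity(u_lists, w_lists):
    """Average of the four per-POS similarities: j runs over n, v, a, d."""
    return sum(cosine(u_lists[p], w_lists[p]) for p in ("n", "v", "a", "d")) / 4
```

To screen the 60 most similar historical texts, one would score every historical text with `text_similarity` against the target and keep the top 60.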
  • the text conclusion refers to the processing or evaluation result of a text according to the state described in the text. For example: if a text is the case description of an insurance policy to be underwritten, the text conclusion is the compensation amount determined from that case description; if the target text is a case report to be diagnosed by a doctor, the text conclusion is the diagnosis result of the case report; if the target text is a student's subjective question answer sheet, the text conclusion is the score given to the answer sheet.
  • the preferred embodiment of the present application obtains the text attributes of each of the 60 similar texts according to the method in S1 above, that is, obtains the 7 attributes of each similar text: text length, part-of-speech weight, word tendency state, person type, degree-word frequency, sentence pattern ratio, and overall emotion category. Further, the present application calculates the correlation coefficient between each of the 7 text attributes of a similar text and the text conclusion of that similar text, and selects a preset number of text attributes in descending order of correlation coefficient.
  • the preset number of text attributes in the preferred embodiment of the present application is 3, which may include, for example, overall emotion category, degree-word frequency, and text length. It should be understood that the three most relevant text attributes may differ for different types of similar text.
  • the calculation method of the correlation coefficient is the Jaccard similarity coefficient: Jaccard(O_A, O_B) = |O_A ∩ O_B| / |O_A ∪ O_B|, where:
  • O_A and O_B represent the text attribute and the text conclusion, respectively
  • |O_A| and |O_B| represent the number of words in the text attribute and in the text conclusion, respectively
  • Jaccard(O_A, O_B) represents the similarity coefficient of the text attribute and the text conclusion
  • O_A ∩ O_B represents the number of words common to the text attribute O_A and the text conclusion O_B
  • O_A ∪ O_B represents the total number of words in the union of the text attribute O_A and the text conclusion O_B.
  • the 60 similar texts are randomly and equally divided into 3 groups A, B, and C, wherein each group contains 20 similar texts.
  • this application alternately uses two of the three groups A, B, and C as the training set, with the text features of the training set as input and the corresponding text conclusions as output, to train the linear regression model, leaving the remaining group of similar texts as the verification set. That is: when A and B are used as the training set, C is the verification set; when B and C are used as the training set, A is the verification set; when A and C are used as the training set, B is the verification set. In this way, 3 sets of verification results are obtained, where each set of verification results consists of the deviation values of the 20 similar texts in the verification set.
  • For example, when A and B are used as the training set and C is the verification set, the verification result of this group is obtained by inputting the text features of the 20 similar texts in group C into the linear regression model trained on groups A and B, and comparing the 20 predicted text conclusion values with the 20 known text conclusion values of group C to obtain 20 deviation values.
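The rotation of training and verification sets can be sketched as below. The regression model itself is replaced by a trivial mean predictor so the sketch stays dependency-free; `fit` is a hypothetical hook where the actual linear regression would plug in:

```python
def mean_predictor(train):
    """Toy stand-in for fitting the linear regression: predicts the mean conclusion."""
    mean = sum(y for _, y in train) / len(train)
    return lambda features: mean

def cross_validate(groups, fit=mean_predictor):
    """For each group, train on the other two and return its mean absolute deviation.

    `groups` maps a group name (e.g. "A") to a list of (features, conclusion) pairs.
    """
    results = {}
    for held_out in groups:
        train = [s for name, texts in groups.items() if name != held_out for s in texts]
        model = fit(train)
        deviations = [abs(model(x) - y) for x, y in groups[held_out]]
        results[held_out] = sum(deviations) / len(deviations)
    return results
```

The verification set of the group with the smallest mean deviation would then serve as the comparative text set.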
  • this application selects the verification set of the group with the smallest deviation value as the comparative text set. For example, if the group with the smallest average deviation is the one where B and C are the training set and A is the verification set, then group A is selected, the text features of each of the 20 similar texts in group A are compared with the corresponding text attributes of the target text, and the text conclusion of the similar text with the smallest comparison value is used as the text conclusion of the target text, so as to complete the text conclusion recommendation for the target text.
  • suppose the text attributes corresponding to the target text are: overall emotion category 1, degree-word frequency 17, text length 400, and the text features of a similar text to be compared are: overall emotion category -1, degree-word frequency 28, text length 496.
  • the specific comparison method is to take the difference between the corresponding feature values of the two texts to obtain three differences, take the average of the three differences, and then take the absolute value of the average; this absolute value is the comparison result.
  • the comparison result in this example is |((1-(-1)) + (17-28) + (400-496)) / 3| = |-35| = 35.
  • the 20 similar texts are compared with the target text one by one to obtain 20 comparison result values, and the text conclusion corresponding to the similar text with the smallest comparison result value is taken as the text conclusion of the target text.
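The final comparison step can be sketched as follows; note that, under the method as described, the worked example (1, 17, 400) vs. (-1, 28, 496) evaluates to 35:

```python
def comparison_value(target_attrs, similar_feats):
    """Absolute value of the average feature-wise difference."""
    diffs = [t - s for t, s in zip(target_attrs, similar_feats)]
    return abs(sum(diffs) / len(diffs))

def recommend(target_attrs, comparative):
    """`comparative` maps each comparative text's conclusion to its feature
    tuple; return the conclusion of the closest comparative text."""
    return min(comparative, key=lambda c: comparison_value(target_attrs, comparative[c]))

value = comparison_value((1, 17, 400), (-1, 28, 496))  # |(2 - 11 - 96) / 3| = 35.0
```

Mapping conclusions to feature tuples here is a simplifying assumption; in practice each comparative text would carry both its features and its known conclusion.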
  • This application also provides a computer device.
  • Referring to FIG. 2, it is a schematic diagram of the internal structure of a computer device provided by an embodiment of this application.
  • the computer device 1 may be a PC (Personal Computer, personal computer), or a terminal device such as a smart phone, a tablet computer, or a portable computer, or a server.
  • the computer device 1 at least includes a memory 11, a processor 12, a communication bus 13, and a network interface 14.
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, and the like.
  • the memory 11 may be an internal storage unit of the computer device 1 in some embodiments, such as a hard disk of the computer device 1. In other embodiments, the memory 11 may also be an external storage device of the computer device 1, such as a plug-in hard disk, a smart media card (SMC), and a secure digital (SD) equipped on the computer device 1. Card, Flash Card, etc. Further, the memory 11 may also include both an internal storage unit of the computer device 1 and an external storage device.
  • the memory 11 can be used not only to store application software and various data installed in the computer device 1, such as the code of the text conclusion intelligent recommendation program 01, etc., but also to temporarily store data that has been output or will be output.
  • the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, used to run program code or process data stored in the memory 11, for example to execute the text conclusion intelligent recommendation program 01.
  • the communication bus 13 is used to realize the connection and communication between these components.
  • the network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is usually used to establish a communication connection between the computer device 1 and other electronic devices.
  • the computer device 1 may also include a user interface.
  • the user interface may include a display (Display) and an input unit such as a keyboard (Keyboard).
  • the optional user interface may also include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, etc.
  • the display can also be called a display screen or a display unit as appropriate, and is used to display the information processed in the computer device 1 and to display a visualized user interface.
  • Figure 2 only shows the computer device 1 with components 11-14 and the text conclusion intelligent recommendation program 01.
  • those skilled in the art can understand that the structure shown in Figure 2 does not constitute a limitation on the computer device 1, which may include fewer or more parts than shown, combine certain parts, or arrange the parts differently.
  • the memory 11 stores the text conclusion smart recommendation program 01; when the processor 12 executes the text conclusion smart recommendation program 01 stored in the memory 11, the following steps are implemented:
  • Step 1 Obtain the target text, perform a word segmentation operation on the target text, and obtain the text attributes of the target text based on the word segmentation operation.
  • the target text includes: a case report to be diagnosed, a subjective question answer sheet to be scored, an insurance policy to be approved for an insurance amount, and other texts that require an evaluation or conclusion on the text content.
  • the preferred embodiment of the present application performs word segmentation operations on the target text according to part-of-speech classification, and disassembles the target text into a collection with a single word as a unit.
  • the part-of-speech classification includes, but is not limited to: noun (n), verb (v), adjective (a), adverb (d), preposition (p), conjunction (c), pronoun (r), quantifier (q), and Punctuation (w).
  • the constitution is the fundamental law that regulates the realization and operation of state power, and adjusts the relationship between state power and civil rights. It usually stipulates the state system and the form of power organization.
  • constitution_n is_v norm_v country_n power_n realization_v form_n and_c operation_v method_n ,_w adjustment_v country_n power_n and_c citizen_n rights_n between_f relationship_n fundamental law_n ,_w it_r usually_d stipulates_v country_n system_n ,_w regime_n organization_n form_n ._w
  • the text attribute refers to a landmark feature that can describe the nature of the target text.
  • the preferred embodiment of the present application obtains the text attributes of the target text by traversing the target text after the word segmentation operation.
  • the text attributes described in this application include, but are not limited to: text length, part of speech weight, word tendency state, person type, degree of word frequency, sentence pattern ratio, and overall emotion category.
  • the text length refers to the number of words in the target text after the word segmentation operation, counted excluding the _w (punctuation mark) tokens.
  • the word tendency state refers to the emotional tendency of the words, obtained by analyzing the emotional words in the target text after the word segmentation operation.
  • the person type refers to counting the pronouns in the target text after the word segmentation operation and taking the person with the largest number of pronouns as the person type of the target text. For example: the pronouns in the target text after the word segmentation operation are counted, and there are 16 first-person pronouns, 7 second-person pronouns, and 10 third-person pronouns; the person type of the target text after the word segmentation operation is therefore the first person.
  • the degree-word frequency refers to the number of words expressing an intensity of degree, such as "rather", "very", "extremely", and "most".
  • the sentence pattern ratio refers to the proportion of each of the usual sentence pattern types (declarative, exclamatory, interrogative, and imperative sentences) among the sentences of the target text.
  • the overall emotion category refers to the emotion category with the largest number of sentences in the target text. A sentence expressing a positive emotion, for example "the lungs have no abnormal shadows and are functioning well" in a case diagnosis, is counted as a positive emotion sentence; a sentence expressing a negative emotion, for example the accident description in an insurance claim "the engine exploded due to a violent lateral impact of the vehicle", is counted as a negative emotion sentence; the remaining declarative sentences are mostly general emotion sentences. The emotion category with the most sentences is taken as the overall emotion category of the target text, where a positive emotion is recorded as 1, a negative emotion as -1, and a general emotion as 0.
  • Step 2 Obtain a historical text set from a preset historical text library, calculate the similarity between the target text after the word segmentation operation and the historical text set, and screen out a preset number of historical texts as the similar text set according to the similarity.
  • the preset historical text library stores a collection of historical texts that are of the same business type as the target text and whose text conclusions have been evaluated. For example: if the target text is a case description of an insurance policy to be underwritten, the historical text database is a collection of case texts of an insurance policy that has been underwritten before; if the target text is a case report that urgently needs to be diagnosed by a doctor, then The historical text database is a collection of case reports that have been previously diagnosed.
  • the preferred embodiment of the present application considers preset parts of speech, such as the first 4 of the above 9 parts of speech, namely nouns, verbs, adjectives, and adverbs (words of these four parts of speech appear frequently and carry most of the weight in determining the text, while the remaining 5 parts of speech have little effect on the text conclusion), and calculates the similarity between the target text and the historical text set.
  • the method for calculating the similarity includes: filtering out the words of the preset parts of speech (such as nouns, verbs, adjectives, and adverbs) in the target text, and generating a target part-of-speech statistics list for each part of speech.
  • the target part-of-speech statistics list includes: the word itself and the frequency of occurrence, that is, the word frequency.
  • similarly, this application filters out the words of the preset parts of speech in a given historical text, and generates a historical part-of-speech statistics list for each part of speech.
  • this application uses a similarity algorithm to calculate the similarity of the target part-of-speech statistics list and the historical part-of-speech statistics list one by one, that is, a two-way text LSTM single-vector similarity matching calculation is performed for noun vs. noun, verb vs. verb, adjective vs. adjective, and adverb vs. adverb. The calculation formula is: cos(u,w)_j = (Σ_{i=1}^{n} a_i·b_i) / (√(Σ_{i=1}^{n} a_i²) · √(Σ_{i=1}^{n} b_i²))
  • u represents the target text
  • w represents one of the historical texts
  • j represents the value range of the part of speech type, where j is 1 to 4, which means that the similarity matching calculation of 4 types of parts of speech is performed
  • n represents the number of words of a given part of speech (nouns, verbs, adjectives, or adverbs); the value of n is determined empirically. For example, if 10 nouns of u and w are matched for similarity, then n takes 10 when performing noun similarity matching.
  • a_i and b_i denote the word frequency of the i-th word of the given part of speech in u and w, respectively.
  • four similarity calculation results between a historical text and the target text can thus be obtained, namely: the single-vector similarity matching value of nouns cos(u,w)_1, of verbs cos(u,w)_2, of adjectives cos(u,w)_3, and of adverbs cos(u,w)_4.
  • the similarity between the target text and the historical text, obtained by taking the average value, is: cos(u,w) = (cos(u,w)_1 + cos(u,w)_2 + cos(u,w)_3 + cos(u,w)_4) / 4
  • the present application screens out a preset number of historical texts as the similar text set, in descending order of similarity to the target text.
  • the preset number of historical texts in this application is 60.
  • Step 3 Obtain the text attributes of each similar text in the similar text set, calculate the correlation coefficient between each text attribute of each similar text and the known text conclusion of that similar text, and select a preset number of text attributes as text features according to the correlation coefficients.
  • the text conclusion refers to the processing or evaluation result of a text according to the state described in the text. For example: if a text is the case description of an insurance policy to be underwritten, the text conclusion is the compensation amount determined from that case description; if the target text is a case report to be diagnosed by a doctor, the text conclusion is the diagnosis result of the case report; if the target text is a student's subjective question answer sheet, the text conclusion is the score given to the answer sheet.
  • the preferred embodiment of the present application obtains the text attributes of each of the 60 similar texts according to the method in step one above, that is, obtains the 7 attributes of each similar text: text length, part-of-speech weight, word tendency state, person type, degree-word frequency, sentence pattern ratio, and overall emotion category. Further, the present application calculates the correlation coefficient between each of the 7 text attributes of a similar text and the text conclusion of that similar text, and selects a preset number of text attributes in descending order of correlation coefficient.
  • the preset number of text attributes in the preferred embodiment of the present application are 3 text attributes, which may include, for example, overall emotion category, degree of word frequency, and text length. It should be understood that the three most relevant text attributes of different types of similar text may be different.
  • The correlation coefficient is calculated as the Jaccard coefficient:
  • Jaccard(O_A, O_B) = |O_A ∩ O_B| / |O_A ∪ O_B| = |O_A ∩ O_B| / (|O_A| + |O_B| - |O_A ∩ O_B|)
  • where O_A and O_B represent the text attribute and the text conclusion respectively; |O_A| and |O_B| represent the numbers of words in the text attribute and the text conclusion respectively; Jaccard(O_A, O_B) represents the similarity coefficient of the text attribute and the text conclusion; O_A ∩ O_B represents the number of words shared by the text attribute O_A and the text conclusion O_B; and O_A ∪ O_B represents the total number of words after merging the shared words of the text attribute O_A and the text conclusion O_B.
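Treating the text attribute and the text conclusion as sets of words, the Jaccard coefficient above can be sketched as follows (an illustrative sketch; the word-set representation and the function name are assumptions of this example):

```python
def jaccard(words_a, words_b):
    """Jaccard coefficient |A ∩ B| / |A ∪ B| over two word collections.
    Duplicates are merged by converting to sets, matching the
    'merge the shared words' wording above."""
    a, b = set(words_a), set(words_b)
    union = a | b
    if not union:
        return 0.0
    return len(a & b) / len(union)
```

Attributes whose Jaccard score against the known conclusion ranks highest would then be kept as the text features.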
  • Step 4: Train a linear regression model with the similar text set, and verify the output values of the linear regression model against the known text conclusions of the similar text set to obtain the deviation values between the known conclusions and the output values. A preset number of similar texts are screened out as a comparison text set according to the deviation values; the text features of each comparison text in the comparison text set are numerically compared with the corresponding text attributes of the target text to obtain difference values, and the text conclusion of the comparison text with the smallest difference value is selected as the text conclusion of the target text, thereby completing the text conclusion recommendation for the target text.
  • the 60 similar texts are randomly and equally divided into 3 groups A, B, and C, wherein each group contains 20 similar texts.
  • This application uses a linear regression model, alternately taking two of the three groups A, B, and C as the training set, with the text features of the training set as input and the corresponding text conclusions as output, and leaving the remaining group as the validation set: when A and B form the training set, C is the validation set; when B and C form the training set, A is the validation set; and when A and C form the training set, B is the validation set. This yields 3 sets of validation results, where each set consists of the deviation values of the 20 similar texts in that validation set.
  • For example, with A and B as the training set and C as the validation set, the validation result of this group is the 20 deviation values obtained by comparing the 20 known text conclusion values of the 20 similar texts in group C with the 20 predicted conclusion values produced by feeding the text features of those 20 texts into the linear regression model trained on groups A and B.
  • This application selects the validation set of the group with the smallest deviation value. For example, if the group with the smallest average deviation is the one with B and C as the training set and A as the validation set, group A is selected, and the text features of each of the 20 similar texts in group A are compared one by one with the corresponding text attributes of the target text.
  • The text conclusion of the similar text with the smallest comparison result value is taken as the text conclusion of the target text, thereby completing the text conclusion recommendation for the target text.
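The rotation of groups A, B, and C through training and validation roles can be sketched with an ordinary least-squares fit standing in for the linear regression model. Using NumPy's `lstsq` and the random equal split below are assumptions of this sketch, not the application's stated implementation; only the grouping and deviation logic follows the description above.

```python
import numpy as np

def pick_comparison_group(X, y, n_groups=3, seed=0):
    """Rotate each group out as the validation set for a linear
    regression fit on the remaining groups; return the indices of
    the validation group with the smallest mean absolute deviation
    between known conclusions and predictions, plus that deviation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))           # random, equal split
    groups = np.array_split(idx, n_groups)
    best_idx, best_dev = None, float("inf")
    for k in range(n_groups):
        val = groups[k]
        train = np.concatenate([groups[j] for j in range(n_groups) if j != k])
        A = np.c_[X[train], np.ones(len(train))]   # features + intercept
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.c_[X[val], np.ones(len(val))] @ coef
        dev = float(np.mean(np.abs(pred - y[val])))
        if dev < best_dev:
            best_idx, best_dev = val, dev
    return best_idx, best_dev
```

With 60 similar texts the three groups each hold 20 texts, and the returned indices play the role of the selected comparison text set.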
  • For example, the text attributes corresponding to the target text are: overall emotion category 1, frequency of degree words 17, and text length 400; the text features of the similar text to be compared are: overall emotion category -1, frequency of degree words 28, and text length 496.
  • The specific comparison method is to take the differences between the corresponding feature values of the two texts to obtain three differences, average the three differences, and take the absolute value of the average, which is the comparison result value.
  • The comparison result value in the example is: |[(-1-1)+(28-17)+(496-400)]/3| = 35.
  • The 20 similar texts are compared with the target text one by one, giving 20 comparison result values, and the text conclusion of the similar text with the smallest of these values is taken as the text conclusion of the target text.
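The comparison value from the worked example (the absolute value of the mean of the element-wise differences) and the final selection can be sketched as follows; the function names and the `(features, conclusion)` pairing are illustrative assumptions.

```python
def comparison_value(target_attrs, text_feats):
    """Absolute value of the mean element-wise difference between a
    candidate's selected features and the target's attribute values."""
    diffs = [f - t for t, f in zip(target_attrs, text_feats)]
    return abs(sum(diffs) / len(diffs))

def best_conclusion(target_attrs, candidates):
    """candidates: list of (features, conclusion) pairs; return the
    conclusion of the candidate with the smallest comparison value."""
    return min(candidates,
               key=lambda c: comparison_value(target_attrs, c[0]))[1]
```

Running `comparison_value((1, 17, 400), (-1, 28, 496))` reproduces the example's arithmetic, and `best_conclusion` picks the recommended conclusion among the 20 candidates.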
  • The text conclusion intelligent recommendation device 100 can be divided into a word segmentation module 10, a similarity calculation module 20, a correlation coefficient calculation module 30, and a text conclusion recommendation module 40. Exemplarily:
  • the word segmentation module 10 is configured to obtain a target text, perform a word segmentation operation on the target text, and obtain a text attribute of the target text based on the word segmentation operation.
  • The similarity calculation module 20 is configured to: obtain a historical text set from a preset historical text library, calculate the similarity between the word-segmented target text and the historical text set, and screen out a preset number of historical texts as the similar text set according to the similarity.
  • The correlation coefficient calculation module 30 is configured to: obtain the text attributes of each similar text in the similar text set, calculate the correlation coefficient between each text attribute of a similar text and the known text conclusion of that similar text, and select a preset number of text attributes as text features in descending order of correlation coefficient.
  • The text conclusion recommendation module 40 is configured to: train a linear regression model with the similar text set, and verify the output values of the linear regression model against the known text conclusions of the similar text set to obtain the deviation values between the known conclusions and the output values; screen out a preset number of similar texts as a comparison text set according to the deviation values; numerically compare the text features of each comparison text in the comparison text set with the corresponding text attributes of the target text to obtain difference values; and select the text conclusion of the comparison text with the smallest difference value as the text conclusion of the target text, thereby completing the text conclusion recommendation for the target text.
  • the functions or operation steps implemented by the word segmentation module 10, the similarity calculation module 20, the correlation coefficient calculation module 30, and the text conclusion recommendation module 40 when executed are substantially the same as those in the foregoing embodiment, and will not be repeated here.
  • The embodiment of the present application also provides a computer-readable storage medium, which may be non-volatile or volatile. The computer-readable storage medium stores a text conclusion intelligent recommendation program, which can be executed by one or more processors to implement the following operations:


Abstract

A text conclusion intelligent recommendation method, apparatus, computer device, and computer-readable storage medium, the method comprising: obtaining a target text and a historical text set, and performing a word segmentation operation on the target text to obtain the text attributes of the target text; performing similarity calculation and then correlation coefficient calculation on the historical text set to obtain the text features of the historical text set; screening out a preset number of historical texts from the historical text set as a comparison text set, numerically calculating the differences between the text features of each comparison text in the comparison text set and the corresponding text attributes of the target text, and selecting the text feature with the smallest attribute difference from the target text as the text conclusion of the target text, thereby completing the text conclusion recommendation for the target text. The above method realizes intelligent recommendation of text conclusions.

Description

文本结论智能推荐方法、装置、计算机设备及计算机可读存储介质
本申请要求于2020年1月16日提交中国专利局、申请号为CN202010051191.X、发明名称为“文本结论智能推荐方法、装置及计算机可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及人工智能技术领域,尤其涉及一种智能化的文本结论智能推荐方法、装置、计算机设备及计算机可读存储介质。
背景技术
目前在传统人工核保作业中,需要核保作业人员精准定位客户风险,然后结合风险收集客户相关信息确定风险程度,最后根据核保规则,对保单做出合适的核保决定,判断客户的投保单是否能够允予承保。发明人意识到这项工作要能做到精准无误,不仅需要丰富的保险知识、医学知识以及财务知识,同时需要具备丰富的案件评价经验。因此对于一名刚入行的新人来说,免不了需要资深从业人士花费大量时间来指导,这就耗费了不少人力成本,亟需一种智能的辅导方式,来解放这部分指导人力。
发明内容
本申请提供一种文本结论智能推荐方法、装置及计算机可读存储介质,其主要目的在于根据历史数据对文本结论的智能判断,解放人力操作。
为实现上述目的,本申请提供的一种文本结论智能推荐方法,包括:
获取目标文本,对所述目标文本进行分词操作,基于所述分词操作获取所述目标文本的文本属性;
从预设历史文本库中获取历史文本集,计算基于分词操作后的所述目标文本与所述历史文本集的相似度,根据所述相似度筛选出预设数量的历史文本集作为相似文本集;
获取所述相似文本集中每一份相似文本的文本属性,计算所述每一份相似文本的文本属性与该相似文本的已知的文本结论之间的相关系数,根据所述相关系数选取预设数量的文本属性作为文本特征;
利用所述相似文本集训练线性回归模型,并利用所述相似文本集的已知的文本结论对所述线性回归模型的输出值进行验证,得到所述已知的文本结论与所述输出值之间的偏差值,根据所述偏差值筛选出预设数量的相似文本作为对比文本集;
将所述对比文本集中每一份对比文本的文本特征与所述目标文本对应的文本属性进行数值计算得到差值,选取与所述目标文本对应的文本属性差值最小的文本特征作为所述目标文本的文本结论,从而完成所述目标文本的文本结论推荐。
为实现上述目的,本申请还提供的一种文本结论智能推荐装置,包括:
分词模块,用于获取目标文本,对所述目标文本进行分词操作,基于所述分词操作获取所述目标文本的文本属性;
相似度计算模块,用于从预设历史文本库中获取历史文本集,计算基于分词操作后的所述目标文本与所述历史文本集的相似度,根据所述相似度筛选出预设数量的历史文本集作为相似文本集;
相关系数计算模块,用于获取所述相似文本集中每一份相似文本的文本属性,计算所述每一份相似文本的文本属性与该相似文本的已知的文本结论之间的相关系数,根据所述相关系数选取预设数量的文本属性作为文本特征;
文本结论推荐模块,用于利用所述相似文本集训练线性回归模型,并利用所述相似文本集 的已知的文本结论对所述线性回归模型的输出值进行验证,得到所述已知的文本结论与所述输出值之间的偏差值,根据所述偏差值筛选出预设数量的相似文本作为对比文本集;将所述对比文本集中每一份对比文本的文本特征与所述目标文本对应的文本属性进行数值计算得到差值,选取与所述目标文本对应的文本属性差值最小的文本特征作为所述目标文本的文本结论,从而完成所述目标文本的文本结论推荐。
此外,为实现上述目的,本申请还提供一种计算机设备,该计算机设备包括存储器和处理器,所述存储器中存储有可在所述处理器上运行的文本结论智能推荐程序,所述文本结论智能推荐程序被所述处理器执行时实现如下步骤:
获取目标文本,对所述目标文本进行分词操作,基于所述分词操作获取所述目标文本的文本属性;
从预设历史文本库中获取历史文本集,计算基于分词操作后的所述目标文本与所述历史文本集的相似度,根据所述相似度筛选出预设数量的历史文本集作为相似文本集;
获取所述相似文本集中每一份相似文本的文本属性,计算所述每一份相似文本的文本属性与该相似文本的已知的文本结论之间的相关系数,根据所述相关系数选取预设数量的文本属性作为文本特征;
利用所述相似文本集训练线性回归模型,并利用所述相似文本集的已知的文本结论对所述线性回归模型的输出值进行验证,得到所述已知的文本结论与所述输出值之间的偏差值,根据所述偏差值筛选出预设数量的相似文本作为对比文本集;
将所述对比文本集中每一份对比文本的文本特征与所述目标文本对应的文本属性进行数值计算得到差值,选取与所述目标文本对应的文本属性差值最小的文本特征作为所述目标文本的文本结论,从而完成所述目标文本的文本结论推荐。
此外,为实现上述目的,本申请还提供一种计算机可读存储介质,所述计算机可读存储介质上存储有文本结论智能推荐程序,所述文本结论智能推荐程序被一个或者多个处理器执行时实现如下步骤:
获取目标文本,对所述目标文本进行分词操作,基于所述分词操作获取所述目标文本的文本属性;
从预设历史文本库中获取历史文本集,计算基于分词操作后的所述目标文本与所述历史文本集的相似度,根据所述相似度筛选出预设数量的历史文本集作为相似文本集;
获取所述相似文本集中每一份相似文本的文本属性,计算所述每一份相似文本的文本属性与该相似文本的已知的文本结论之间的相关系数,根据所述相关系数选取预设数量的文本属性作为文本特征;
利用所述相似文本集训练线性回归模型,并利用所述相似文本集的已知的文本结论对所述线性回归模型的输出值进行验证,得到所述已知的文本结论与所述输出值之间的偏差值,根据所述偏差值筛选出预设数量的相似文本作为对比文本集;
将所述对比文本集中每一份对比文本的文本特征与所述目标文本对应的文本属性进行数值计算得到差值,选取与所述目标文本对应的文本属性差值最小的文本特征作为所述目标文本的文本结论,从而完成所述目标文本的文本结论推荐。
本申请提出的文本结论智能推荐方法、装置、计算机设备及计算机可读存储介质,在用户对目标文本进行结论判断时,获取历史文本集,通过相似度判断方法从所述历史文本集中筛选出所述目标文本的相似文本集,再根据所述相似文本集的文本属性和已知的文本结论,通过训练线性回归模型找到所述目标文本的适合的结论,从而不需要人为判断,释放了人力操作。
附图说明
图1为本申请一实施例提供的文本结论智能推荐方法的流程示意图;
图2为本申请一实施例提供的计算机设备的内部结构示意图;
图3为本申请一实施例提供的文本结论智能推荐装置的模块示意图。
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
具体实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请提供一种文本结论智能推荐方法。参照图1所示,为本申请一实施例提供的文本结论智能推荐方法的流程示意图。该方法可以由一个计算机设备执行,该计算机设备可以由软件和/或硬件实现。
在本实施例中,文本结论智能推荐方法包括:
S1、获取目标文本,对所述目标文本进行分词操作,基于所述分词操作获取所述目标文本的文本属性。
本申请较佳实施例中,所述目标文本包括:待诊断的病例报告、待评分的主观题答卷、待核定保险金额的保单等需要通过对文本内容做出评价或者结论的文本。
本申请较佳实施例按照词性分类对所述目标文本进行分词操作,将所述目标文本拆解为以单个词为单位的集合。所述词性分类包括、但不限于:名词(n)、动词(v)、形容词(a)、副词(d)、介词(p)、连词(c)、代词(r)、量词(q)以及标点符号(w)。例如:对如下语句进行分词操作:
宪法是规范国家权力的实现形式以及运行方式、调整国家权力和公民权利之间关系的根本法,它通常规定国家体制、政权组织形式。
得到的结果为:宪法_n是_v规范_v国家_n权力_n实现_v形式_n以及_c运行_v方式_n、_w调整_v国家_n权力_n和_c公民_n权利_n之间_f关系_n根本法_n,_w它_r通常_d规定_v国家_n体制_n、_w政权_n组织_n形式_n。_w
所述文本属性是指能够描述所述目标文本性质的标志性特征。本申请较佳实施例通过遍历搜索分词操作后的所述目标文本得到所述目标文本的文本属性。
较佳地,本申请所述文本属性包括、但不限于:文本长度、词性比重、词语倾向性状态、人称类型、程度用词频度、句式比例以及整体情感类别。
所述文本长度是指对分词操作后的所述目标文本除去_w(标点符号)后进行统计得到的词的个数。所述词性比重是指所述9种词性分类在总词数中所占的比例,例如:某目标文本经分词后的名词数量是20,总词数是352,则该文本的名词比重为20/352=5.7%。所述词语倾向性状态是指对所述分词操作后的所述目标文本中带有感情色彩的词语进行分析,本申请较佳实施例中,所述感情色彩为3类,分别是:褒义词、贬义词和中性词,得到3类词的比例,即为所述词语倾向性状态,例如:某目标文本经分词后的带有感情色彩的词汇:褒义词、贬义词和中性词数量分别为18,6,45,则所述词语倾向性状态为18:6:45=6:2:15。所述人称类型是指:对所述分词操作后的所述目标文本中的代词进行统计,将数量最多的那类代词作为所述目标文本的人称类型,例如:对所述分词操作后的所述目标文本中的代词进行统计,发现:第一人称代词为16个,第二人称代词为7个,第三人称代词为10个,则所述分词操作后的所述目标文本的人称类型为第一人称。所述程度用词频度是指表达出强烈程度的词语的个数,所述表达出强烈程度的词语有:“很”、“非常”、“极其”、“最”。所述句式比例是指:通常句式类型是陈述句、感叹句、疑问句和祈使句,统计出各个类型句式数量的比例即为所述句式比例,所述整体情感类别是指,对文本中的句子进行考量,表达的是正向情感的,例如,病例诊断书上的“肺部未见异常阴影,功能良好。”这句话可算作正向情感语句;表达的是负向情感的,例如,保险中的事故描述“发动机因为车辆侧向猛烈撞击,发生爆炸。”这句话可算作负向情感语句,其余的陈述语句多为一般情感语句,将数量最多的那一类的情感语句作为所述目标文本的整体情感 类别,正向情感记为1,负向情感记为-1,一般情感记为0。
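The word-segmentation-based attributes described above (text length excluding punctuation, part-of-speech proportions, and so on) can be sketched over (word, tag) pairs as follows. The tag codes follow the n/v/a/d/p/c/r/q/w scheme in the text; the function names and the input format are assumptions of this illustration.

```python
def pos_proportions(tagged):
    """Share of each part-of-speech tag in the total token count,
    e.g. 20 nouns out of 352 tokens gives {'n': 0.057, ...}."""
    counts = {}
    for _, tag in tagged:
        counts[tag] = counts.get(tag, 0) + 1
    total = sum(counts.values())
    return {tag: c / total for tag, c in counts.items()}

def text_length(tagged):
    """Word count of the segmented text after removing punctuation
    tokens, which carry the tag 'w'."""
    return sum(1 for _, tag in tagged if tag != "w")
```

The remaining attributes (word tendency state, person type, frequency of degree words, sentence pattern ratio, overall emotion category) would be computed by similar passes over the tagged token list.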
S2、从预设历史文本库中获取历史文本集,计算基于分词操作后的所述目标文本与所述历史文本集的相似度,根据所述相似度筛选出预设数量的历史文本集作为相似文本集。
本申请较佳实施例中,所述预设历史文本库存储有与所述目标文本的业务类型相同的,并且已经做出评价的文本结论的历史文本的集合。例如:若所述目标文本是待核保的保单的案件描述,则所述历史文本库是之前已经核保完毕的保单的案件文本的集合;若目标文本是亟待医生进行诊断的病例报告,则所述历史文本库是之前已经确诊完毕的病例报告的集合。
进一步地,本申请较佳实施例根据预设种类的词性,如上述9种词性分类中的前4种词性,即:名词、动词、形容词、副词进行考量(因为所述4种词性的词语出现频率高对文本的决定比重也高,剩余5种词性对文本结论的影响作用较小),计算所述目标文本与所述历史文本集的相似度。
本申请较佳实施例中,所述相似度的计算方法包括:将所述目标文本中具有所述预设词性(如名词、动词、形容词、副词)的词语筛选出来,每种词性的词语对应生成一个目标词性统计列表。所述目标词性统计列表包含:词语本身以及出现频次,即词频。相同地,本申请将某一历史文本中的所述预设词性的词语筛选出来,每种词性的词语对应生成一个历史词性统计列表。进一步地,本申请采用相似度算法将所述目标词性统计列表和所述历史词性统计列表一一进行相似度计算,即名词对名词、动词对动词、形容词对形容词、副词对副词分别进行双向文本LSTM单向量相似度匹配计算,计算公式为:
$$\cos(u,w)_j=\frac{\sum_{i=1}^{n}a_i b_i}{\sqrt{\sum_{i=1}^{n}a_i^{2}}\cdot\sqrt{\sum_{i=1}^{n}b_i^{2}}},\quad j=1,2,3,4$$
其中，u表示目标文本，w表示其中一个历史文本，j表示词性种类的取值范围，这里j取值1到4，表示进行4种词性的相似度匹配计算，n表示具有某种词性(名词、动词、形容词、副词)词语的个数，n值依据经验值来确定，例如要对u、w中的10个名词进行相似度匹配，那么在进行名词相似度匹配时n取10，要对u、w中的5个动词进行相似度匹配，那么在进行动词相似度匹配时n取5，a_i、b_i分别表示u、w中某种词性的词语的词频。
因此，根据上述方法，针对一份历史文本同所述目标文本进行相似度计算的结果可以得到4个，即：名词的单向量相似度匹配计算值cos(u,w)_1、动词的单向量相似度匹配计算值cos(u,w)_2、形容词的单向量相似度匹配计算值cos(u,w)_3、副词的单向量相似度匹配计算值cos(u,w)_4。通过取平均值求得所述目标文本和该份历史文本相似度为：
$$\text{sim}(u,w)=\frac{1}{4}\sum_{j=1}^{4}\cos(u,w)_j$$
进一步地,本申请根据与所述目标文本之间的相似度从大到小的顺序筛选出预设数量的历史文本集作为相似文本集。较佳地,本申请中所述预设数量的历史文本集为60份。
S3、获取所述相似文本集中每一份相似文本的文本属性,计算所述每一份相似文本的文本属性与该相似文本的已知的文本结论之间的相关系数,根据所述相关系数选取预设数量的文本属性作为文本特征。
所述文本结论是指按照文本描述的状态对文本进行判断所得出的处理结果或者评判结果,例如:如果某文本是待核保的保单的案件描述,那么所述文本结论是根据保单的案件描述得出的赔偿金额;如果目标文本是医生进行诊断的病例报告,那么所述文本结论就是根据病例报告的诊断结果;如果目标文本是学生的主观题答卷,那么所述文本结论就是 根据答卷所得到的得分分值。
本申请较佳实施例按照上述S1中的方法获取所述60份相似文本中每一份相似文本的所述文本属性,即获取每一份所述相似文本的7个属性:文本长度、词性比重、词语倾向性状态、人称类型、程度用词频度、句式比例、整体情感类别。进一步地,本申请计算每一份所述相似文本7文本属性和该相似文本的所述文本结论的相关系数,并根据所述相关系数从高到低的顺序选取预设数量的文本属性。较佳地,本申请较佳实施例中所述预设数量的文本属性为3个文本属性,例如,可以包括,整体情感类别、程度用词频度、文本长度。应该了解,不同类型的相似文本相关性最高的3个文本属性可能不同。
其中,所述相关系数的计算方法包括:
$$\text{Jaccard}(O_A,O_B)=\frac{|O_A\cap O_B|}{|O_A\cup O_B|}$$
$$=\frac{|O_A\cap O_B|}{|O_A|+|O_B|-|O_A\cap O_B|}$$
其中，O_A和O_B分别表示文本属性和文本结论，|O_A|和|O_B|分别表示文本属性和文本结论内词语的个数，Jaccard(O_A,O_B)表示文本属性和文本结论的相似系数，O_A∩O_B表示文本属性O_A和文本结论O_B中相同词语的个数，O_A∪O_B表示将文本属性O_A和文本结论O_B中相同词语进行合并后所有词语的总个数。
S4、利用所述相似文本集训练线性回归模型,并利用所述相似文本集的已知的文本结论对所述线性回归模型的输出值进行验证,得到所述已知的文本结论与所述输出值之间的偏差值,根据所述偏差值筛选出预设数量的相似文本作为对比文本集,将所述对比文本集中每一份对比文本的文本特征与所述目标文本对应的文本属性进行数值计算得到差值,选取与所述目标文本对应的文本属性差值最小的文本特征作为所述目标文本的文本结论,从而完成所述目标文本的文本结论推荐。
本申请较佳实施例对所述60份相似文本进行随机均分为A、B、C共3组,其中,每一组中包含20份所述相似文本。进一步地,本申请利用线性回归模型,分别交替将A、B、C,3组相似文本中的其中2组作为训练集,将所述训练集的文本特征作为输入,对应的文本结论作为输出来训练所述线性回归模型,剩下1组相似文本作为验证集,即:当A、B作为训练集时,C就作为验证集,当B、C作为训练集时,A就作为验证集,当A、C作为训练集时,B就作为验证集,这样得到3组验证结果,其中,每1组验证结果是该组验证集中20份所述相似文本验证的偏差值。例如A、B作为训练集,C作为验证集这一组,该组的验证结果为C组中20份相似文本的20个文本结论值和将C组20份相似文本的文本特征输入经过A、B组训练过的线性回归模型所得出的20个文本结论的预测值相比较得到的20个偏差值。
进一步地,本申请将所述偏差值最小的那一组的验证集筛选出来,例如所述平均值最小的为B、C作为训练集,A为验证集的这一组,则将A组筛选出来,并将A组中所包含的20份相似文本的每一份相似文本的所述文本特征逐一同所述目标文本对应的文本属性进行比较,将比较结果值中数值最小的对应相似文本的文本结论作为所述目标文本的文本结论,从而完成所述目标文本的文本结论推荐。
例如：所述目标文本对应的文本属性为：整体情感类别为1、程度用词频度为17、文本长度为400，所述要进行比较的相似文本的文本特征为：整体情感类别为-1、程度用词频度为28、文本长度为496。那么，进行比较的具体方法为将两个文本对应的文本特征进行做差，得到三个差值，对三个差值取平均值，求平均值的绝对值，即为该次比较结果值。例子中的比较结果值为：|[(-1-1)+(28-17)+(496-400)]/3|=35。20份所述相似文本逐一和所述目标文本进行比较就有20个比较结果值，将与所述目标文本比较结果值中数值最小的对应相似文本的文本结论作为所述目标文本的文本结论。
本申请还提供一种计算机设备。参照图2所示,为本申请一实施例提供的计算机设备的内部结构示意图。
在本实施例中,所述计算机设备1可以是PC(Personal Computer,个人电脑),或者是智能手机、平板电脑、便携计算机等终端设备,也可以是一种服务器等。该计算机设备1至少包括存储器11、处理器12,通信总线13,以及网络接口14。
其中,存储器11至少包括一种类型的可读存储介质,所述可读存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等)、磁性存储器、磁盘、光盘等。存储器11在一些实施例中可以是计算机设备1的内部存储单元,例如该计算机设备1的硬盘。存储器11在另一些实施例中也可以是计算机设备1的外部存储设备,例如计算机设备1上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,存储器11还可以既包括计算机设备1的内部存储单元也包括外部存储设备。存储器11不仅可以用于存储安装于计算机设备1的应用软件及各类数据,例如文本结论智能推荐程序01的代码等,还可以用于暂时地存储已经输出或者将要输出的数据。
处理器12在一些实施例中可以是一中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器或其他数据处理芯片,用于运行存储器11中存储的程序代码或处理数据,例如执行文本结论智能推荐程序01等。
通信总线13用于实现这些组件之间的连接通信。
网络接口14可选的可以包括标准的有线接口、无线接口(如WI-FI接口),通常用于在该计算机设备1与其他电子设备之间建立通信连接。
可选地,该计算机设备1还可以包括用户接口,用户接口可以包括显示器(Display)、输入单元比如键盘(Keyboard),可选的用户接口还可以包括标准的有线接口、无线接口。可选地,在一些实施例中,显示器可以是LED显示器、液晶显示器、触控式液晶显示器以及OLED(Organic Light-Emitting Diode,有机发光二极管)触摸器等。其中,显示器也可以适当的称为显示屏或显示单元,用于显示在计算机设备1中处理的信息以及用于显示可视化的用户界面。
图2仅示出了具有组件11-14以及文本结论智能推荐程序01的计算机设备1,本领域技术人员可以理解的是,图1示出的结构并不构成对计算机设备1的限定,可以包括比图示更少或者更多的部件,或者组合某些部件,或者不同的部件布置。
在图2所示的计算机设备1实施例中,存储器11中存储有文本结论智能推荐程序01;处理器12执行存储器11中存储的文本结论智能推荐程序01时实现如下步骤:
步骤一、获取目标文本,对所述目标文本进行分词操作,基于所述分词操作获取所述目标文本的文本属性。
本申请较佳实施例中,所述目标文本包括:待诊断的病例报告、待评分的主观题答卷、待核定保险金额的保单等需要通过对文本内容做出评价或者结论的文本。
本申请较佳实施例按照词性分类对所述目标文本进行分词操作,将所述目标文本拆解为以单个词为单位的集合。所述词性分类包括、但不限于:名词(n)、动词(v)、形容词(a)、副词(d)、介词(p)、连词(c)、代词(r)、量词(q)以及标点符号(w)。例如:对如下语句进行分词操作:
宪法是规范国家权力的实现形式以及运行方式、调整国家权力和公民权利之间关系的根本法,它通常规定国家体制、政权组织形式。
得到的结果为:宪法_n是_v规范_v国家_n权力_n实现_v形式_n以及_c运行_v方式_n、_w调整_v国家_n权力_n和_c公民_n权利_n之间_f关系_n根本法_n,_w它_r通常_d规定_v国家_n体制_n、_w政权_n组织_n形式_n。_w
所述文本属性是指能够描述所述目标文本性质的标志性特征。本申请较佳实施例通过遍历搜索分词操作后的所述目标文本得到所述目标文本的文本属性。
较佳地,本申请所述文本属性包括、但不限于:文本长度、词性比重、词语倾向性状态、人称类型、程度用词频度、句式比例以及整体情感类别。
所述文本长度是指对分词操作后的所述目标文本除去_w(标点符号)后进行统计得到的词的个数。所述词性比重是指所述9种词性分类在总词数中所占的比例,例如:某目标文本经分词后的名词数量是20,总词数是352,则该文本的名词比重为20/352=5.7%。所述词语倾向性状态是指对所述分词操作后的所述目标文本中带有感情色彩的词语进行分析,本申请较佳实施例中,所述感情色彩为3类,分别是:褒义词、贬义词和中性词,得到3类词的比例,即为所述词语倾向性状态,例如:某目标文本经分词后的带有感情色彩的词汇:褒义词、贬义词和中性词数量分别为18,6,45,则所述词语倾向性状态为18:6:45=6:2:15。所述人称类型是指:对所述分词操作后的所述目标文本中的代词进行统计,将数量最多的那类代词作为所述目标文本的人称类型,例如:对所述分词操作后的所述目标文本中的代词进行统计,发现:第一人称代词为16个,第二人称代词为7个,第三人称代词为10个,则所述分词操作后的所述目标文本的人称类型为第一人称。所述程度用词频度是指表达出强烈程度的词语的个数,所述表达出强烈程度的词语有:“很”、“非常”、“极其”、“最”。所述句式比例是指:通常句式类型是陈述句、感叹句、疑问句和祈使句,统计出各个类型句式数量的比例即为所述句式比例,所述整体情感类别是指,对文本中的句子进行考量,表达的是正向情感的,例如,病例诊断书上的“肺部未见异常阴影,功能良好。”这句话可算作正向情感语句;表达的是负向情感的,例如,保险中的事故描述“发动机因为车辆侧向猛烈撞击,发生爆炸。”这句话可算作负向情感语句,其余的陈述语句多为一般情感语句,将数量最多的那一类的情感语句作为所述目标文本的整体情感类别,正向情感记为1,负向情感记为-1,一般情感记为0。
步骤二、从预设历史文本库中获取历史文本集,计算基于分词操作后的所述目标文本与所述历史文本集的相似度,根据所述相似度筛选出预设数量的历史文本集作为相似文本集。
本申请较佳实施例中,所述预设历史文本库存储有与所述目标文本的业务类型相同的,并且已经做出评价的文本结论的历史文本的集合。例如:若所述目标文本是待核保的保单的案件描述,则所述历史文本库是之前已经核保完毕的保单的案件文本的集合;若目标文本是亟待医生进行诊断的病例报告,则所述历史文本库是之前已经确诊完毕的病例报告的集合。
进一步地,本申请较佳实施例根据预设种类的词性,如上述9种词性分类中的前4种词性,即:名词、动词、形容词、副词进行考量(因为所述4种词性的词语出现频率高对文本的决定比重也高,剩余5种词性对文本结论的影响作用较小),计算所述目标文本与所述历史文本集的相似度。
本申请较佳实施例中,所述相似度的计算方法包括:将所述目标文本中具有所述预设词性(如名词、动词、形容词、副词)的词语筛选出来,每种词性的词语对应生成一个目标词性统计列表。所述目标词性统计列表包含:词语本身以及出现频次,即词频。相同地,本申请将某一历史文本中的所述预设词性的词语筛选出来,每种词性的词语对应生成一个历史词性统计列表。进一步地,本申请采用相似度算法将所述目标词性统计列表和所述历史词性统计列表一一进行相似度计算,即名词对名词、动词对动词、形容词对形容词、副词对副词分别进行双向文本LSTM单向量相似度匹配计算,计算公式为:
$$\cos(u,w)_j=\frac{\sum_{i=1}^{n}a_i b_i}{\sqrt{\sum_{i=1}^{n}a_i^{2}}\cdot\sqrt{\sum_{i=1}^{n}b_i^{2}}},\quad j=1,2,3,4$$
其中，u表示目标文本，w表示其中一个历史文本，j表示词性种类的取值范围，这里j取值1到4，表示进行4种词性的相似度匹配计算，n表示具有某种词性(名词、动词、形容词、副词)词语的个数，n值依据经验值来确定，例如要对u、w中的10个名词进行相似度匹配，那么在进行名词相似度匹配时n取10，要对u、w中的5个动词进行相似度匹配，那么在进行动词相似度匹配时n取5，a_i、b_i分别表示u、w中某种词性的词语的词频。
因此，根据上述方法，针对一份历史文本同所述目标文本进行相似度计算的结果可以得到4个，即：名词的单向量相似度匹配计算值cos(u,w)_1、动词的单向量相似度匹配计算值cos(u,w)_2、形容词的单向量相似度匹配计算值cos(u,w)_3、副词的单向量相似度匹配计算值cos(u,w)_4。通过取平均值求得所述目标文本和该份历史文本相似度为：
$$\text{sim}(u,w)=\frac{1}{4}\sum_{j=1}^{4}\cos(u,w)_j$$
进一步地,本申请根据与所述目标文本之间的相似度从大到小的顺序筛选出预设数量的历史文本集作为相似文本集。较佳地,本申请中所述预设数量的历史文本集为60份。
步骤三、获取所述相似文本集中每一份相似文本的文本属性,计算所述每一份相似文本的文本属性与该相似文本的已知的文本结论之间的相关系数,根据所述相关系数选取预设数量的文本属性作为文本特征。
所述文本结论是指按照文本描述的状态对文本进行判断所得出的处理结果或者评判结果,例如:如果某文本是待核保的保单的案件描述,那么所述文本结论是根据保单的案件描述得出的赔偿金额;如果目标文本是医生进行诊断的病例报告,那么所述文本结论就是根据病例报告的诊断结果;如果目标文本是学生的主观题答卷,那么所述文本结论就是根据答卷所得到的得分分值。
本申请较佳实施例按照上述步骤一中的方法获取所述60份相似文本中每一份相似文本的所述文本属性,即获取每一份所述相似文本的7个属性:文本长度、词性比重、词语倾向性状态、人称类型、程度用词频度、句式比例、整体情感类别。进一步地,本申请计算每一份所述相似文本7文本属性和该相似文本的所述文本结论的相关系数,并根据所述相关系数从高到低的顺序选取预设数量的文本属性。较佳地,本申请较佳实施例中所述预设数量的文本属性为3个文本属性,例如,可以包括,整体情感类别、程度用词频度、文本长度。应该了解,不同类型的相似文本相关性最高的3个文本属性可能不同。
其中,所述相关系数的计算方法包括:
$$\text{Jaccard}(O_A,O_B)=\frac{|O_A\cap O_B|}{|O_A\cup O_B|}$$
$$=\frac{|O_A\cap O_B|}{|O_A|+|O_B|-|O_A\cap O_B|}$$
其中，O_A和O_B分别表示文本属性和文本结论，|O_A|和|O_B|分别表示文本属性和文本结论内词语的个数，Jaccard(O_A,O_B)表示文本属性和文本结论的相似系数，O_A∩O_B表示文本属性O_A和文本结论O_B中相同词语的个数，O_A∪O_B表示将文本属性O_A和文本结论O_B中相同词语进行合并后所有词语的总个数。
步骤四、利用所述相似文本集训练线性回归模型,并利用所述相似文本集的已知的文本结论对所述线性回归模型的输出值进行验证,得到所述已知的文本结论与所述输出值之 间的偏差值,根据所述偏差值筛选出预设数的相似文本作为对比文本集,将所述对比文本集中每一份对比文本的文本特征与所述目标文本对应的文本属性进行数值计算得到差值,选取与所述目标文本对应的文本属性差值最小的文本特征作为所述目标文本的文本结论,从而完成所述目标文本的文本结论推荐。
本申请较佳实施例对所述60份相似文本进行随机均分为A、B、C共3组,其中,每一组中包含20份所述相似文本。进一步地,本申请利用线性回归模型,分别交替将A、B、C,3组相似文本中的其中2组作为训练集,将所述训练集的文本特征作为输入,对应的文本结论作为输出来训练所述线性回归模型,剩下1组相似文本作为验证集,即:当A、B作为训练集时,C就作为验证集,当B、C作为训练集时,A就作为验证集,当A、C作为训练集时,B就作为验证集,这样得到3组验证结果,其中,每1组验证结果是该组验证集中20份所述相似文本验证的偏差值。例如A、B作为训练集,C作为验证集这一组,该组的验证结果为C组中20份相似文本的20个文本结论值和将C组20份相似文本的文本特征输入经过A、B组训练过的线性回归模型所得出的20个文本结论的预测值相比较得到的20个偏差值。
进一步地,本申请将所述偏差值最小的那一组的验证集筛选出来,例如所述平均值最小的为B、C作为训练集,A为验证集的这一组,则将A组筛选出来,并将A组中所包含的20份相似文本的每一份相似文本的所述文本特征逐一同所述目标文本对应的文本属性进行比较,将比较结果值中数值最小的对应相似文本的文本结论作为所述目标文本的文本结论,从而完成所述目标文本的文本结论推荐。
例如：所述目标文本对应的文本属性为：整体情感类别为1、程度用词频度为17、文本长度为400，所述要进行比较的相似文本的文本特征为：整体情感类别为-1、程度用词频度为28、文本长度为496。那么，进行比较的具体方法为将两个文本对应的文本特征进行做差，得到三个差值，对三个差值取平均值，求平均值的绝对值，即为该次比较结果值。例子中的比较结果值为：|[(-1-1)+(28-17)+(496-400)]/3|=35。20份所述相似文本逐一和所述目标文本进行比较就有20个比较结果值，将与所述目标文本比较结果值中数值最小的对应相似文本的文本结论作为所述目标文本的文本结论。
参照图3所示,为本申请文本结论智能推荐装置一实施例的模块示意图,该实施例中,所述文本结论智能推荐装置100可以被分割为分词模块10、相似度计算模块20、相关系数计算模块30以及文本结论推荐模块40,示例性地:
所述分词模块10用于:获取目标文本,对所述目标文本进行分词操作,基于所述分词操作获取所述目标文本的文本属性。
所述相似度计算模块20用于:从预设历史文本库中获取历史文本集,计算基于分词操作后的所述目标文本与所述历史文本集的相似度,根据所述相似度筛选出预设数量的历史文本集作为相似文本集。
所述相关系数计算模块30用于:获取所述相似文本集中每一份相似文本的文本属性,计算所述每一份相似文本的文本属性与该相似文本的已知的文本结论之间的相关系数,根据所述相关系数从高到低顺序选取预设数量的文本属性作为文本特征。
所述文本结论推荐模块40用于:利用所述相似文本集训练线性回归模型,并利用所述相似文本集的已知的文本结论对所述线性回归模型的输出值进行验证,得到所述已知的文本结论与所述输出值之间的偏差值,根据所述偏差值筛选出预设数量的相似文本作为对比文本集,将所述对比文本集中每一份对比文本的文本特征与所述目标文本对应的文本属性进行数值计算得到差值,选取与所述目标文本对应的文本属性差值最小的文本特征作为所述目标文本的文本结论,从而完成所述目标文本的文本结论推荐。上述分词模块10、相似度计算模块20、相关系数计算模块30以及文本结论推荐模块40等模块被执行时所实现的功能或操作步骤与上述实施例大体相同,在此不再赘述。
此外,本申请实施例还提出一种计算机可读存储介质,所计算机可读存储介质可以是非易失性,也可以是易失性,所述计算机可读存储介质上存储有文本结论智能推荐程序,所述文本结论智能推荐程序可被一个或多个处理器执行,以实现如下操作:
获取目标文本,对所述目标文本进行分词操作,基于所述分词操作获取所述目标文本的文本属性;
从预设历史文本库中获取历史文本集,计算基于分词操作后的所述目标文本与所述历史文本集的相似度,根据所述相似度筛选出预设数量的历史文本集作为相似文本集;
获取所述相似文本集中每一份相似文本的文本属性,计算所述每一份相似文本的文本属性与该相似文本的已知的文本结论之间的相关系数,根据所述相关系数从高到低顺序选取预设数量的文本属性作为文本特征;
利用所述相似文本集训练线性回归模型,并利用所述相似文本集的已知的文本结论对所述线性回归模型的输出值进行验证,得到所述已知的文本结论与所述输出值之间的偏差值,根据所述偏差值筛选出预设数量的相似文本作为对比文本集;
将所述对比文本集中每一份对比文本的文本特征与所述目标文本对应的文本属性进行数值计算得到差值,选取与所述目标文本对应的文本属性差值最小的文本特征作为所述目标文本的文本结论,从而完成所述目标文本的文本结论推荐。本申请计算机可读存储介质具体实施方式与上述文本结论智能推荐装置和方法各实施例基本相同,在此不作累述。
需要说明的是,上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。并且本文中的术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、装置、物品或者方法不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、装置、物品或者方法所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、装置、物品或者方法中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在如上所述的一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (20)

  1. 一种文本结论智能推荐方法,其中,所述方法包括:
    获取目标文本,对所述目标文本进行分词操作,基于所述分词操作获取所述目标文本的文本属性;
    从预设历史文本库中获取历史文本集,计算基于分词操作后的所述目标文本与所述历史文本集的相似度,根据所述相似度筛选出预设数量的历史文本集作为相似文本集;
    获取所述相似文本集中每一份相似文本的文本属性,计算所述每一份相似文本的文本属性与该相似文本的已知的文本结论之间的相关系数,根据所述相关系数选取预设数量的文本属性作为文本特征;
    利用所述相似文本集训练线性回归模型,并利用所述相似文本集的已知的文本结论对所述线性回归模型的输出值进行验证,得到所述已知的文本结论与所述输出值之间的偏差值,根据所述偏差值筛选出预设数量的相似文本作为对比文本集;
    将所述对比文本集中每一份对比文本的文本特征与所述目标文本对应的文本属性进行数值计算得到差值,选取与所述目标文本对应的文本属性差值最小的文本特征作为所述目标文本的文本结论,从而完成所述目标文本的文本结论推荐。
  2. 如权利要求1所述的文本结论智能推荐方法,其中,所述文本属性包括:文本长度、词性比重、词语倾向性状态、人称类型、程度用词频度、句式比例以及整体情感类别。
  3. 如权利要求1所述的文本结论智能推荐方法,其中,所述计算基于分词操作后的所述目标文本与所述历史文本集的相似度包括:
    在所述目标文本中根据词性进行词语筛选,并根据筛选后的词语生成目标词性统计列表;
    在所述历史文本集中的历史文本中根据所述词性进行词语筛选,生成历史词性统计列表;
    利用相似度算法计算所述目标词性统计列表和所述历史词性统计列表的相似度。
  4. 如权利要求3所述的文本结论智能推荐方法,其中,所述相似度算法包括:
    $$\cos(u,w)_j=\frac{\sum_{i=1}^{n}a_i b_i}{\sqrt{\sum_{i=1}^{n}a_i^{2}}\cdot\sqrt{\sum_{i=1}^{n}b_i^{2}}}$$
    其中，u表示目标文本，w表示其中一个历史文本，j表示词性种类的取值范围，n表示某种词性的词语个数，a_i、b_i分别表示u、w中某种词性的词语的词频。
  5. 如权利要求1至4中任意一项所述的文本结论智能推荐方法,其中,所述计算所述每一份相似文本中的文本属性与该相似文本的已知的文本结论之间的相关系数的计算方法包括:
    $$\text{Jaccard}(O_A,O_B)=\frac{|O_A\cap O_B|}{|O_A\cup O_B|}$$
    $$=\frac{|O_A\cap O_B|}{|O_A|+|O_B|-|O_A\cap O_B|}$$
    其中，O_A和O_B分别表示文本属性和文本结论，|O_A|和|O_B|分别表示文本属性和文本结论内词语的个数，Jaccard(O_A,O_B)表示文本属性和文本结论的相似系数，O_A∩O_B表示文本属性O_A和文本结论O_B中相同词语的个数，O_A∪O_B表示将文本属性O_A和文本结论O_B中相同词语进行合并后所有词语的总个数。
  6. 如权利要求5所述的文本结论智能推荐方法,其中,所述对所述目标文本进行分词操作包括:按照词性分类将所述目标文本拆分为多个词。
  7. 如权利要求2所述的文本结论智能推荐方法,其中,所述人称类型的确定过程为:
    对所述分词操作后的所述目标文本中的代词进行统计,将数量最多的那类代词作为所述目标文本的人称类型。
  8. 一种文本结论智能推荐装置,其中,包括:
    分词模块,用于获取目标文本,对所述目标文本进行分词操作,基于所述分词操作获取所述目标文本的文本属性;
    相似度计算模块,用于从预设历史文本库中获取历史文本集,计算基于分词操作后的所述目标文本与所述历史文本集的相似度,根据所述相似度筛选出预设数量的历史文本集作为相似文本集;
    相关系数计算模块,用于获取所述相似文本集中每一份相似文本的文本属性,计算所述每一份相似文本的文本属性与该相似文本的已知的文本结论之间的相关系数,根据所述相关系数选取预设数量的文本属性作为文本特征;
    文本结论推荐模块,用于利用所述相似文本集训练线性回归模型,并利用所述相似文本集的已知的文本结论对所述线性回归模型的输出值进行验证,得到所述已知的文本结论与所述输出值之间的偏差值,根据所述偏差值筛选出预设数量的相似文本作为对比文本集;将所述对比文本集中每一份对比文本的文本特征与所述目标文本对应的文本属性进行数值计算得到差值,选取与所述目标文本对应的文本属性差值最小的文本特征作为所述目标文本的文本结论,从而完成所述目标文本的文本结论推荐。
  9. 一种计算机设备,其中,所述计算机设备包括存储器和处理器,所述存储器上存储有可在所述处理器上运行的文本结论智能推荐程序,所述文本结论智能推荐程序被所述处理器执行时实现如下步骤:
    获取目标文本,对所述目标文本进行分词操作,基于所述分词操作获取所述目标文本的文本属性;
    从预设历史文本库中获取历史文本集,计算基于分词操作后的所述目标文本与所述历史文本集的相似度,根据所述相似度筛选出预设数量的历史文本集作为相似文本集;
    获取所述相似文本集中每一份相似文本的文本属性,计算所述每一份相似文本的文本属性与该相似文本的已知的文本结论之间的相关系数,根据所述相关系数选取预设数量的文本属性作为文本特征;
    利用所述相似文本集训练线性回归模型,并利用所述相似文本集的已知的文本结论对所述线性回归模型的输出值进行验证,得到所述已知的文本结论与所述输出值之间的偏差值,根据所述偏差值筛选出预设数量的相似文本作为对比文本集;
    将所述对比文本集中每一份对比文本的文本特征与所述目标文本对应的文本属性进行数值计算得到差值,选取与所述目标文本对应的文本属性差值最小的文本特征作为所述目标文本的文本结论,从而完成所述目标文本的文本结论推荐。
  10. 如权利要求9所述的计算机设备,其中所述文本属性包括:文本长度、词性比重、词语倾向性状态、人称类型、程度用词频度、句式比例以及整体情感类别。
  11. 如权利要求9所述的计算机设备,其中,所述计算基于分词操作后的所述目标文本与所述历史文本集的相似度包括:
    在所述目标文本中根据词性进行词语筛选,并根据筛选后的词语生成目标词性统计列表;
    在所述历史文本集中的历史文本中根据所述词性进行词语筛选,生成历史词性统计列表;
    利用相似度算法计算所述目标词性统计列表和所述历史词性统计列表的相似度。
  12. 如权利要求11所述的计算机设备,其中,所述相似度算法包括:
    $$\cos(u,w)_j=\frac{\sum_{i=1}^{n}a_i b_i}{\sqrt{\sum_{i=1}^{n}a_i^{2}}\cdot\sqrt{\sum_{i=1}^{n}b_i^{2}}}$$
    其中，u表示目标文本，w表示其中一个历史文本，j表示词性种类的取值范围，n表示某种词性的词语个数，a_i、b_i分别表示u、w中某种词性的词语的词频。
  13. 如权利要求9至12中任意一项所述的计算机设备,其中,所述计算所述每一份相似文本中的文本属性与该相似文本的已知的文本结论之间的相关系数的计算方法包括:
    $$\text{Jaccard}(O_A,O_B)=\frac{|O_A\cap O_B|}{|O_A\cup O_B|}$$
    $$=\frac{|O_A\cap O_B|}{|O_A|+|O_B|-|O_A\cap O_B|}$$
    其中，O_A和O_B分别表示文本属性和文本结论，|O_A|和|O_B|分别表示文本属性和文本结论内词语的个数，Jaccard(O_A,O_B)表示文本属性和文本结论的相似系数，O_A∩O_B表示文本属性O_A和文本结论O_B中相同词语的个数，O_A∪O_B表示将文本属性O_A和文本结论O_B中相同词语进行合并后所有词语的总个数。
  14. 如权利要求13所述的计算机设备,其中,所述对所述目标文本进行分词操作包括:按照词性分类将所述目标文本拆分为多个词。
  15. 如权利要求10所述的计算机设备,其中,所述人称类型的确定过程为:
    对所述分词操作后的所述目标文本中的代词进行统计,将数量最多的那类代词作为所述目标文本的人称类型。
  16. 一种计算机可读存储介质,其中,所述计算机可读存储介质上存储有文本结论智能推荐程序,所述文本结论智能推荐程序被一个或者多个处理器执行时实现如下步骤:
    获取目标文本,对所述目标文本进行分词操作,基于所述分词操作获取所述目标文本的文本属性;
    从预设历史文本库中获取历史文本集,计算基于分词操作后的所述目标文本与所述历史文本集的相似度,根据所述相似度筛选出预设数量的历史文本集作为相似文本集;
    获取所述相似文本集中每一份相似文本的文本属性,计算所述每一份相似文本的文本属性与该相似文本的已知的文本结论之间的相关系数,根据所述相关系数选取预设数量的文本属性作为文本特征;
    利用所述相似文本集训练线性回归模型,并利用所述相似文本集的已知的文本结论对所述线性回归模型的输出值进行验证,得到所述已知的文本结论与所述输出值之间的偏差值,根据所述偏差值筛选出预设数量的相似文本作为对比文本集;
    将所述对比文本集中每一份对比文本的文本特征与所述目标文本对应的文本属性进行数值计算得到差值,选取与所述目标文本对应的文本属性差值最小的文本特征作为所述目标文本的文本结论,从而完成所述目标文本的文本结论推荐。
  17. 如权利要求16所述的计算机可读存储介质,其中,所述文本属性包括:文本长度、词性比重、词语倾向性状态、人称类型、程度用词频度、句式比例以及整体情感类别。
  18. 如权利要求16所述的计算机可读存储介质,其中,所述计算基于分词操作后的所述目标文本与所述历史文本集的相似度包括:
    在所述目标文本中根据词性进行词语筛选,并根据筛选后的词语生成目标词性统计列表;
    在所述历史文本集中的历史文本中根据所述词性进行词语筛选，生成历史词性统计列表；
    利用相似度算法计算所述目标词性统计列表和所述历史词性统计列表的相似度。
  19. 如权利要求18所述的计算机可读存储介质,其中,所述相似度算法包括:
    $$\cos(u,w)_j=\frac{\sum_{i=1}^{n}a_i b_i}{\sqrt{\sum_{i=1}^{n}a_i^{2}}\cdot\sqrt{\sum_{i=1}^{n}b_i^{2}}}$$
    其中，u表示目标文本，w表示其中一个历史文本，j表示词性种类的取值范围，n表示某种词性的词语个数，a_i、b_i分别表示u、w中某种词性的词语的词频。
  20. 如权利要求16至19中任意一项所述的计算机可读存储介质,其中,所述计算所述每一份相似文本中的文本属性与该相似文本的已知的文本结论之间的相关系数的计算方法包括:
    $$\text{Jaccard}(O_A,O_B)=\frac{|O_A\cap O_B|}{|O_A\cup O_B|}$$
    $$=\frac{|O_A\cap O_B|}{|O_A|+|O_B|-|O_A\cap O_B|}$$
    其中，O_A和O_B分别表示文本属性和文本结论，|O_A|和|O_B|分别表示文本属性和文本结论内词语的个数，Jaccard(O_A,O_B)表示文本属性和文本结论的相似系数，O_A∩O_B表示文本属性O_A和文本结论O_B中相同词语的个数，O_A∪O_B表示将文本属性O_A和文本结论O_B中相同词语进行合并后所有词语的总个数。
PCT/CN2020/098979 2020-01-16 2020-06-29 文本结论智能推荐方法、装置、计算机设备及计算机可读存储介质 WO2021143056A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010051191.X 2020-01-16
CN202010051191.XA CN111275091B (zh) 2020-01-16 2020-01-16 文本结论智能推荐方法、装置及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021143056A1 true WO2021143056A1 (zh) 2021-07-22

Family

ID=71002262

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098979 WO2021143056A1 (zh) 2020-01-16 2020-06-29 文本结论智能推荐方法、装置、计算机设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN111275091B (zh)
WO (1) WO2021143056A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114579706A (zh) * 2022-03-07 2022-06-03 桂林旅游学院 一种基于bert神经网络和多任务学习的主观题自动评阅方法
CN116578673A (zh) * 2023-07-03 2023-08-11 北京凌霄文苑教育科技有限公司 数字经济领域基于语言逻辑学的文本特征检索方法

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275091B (zh) * 2020-01-16 2024-05-10 平安科技(深圳)有限公司 文本结论智能推荐方法、装置及计算机可读存储介质
CN112560477B (zh) * 2020-12-09 2024-04-16 科大讯飞(北京)有限公司 文本补全方法以及电子设备、存储装置
CN113704637A (zh) * 2021-08-30 2021-11-26 深圳前海微众银行股份有限公司 基于人工智能的对象推荐方法、装置、存储介质
CN114493904B (zh) * 2022-04-18 2022-06-28 北京合理至臻科技有限公司 一种智能核保风控方法、系统、设备及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107122863A (zh) * 2017-04-28 2017-09-01 厦门大学 积压物料相关物料属性的查询和预测方法
CN107818138A (zh) * 2017-09-28 2018-03-20 银江股份有限公司 一种案件法律条例推荐方法及系统
CN109299007A (zh) * 2018-09-18 2019-02-01 哈尔滨工程大学 一种缺陷修复者自动推荐方法
CN109446416A (zh) * 2018-09-26 2019-03-08 南京大学 基于词向量模型的法条推荐方法
CN111275091A (zh) * 2020-01-16 2020-06-12 平安科技(深圳)有限公司 文本结论智能推荐方法、装置及计算机可读存储介质

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107944911B (zh) * 2017-11-18 2021-12-03 电子科技大学 一种基于文本分析的推荐系统的推荐方法
CN108197137A (zh) * 2017-11-20 2018-06-22 广州视源电子科技股份有限公司 文本的处理方法、装置、存储介质、处理器和终端
US11487941B2 (en) * 2018-05-21 2022-11-01 State Street Corporation Techniques for determining categorized text
CN109063147A (zh) * 2018-08-06 2018-12-21 北京航空航天大学 基于文本相似度的在线课程论坛内容推荐方法及系统
CN109614484A (zh) * 2018-11-09 2019-04-12 华南理工大学 一种基于分类效用的文本聚类方法及其系统
CN109472008A (zh) * 2018-11-20 2019-03-15 武汉斗鱼网络科技有限公司 一种文本相似度计算方法、装置及电子设备
CN110163476A (zh) * 2019-04-15 2019-08-23 重庆金融资产交易所有限责任公司 项目智能推荐方法、电子装置及存储介质
CN110413773B (zh) * 2019-06-20 2023-09-22 平安科技(深圳)有限公司 智能文本分类方法、装置及计算机可读存储介质
CN110413728B (zh) * 2019-06-20 2023-10-27 平安科技(深圳)有限公司 练习题推荐方法、装置、设备和存储介质
CN110427480B (zh) * 2019-06-28 2022-10-11 平安科技(深圳)有限公司 个性化文本智能推荐方法、装置及计算机可读存储介质
CN110457574A (zh) * 2019-07-05 2019-11-15 深圳壹账通智能科技有限公司 基于数据比较的信息推荐方法、装置及存储介质
CN110489751A (zh) * 2019-08-13 2019-11-22 腾讯科技(深圳)有限公司 文本相似度计算方法及装置、存储介质、电子设备
CN110442684B (zh) * 2019-08-14 2020-06-30 山东大学 一种基于文本内容的类案推荐方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107122863A (zh) * 2017-04-28 2017-09-01 厦门大学 积压物料相关物料属性的查询和预测方法
CN107818138A (zh) * 2017-09-28 2018-03-20 银江股份有限公司 一种案件法律条例推荐方法及系统
CN109299007A (zh) * 2018-09-18 2019-02-01 哈尔滨工程大学 一种缺陷修复者自动推荐方法
CN109446416A (zh) * 2018-09-26 2019-03-08 南京大学 基于词向量模型的法条推荐方法
CN111275091A (zh) * 2020-01-16 2020-06-12 平安科技(深圳)有限公司 文本结论智能推荐方法、装置及计算机可读存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114579706A (zh) * 2022-03-07 2022-06-03 桂林旅游学院 一种基于bert神经网络和多任务学习的主观题自动评阅方法
CN114579706B (zh) * 2022-03-07 2023-09-29 桂林旅游学院 一种基于bert神经网络和多任务学习的主观题自动评阅方法
CN116578673A (zh) * 2023-07-03 2023-08-11 北京凌霄文苑教育科技有限公司 数字经济领域基于语言逻辑学的文本特征检索方法
CN116578673B (zh) * 2023-07-03 2024-02-09 北京凌霄文苑教育科技有限公司 数字经济领域基于语言逻辑学的文本特征检索方法

Also Published As

Publication number Publication date
CN111275091A (zh) 2020-06-12
CN111275091B (zh) 2024-05-10

Similar Documents

Publication Publication Date Title
WO2021143056A1 (zh) Intelligent text conclusion recommendation method and apparatus, computer device, and computer-readable storage medium
US10818397B2 (en) Clinical content analytics engine
US10489502B2 (en) Document processing
US11762921B2 (en) Training and applying structured data extraction models
US8170969B2 (en) Automated computation of semantic similarity of pairs of named entity phrases using electronic document corpora as background knowledge
WO2017067153A1 (zh) Credit risk assessment method and apparatus based on text analysis, and storage medium
Li et al. Reliable medical diagnosis from crowdsourcing: Discover trustworthy answers from non-experts
JP2003519828A (ja) Probabilistic record linkage model derived from training data
CN105740224A (zh) User psychological early-warning method and apparatus based on text analysis
CN109472462B (zh) Project risk rating method and apparatus based on multi-model stacking fusion
US11775765B2 (en) Linguistic analysis of differences in portrayal of movie characters
US20160098456A1 (en) Implicit Durations Calculation and Similarity Comparison in Question Answering Systems
KR102359638B1 (ko) Evaluation and analysis system for medical institutions through sentiment analysis tailored to the medical field
Cavalcanti et al. Detection and evaluation of cheating on college exams using supervised classification
Sheikha et al. Learning to classify documents according to formal and informal style
CN110502622A (zh) Method, apparatus, and computer device for generating common medical question-and-answer data
CN113362072A (zh) Risk control data processing method and apparatus, electronic device, and storage medium
CN104216880B (zh) Internet-based method for discriminating term definitions
Rubin Identifying certainty in texts
US10846295B1 (en) Semantic analysis system for ranking search results
JP6206874B2 (ja) Case component extraction program
CN112561714A (zh) Underwriting risk prediction method and apparatus based on NLP technology, and related device
Bartoszuk et al. Detecting similarity of R functions via a fusion of multiple heuristic methods
CN112115705B (zh) Electronic résumé screening method and apparatus
CN116629256A (zh) Multimodal text error detection method, apparatus, device, and medium

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 20914501
Country of ref document: EP
Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: PCT application non-entry in European phase
Ref document number: 20914501
Country of ref document: EP
Kind code of ref document: A1