CN113722462A - Target argument information extraction data processing system


Info

Publication number
CN113722462A
CN113722462A (application CN202111024884.0A)
Authority
CN
China
Prior art keywords
argument
text
information
role
sample
Prior art date
Legal status
Granted
Application number
CN202111024884.0A
Other languages
Chinese (zh)
Other versions
CN113722462B (en
Inventor
张正义
傅晓航
刘羽
常宏宇
Current Assignee
Zhongke Yuchen Technology Co Ltd
Original Assignee
Zhongke Yuchen Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhongke Yuchen Technology Co Ltd filed Critical Zhongke Yuchen Technology Co Ltd
Priority to CN202111024884.0A priority Critical patent/CN113722462B/en
Publication of CN113722462A publication Critical patent/CN113722462A/en
Application granted granted Critical
Publication of CN113722462B publication Critical patent/CN113722462B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor, of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9532 Query formulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to a target argument information extraction data processing system comprising a preconfigured event type mapping table, an event argument role configuration table, a memory and a processor. The event type mapping table is used for storing mapping records of trigger words and event types, and these mapping records comprise trigger word fields and event type fields; the event argument role configuration table is used for storing event argument role information records, which comprise event type fields, argument role fields and argument role priority fields. The invention improves the integrity and accuracy of the target argument information extraction result.

Description

Target argument information extraction data processing system
Technical Field
The invention relates to the technical field of data processing, in particular to a target argument information extraction data processing system.
Background
With the rapid spread of the internet, vast amounts of data are generated and disseminated online, and finding the needed information in massive natural language text in a timely and accurate manner has become increasingly urgent. Such massive natural language documents are characterized by large data volume, non-uniform structure, high redundancy and rapid update. In the prior art, an event extraction model is usually trained by machine learning to extract events of interest to a user from unstructured information and present them to the user in a structured manner. However, directly applying an event extraction model depends heavily on the training corpus: if the corpus is small, incomplete or inappropriate, the extraction result is strongly affected, and in particular, for an event type never seen as a training sample, accurate and complete extraction of the argument information cannot be guaranteed. As a result, the accuracy of event extraction is low and the extracted event information is incomplete.
Disclosure of Invention
The invention aims to provide a target argument information extraction data processing system, which improves the integrity and accuracy of target argument information extraction results.
According to one aspect of the invention, a target argument information extraction data processing system is provided, which comprises a preconfigured event type mapping table, an event argument role configuration table, a memory in which a computer program is stored, and a processor, wherein the event type mapping table is used for storing mapping records of trigger words and event types, and the mapping records of the trigger words and the event types comprise trigger word fields and event type fields; the event argument role configuration table is used for storing event argument role information records, and the event argument role information records comprise event type fields, argument role fields and argument role priority fields;
when the processor executes the computer program, the following steps are realized:
step S1, determining a target event type corresponding to a preset target trigger word according to the event type mapping table, and acquiring a text to be processed corresponding to the target trigger word;
step S2, determining, according to the event argument role configuration table, a target argument role list {B_1, B_2, …, B_M} corresponding to the target event type, wherein B_1, B_2, …, B_M are in decreasing order of priority, B_m is the m-th target argument role, the value range of m is 1 to M, and M is the number of target argument roles corresponding to the target event type; initializing m = 1 and initializing the history information h_m = A_m0;
step S3, constructing input information of an argument information extraction model based on A_m0, B_m and h_m, and extracting the m-th argument information C_m from the text to be processed based on the output information of the argument information extraction model;
step S4, comparing m and M; if m < M, setting m = m + 1, updating the history information h_m according to the formula given only as an image in the source (Figure BDA0003242932880000021, not reproduced here), and returning to step S3; if m = M, determining {C_1, C_2, …, C_M} as the target argument information.
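The iteration of steps S1 to S4 can be outlined as follows. This is an illustrative sketch only, not part of the disclosure: extract_argument stands in for the claimed argument information extraction model, the two dictionaries stand in for the mapping and configuration tables, and the string-concatenation history update is a hypothetical substitute for the formula given only as an image in the source.

```python
def extract_target_arguments(trigger, event_type_map, role_config, text, extract_argument):
    """Sketch of steps S1-S4: extract argument information role by role, in
    priority order, feeding already-extracted arguments back as history."""
    event_type = event_type_map[trigger]          # step S1: trigger -> event type
    roles = role_config[event_type]               # step S2: roles sorted by priority
    history = trigger                             # initial history information
    results = []
    for role in roles:                            # steps S3/S4 loop over m = 1..M
        c = extract_argument(trigger, role, history, text)
        results.append(c)
        history = history + "|" + c               # hypothetical history update
    return results                                # step S4: {C_1, ..., C_M}
```

Because each call sees the history of earlier extractions, high-priority (easier) roles are resolved first and guide the harder ones, which is the stated rationale for the priority ordering.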
Compared with the prior art, the invention has obvious advantages and beneficial effects. Through the above technical scheme, the target argument information extraction data processing system provided by the invention achieves considerable technical progress and practicability, has wide industrial utilization value, and at least has the following advantages:
according to the invention, argument information is extracted one item at a time, fusing history information and ordering arguments by preset priority, which improves the accuracy of argument information extraction and thereby the integrity and accuracy of the target event extraction result.
The foregoing description is only an overview of the technical solutions of the present invention. In order to make the technical means of the present invention more clearly understood and implementable in accordance with the content of the description, and to make the above and other objects, features and advantages of the present invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic diagram of a target argument information extraction data processing system according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description will be given to a specific implementation and effects of a target argument information extraction data processing system according to the present invention with reference to the accompanying drawings and preferred embodiments.
The embodiment of the invention provides a target argument information extraction data processing system, which comprises a pre-configured event type mapping table, an event argument role configuration table, a memory storing a computer program and a processor, wherein the event type mapping table is used for storing mapping records of trigger words and event types, and the mapping records of the trigger words and the event types comprise trigger word fields and event type fields; the event argument role configuration table is used for storing event argument role information records, and the event argument role information records comprise event type fields, argument role fields and argument role priority fields;
when the processor executes the computer program, the following steps are realized:
step S1, determining a target event type corresponding to a preset target trigger word according to the event type mapping table, and acquiring a text to be processed corresponding to the target trigger word;
step S2, determining, according to the event argument role configuration table, a target argument role list {B_1, B_2, …, B_M} corresponding to the target event type, wherein B_1, B_2, …, B_M are in decreasing order of priority, B_m is the m-th target argument role, the value range of m is 1 to M, and M is the number of target argument roles corresponding to the target event type; initializing m = 1 and initializing the history information h_m = A_m0;
step S3, constructing input information of an argument information extraction model based on A_m0, B_m and h_m, and extracting the m-th argument information C_m from the text to be processed based on the output information of the argument information extraction model;
step S4, comparing m and M; if m < M, setting m = m + 1, updating the history information h_m according to the formula given only as an image in the source (Figure BDA0003242932880000031, not reproduced here), and returning to step S3; if m = M, determining {C_1, C_2, …, C_M} as the target argument information.
According to the embodiment of the invention, argument information is extracted one item at a time, fusing history information and ordering arguments by preset priority, which improves the accuracy of argument information extraction and thereby the integrity and accuracy of the target event extraction result.
As an embodiment, the argument information extraction model is obtained by training based on a preset third text sample training set and a third neural network model architecture. The third text sample training set includes Y third text samples {E_1, E_2, …, E_Y}, where E_y is the y-th third text sample, the sample trigger word corresponding to E_y is EA_y, the sample argument roles corresponding to E_y are {BE_1, BE_2, …, BE_yM}, and the sample argument information corresponding to E_y is {CE_1, CE_2, …, CE_yM}, wherein y ranges from 1 to Y, BE_1, BE_2, …, BE_yM are in decreasing order of priority, BE_i is the i-th sample argument role corresponding to E_y, CE_i is the i-th sample argument information corresponding to E_y, BE_i corresponds to CE_i, and i ranges from 1 to yM; the third neural network model architecture is a sequence labeling model architecture;
when the processor executes the computer program, the following steps are also realized:
step S100, initializing y to 1;
step S200, initializing i = 1, and initializing the sample history information Bh_y = EA_y;
step S300, generating the corresponding sample argument role question text BF_i based on BE_i and EA_y;
step S400, inputting BF_i, E_y and Bh_y into a preset encoder, encoding E_y and BF_i to obtain EL_y, and inputting EL_y into the third neural network model architecture to obtain a corresponding second predicted output labeling sequence LC_i, in which the positions corresponding to BF_i are labeled 0;
In step S400, each piece of argument information is extracted by incorporating history information: in the current round of extraction, the sample trigger word and the already-extracted argument information are known to the argument information extraction model, so the positions of this known information are necessarily not target positions and are therefore labeled 0. In addition, argument roles are ordered according to preset priorities, so the argument information extraction model first extracts the argument information that is easiest to extract; as the extraction difficulty increases, the accumulated history information grows, and this added history information guides the model to extract the next piece of argument information more quickly and accurately.
In step S400, BF_i and E_y are also spliced by a preset separator, and the encoder then encodes the spliced BF_i and E_y based on Bh_y and the character position information of BF_i and E_y. The preset separator is [SEP], and the system is further configured with a preset mask algorithm configured to mask the input portion before [SEP]; the masked portion is only encoded and not predicted, so the mask algorithm ensures that, when performing sequence labeling, the third neural network model architecture labels only E_y after [SEP].
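The splicing and masking described above can be sketched as follows (illustrative only; token lists stand in for the encoder's real input, and the 0/1 mask models the rule that everything up to and including [SEP] is encoded but never labeled):

```python
SEP = "[SEP]"

def splice_with_label_mask(question_tokens, text_tokens):
    """Splice the argument-role question BF_i and the sample text E_y with a
    [SEP] separator; return the tokens and a label mask that is 0 for the
    question and separator (encoded only) and 1 for the text (labeled)."""
    tokens = question_tokens + [SEP] + text_tokens
    label_mask = [0] * (len(question_tokens) + 1) + [1] * len(text_tokens)
    return tokens, label_mask
```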
step S500, generating a second actual output labeling sequence LD_i based on E_y and CE_i, in which the positions of E_y corresponding to CE_i are labeled 1 and the positions not corresponding to CE_i are labeled 0;
step S600, judging, based on LC_i and LD_i, whether the currently trained third neural network model architecture reaches a preset model precision; if so, determining the current third neural network model architecture as the argument information extraction model; otherwise, executing step S700;
step S700, adjusting the current third neural network model architecture parameters based on LC_i and LD_i, and comparing i and yM; if i < yM, setting i = i + 1, updating the sample history information Bh_y according to the formula given only as an image in the source (Figure BDA0003242932880000041, not reproduced here), and returning to step S300; if i = yM, executing step S800;
step S800, comparing y and Y; if y < Y, setting y = y + 1 and returning to step S200; if y = Y, returning to step S100.
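The nested iteration of steps S100 to S800 can be sketched as follows (illustrative only: train_step and reached_precision stand in for the parameter update of step S700 and the precision check of step S600, max_passes bounds the return to step S100, and the string-concatenation history update is a hypothetical substitute for the formula given only as an image):

```python
def run_training(samples, train_step, reached_precision, max_passes=10):
    """Sketch of steps S100-S800.  samples is a list of tuples
    (trigger EA_y, roles [BE_i], answers [CE_i], text E_y)."""
    for _ in range(max_passes):                       # step S100 / S800: y restarts at 1
        for trigger, roles, answers, text in samples: # step S200: next sample, i = 1
            history = trigger                         # Bh_y = EA_y
            for role, answer in zip(roles, answers):  # steps S300-S700: i = 1..yM
                train_step(role, trigger, history, text, answer)
                if reached_precision():               # step S600: precision reached
                    return True
                history = history + "|" + answer      # hypothetical history update
    return False                                      # pass budget exhausted
```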
It should be noted that the questions set in the trigger word discovery model and the event type classification model are meant to maintain consistency with the argument extraction model when the system adopts a cascade of models, improving the accuracy of the system. After the model parameters are determined, in actual use the corresponding question need not be input when the trigger word discovery model extracts trigger words or when the event type classification model determines the event type. However, the question for the argument extraction model still needs to be input, because it guides the argument extraction model to label the corresponding argument information.
As an example, the step S3 includes:
step S31, based on Am0、BmGenerating an mth argument role question text FmTo process the text, Fm、hmInputting the text to be processed and F in a preset encodermCoding to obtain LmIs prepared by mixing LmInputting the argument information extraction model to obtain a corresponding second prediction output labeling sequence LCm
It should be noted that step S31 corresponds to step S400: the text to be processed is spliced with F_m based on the preset separator, and the spliced text to be processed and F_m are then encoded based on their character position information and the current history information.
step S32, extracting the m-th argument information C_m from the text to be processed based on LC_m and L_m.
It should be noted that, since the argument information extraction model labels only the information corresponding to the text to be processed, while the actually encoded input is the spliced text to be processed and F_m, the corresponding m-th argument information C_m is determined according to the positional relation of the original characters of the text to be processed and F_m, combined with the sequence labeling result output by the argument information extraction model.
It should be noted that the argument role priority may be determined directly from historical experience, may be determined from input, or may be determined from the sample argument role distribution. As an embodiment, when the processor executes the computer program, the following steps are further implemented:
step S301, determining the priority of the argument roles corresponding to an event type whose argument role priority is to be judged, based on the sample argument role set formed by all sample argument roles in the preset third text sample training set, where the sample argument role set is {BEX_1, BEX_2, …, BEX_Z}, BEX_z is the z-th sample argument role, z ranges from 1 to Z, and Z is the number of sample argument roles in the sample argument role set; the argument role set corresponding to the event type whose argument role priority is to be judged is {BX_1, BX_2, …, BX_W}, where BX_w is the w-th such argument role, w ranges from 1 to W, and W is the number of argument roles corresponding to that event type;
the step S301 specifically includes:
step S302, inputting BX_w into a preset encoder for encoding, and pooling the encoding result to obtain the to-be-judged argument role pooled encoding BX_w';
step S303, inputting BEX_z into the preset encoder for encoding, and pooling the encoding result to obtain the sample argument role pooled encoding BEX_z', where BX_w' and BEX_z' have the same vector dimension;
step S304, obtaining the priority weight P_w corresponding to BX_w according to the formula given only as an image in the source (Figure BDA0003242932880000061, not reproduced here; it is expressed in terms of the cosine similarities cos(BX_w', BEX_z'));
step S305, generating the priority of the argument roles corresponding to the event type whose argument role priority is to be judged, from the corresponding priority weights P_w in descending order.
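Steps S302 to S305 can be sketched as follows. The sketch assumes, since the formula for P_w appears only as an image, that P_w aggregates the cosine similarities cos(BX_w', BEX_z') over all sample argument roles; the pooled encodings are passed in as plain vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-dimension vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def role_priority_order(role_encodings, sample_role_encodings):
    """Sketch of steps S304-S305: weight each role BX_w by its similarity to
    the sample argument role set, then order roles by descending weight."""
    weights = {name: sum(cosine(bx, bex) for bex in sample_role_encodings)
               for name, bx in role_encodings.items()}   # assumed form of P_w
    return sorted(weights, key=weights.get, reverse=True)
```

Under this assumption, a role resembling many roles seen in training data receives a higher priority.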
It should be noted that step S1 assumes a known scenario: the text to be processed contains the target trigger word, and the mapping between the target trigger word and the event type is known. In practical applications, it may be impossible to determine whether a target trigger word exists in the text to be processed, or to know the correspondence between trigger words and event types; for such cases, the system may further be provided with a trigger word discovery model and an event type classification model, described below.
The construction of the trigger word discovery model is described in detail below through several embodiments:
the first embodiment,
The trigger word discovery model is obtained by training based on a preset first text sample training set and a first neural network model architecture, the first text training set comprises a first text sample and a corresponding trigger word, and the first neural network model architecture is a sequence labeling architecture;
when the processor executes the computer program, the following steps are also realized:
step S10, obtaining a first text sample from the first text sample training set; splicing a preset trigger word question with the first text sample through a preset separator to obtain a first spliced text sample; encoding the first spliced text sample based on a preset encoder; and setting a first actual output labeling sequence corresponding to the first spliced text sample, in which all positions corresponding to the trigger word question are labeled 1, the positions of trigger words in the first text sample are labeled 1, and the positions of non-trigger words are labeled 0;
in an embodiment, the preset separator is [ SEP ], and the mask algorithm makes the first neural network model architecture label only the first text sample after [ SEP ] when sequence labeling is performed.
step S20, taking the encoded first spliced text sample as the input of a preset first neural network architecture to obtain a first predicted output labeling sequence, adjusting the parameters of the first neural network architecture based on the first predicted output labeling sequence and the first actual output labeling sequence of the first spliced text sample, and training to obtain the trigger word discovery model.
It can be understood that adjusting the first neural network architecture parameters based on the first predicted output labeling sequence and the first actual output labeling sequence may directly adopt an existing model training method, for example minimizing the cross entropy between the two and ending training when the cross entropy is minimal; the description is not expanded here.
The second embodiment,
The trigger word discovery model is obtained by training based on a preset first text training set and a binary classification model architecture. It should be noted that the binary classification model architecture may specifically be an SVM (support vector machine), a decision tree, or the like, and may also be a sequence labeling model in which each position of the output sequence is labeled with a binary classification result; the first text training set includes first text samples and corresponding trigger words;
when the processor executes the computer program, the following steps are also realized:
step S101, obtaining a first text sample from the first text sample training set, taking the trigger words in the first text sample as positive sample words, slicing the first text sample to obtain slice participles, and randomly extracting slice participles to compose non-trigger words as negative sample words;
It should be noted that new trigger words appear over time; if non-trigger words extracted directly from the current text were used as negative samples and some of them later became trigger words, the accuracy of the model would be greatly affected. Therefore the first text sample is sliced into slice participles, each consisting of one character or several consecutive characters of the first text sample, and the slice participles are randomly extracted and recombined into non-trigger words used as negative sample words. A randomly composed negative sample word is a true negative with high probability and only becomes a positive sample with small probability, achieving the effect of diluting noise in the negative samples and improving the accuracy and reliability of the trigger word discovery model.
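The negative sampling of step S101 can be sketched as follows (illustrative only; the slice lengths, the sample count k and the use of character n-grams are assumptions, since the source only states that slice participles of one or more consecutive characters are randomly extracted to compose negative sample words):

```python
import random

def negative_sample_words(text, trigger_words, k=3, max_len=3, seed=0):
    """Sketch of step S101: slice the text into character n-grams and randomly
    draw slices that are not trigger words as negative sample words."""
    rng = random.Random(seed)
    slices = [text[i:i + n]
              for n in range(1, max_len + 1)
              for i in range(len(text) - n + 1)]
    candidates = [s for s in slices if s not in trigger_words]
    return rng.sample(candidates, min(k, len(candidates)))
```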
step S102, encoding the positive samples and the negative samples respectively based on a preset encoder, inputting the encoded positive and negative samples into a preset binary classification model architecture for classification prediction, adjusting the parameters of the binary classification model architecture based on the predicted and actual classification results of the samples, and generating the trigger word discovery model.
The third embodiment,
The system includes a preset trigger word list, a pre-trained part-of-speech analysis model and a grammar analysis model; the trigger word list includes trigger words, trigger word grammar information and/or trigger word part-of-speech information. In step S1, candidate trigger words are extracted from the text to be processed as follows:
step S11, performing word segmentation and stop-word removal on the text to be processed to obtain a participle list, and matching the participle list against the trigger words in the trigger word list to obtain a candidate participle list;
step S12, inputting the text to be processed into the grammar analysis model to obtain grammar information of candidate participles, and/or inputting the participle list and the text to be processed into the part-of-speech analysis model to obtain part-of-speech information of each candidate participle;
step S13, filtering out of the candidate participle list those candidate participles whose part-of-speech information and/or grammar information is inconsistent with that of the corresponding trigger words in the trigger word list, obtaining the candidate trigger words.
In the third embodiment, trigger words can be added to the trigger word list so that the system recognizes newly added trigger words, which makes it applicable to zero-shot event information scenarios; through steps S12 and S13, wrongly extracted trigger words are filtered based on part of speech and grammar, improving the accuracy of trigger word extraction.
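Steps S11 to S13 of the third embodiment can be sketched as follows (illustrative only; the lexicon layout and the pos_of / grammar_of callbacks are hypothetical stand-ins for the trigger word list and the pre-trained part-of-speech and grammar analysis models):

```python
def candidate_trigger_words(participles, trigger_lexicon, pos_of, grammar_of):
    """Sketch of steps S11-S13: keep participles listed in the trigger word
    list whose part-of-speech / grammar information matches the listed entry."""
    candidates = []
    for word in participles:
        entry = trigger_lexicon.get(word)
        if entry is None:
            continue                                  # step S11: not in the list
        if entry.get("pos") not in (None, pos_of(word)):
            continue                                  # step S13: POS mismatch
        if entry.get("grammar") not in (None, grammar_of(word)):
            continue                                  # step S13: grammar mismatch
        candidates.append(word)
    return candidates
```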
The fourth embodiment,
In order to extract the trigger words in the text to be processed more comprehensively and further improve the accuracy and reliability of trigger word extraction, the third embodiment may be combined with at least one trigger word discovery model from the first and second embodiments, and the candidate trigger words obtained by the different embodiments are merged to obtain the candidate trigger word list.
The following further describes how to determine the implementation of the event type in the case that the correspondence between the trigger word and the event type cannot be directly known, in detail, with several specific embodiments:
the first embodiment,
The pre-trained event type classification model is obtained by training based on a preset second text sample training set and a second neural network model architecture. The second text sample training set includes second text samples, the trigger word corresponding to each second text sample, and the event type corresponding to each second text sample; the second neural network model architecture is a multi-classification model architecture whose output vector is {d_1, d_2, …, d_R}, where R is the number of event type names and d_r is the probability value that the input trigger word belongs to the r-th event type;
when the processor executes the computer program, the following steps are realized:
step S201, obtaining a second text sample from the preset second text sample training set; generating, based on the trigger word corresponding to the second text sample, a corresponding question on the event type to which the trigger word belongs; splicing this question with the second text sample through a preset separator to obtain a second spliced text sample; encoding the second spliced text sample based on a preset encoder; and setting a second actual output vector corresponding to the second spliced text sample, in which the probability value of the event type to which the trigger word of the second text sample actually belongs is 1 and the other probability values are 0;
step S202, inputting the coded second spliced text sample into the second neural network model architecture to obtain a second prediction output vector, adjusting parameters of the second neural network model architecture based on the second prediction output vector and a second actual output vector, and generating the event type classification model.
It can be understood that, the parameters of the second neural network model architecture are adjusted based on the second predicted output vector and the second actual output vector, and an existing model training manner may be directly adopted, for example, the cross entropy is solved, so that the model training is ended when the cross entropy is minimum, and the like, and the description is not repeated here.
The second embodiment,
The system further comprises an event type name list {D_1, D_2, …, D_R}, where D_r is the r-th event type name, r ranges from 1 to R, and R is the number of event type names. In step S2, obtaining the event type corresponding to each candidate trigger word includes:
step S21, inputting D_r into a preset encoder for encoding, and pooling the encoding result to obtain the r-th event type name pooled encoding D_r';
The pooling process may specifically be averaging the parameters of each column, or obtaining a maximum value of the parameters of each column.
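The two pooling variants mentioned above can be sketched directly (the encoder output is modeled as a matrix with one row per token and one column per embedding dimension):

```python
def pool(encoding, mode="mean"):
    """Column-wise pooling of an encoder output: 'mean' averages each column,
    'max' takes each column's maximum, yielding one fixed-size vector."""
    columns = list(zip(*encoding))
    if mode == "mean":
        return [sum(col) / len(col) for col in columns]
    return [max(col) for col in columns]
```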
step S22, inputting A_n into the encoder, encoding it and pooling the encoding result to obtain the n-th candidate trigger word pooled encoding A_n', where D_r' and A_n' have the same vector dimension;
step S23, determining whether there exists r such that r = argmax_r cos(A_n', D_r') and cos(A_n', D_r') > D1, where cos(A_n', D_r') denotes the cosine similarity of A_n' and D_r' and D1 is a preset first similarity threshold; if such r exists, determining the r-th event type as the event type corresponding to the n-th candidate trigger word.
In step S23, if no such r exists, executing step S24:
step S24, obtaining the preset first G values of cos(A_n', D_r') sorted from large to small, {cos_1, cos_2, …, cos_G}, where cos_g is the g-th value and g ranges from 1 to G; if g satisfies cos_g - cos_{g+1} < D2, where D2 is a preset error threshold, executing step S25; otherwise determining that the event type corresponding to the n-th candidate trigger word does not exist in the event type name list;
step S25, matching the candidate trigger word corresponding to cos_g against the trigger word list; if it does not exist in the trigger word list, deleting the corresponding cos_g from {cos_1, cos_2, …, cos_G};
step S26, if {cos_1, cos_2, …, cos_G} after the operation of step S25 is an empty set, determining that the event type corresponding to the n-th candidate trigger word does not exist in the event type name list; otherwise determining the event type corresponding to the largest cos_g in {cos_1, cos_2, …, cos_G} after the operation of step S25 as the event type corresponding to the n-th candidate trigger word.
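The similarity matching of step S23 can be sketched as follows (illustrative only; pooled encodings are passed in as plain vectors, and the fallback of steps S24 to S26 is omitted):

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-dimension vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def match_event_type(trigger_encoding, type_name_encodings, d1):
    """Sketch of step S23: return the event type name whose pooled encoding is
    most cosine-similar to the candidate trigger's, if that similarity exceeds
    the first similarity threshold D1; otherwise return None."""
    best_name, best_sim = None, -1.0
    for name, d_vec in type_name_encodings.items():
        sim = cosine(trigger_encoding, d_vec)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim > d1 else None
```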
It should be noted that the first embodiment can quickly and accurately identify event types on which the model has been trained, while the second embodiment allows event types to be added to the event type name list, giving better extensibility; the second embodiment can therefore be applied to zero-shot event information scenarios, that is, event data on which the model has not been trained can still be extracted quickly and accurately.
It should be noted that all encoders involved in the embodiments of the present invention are the same encoder. As an embodiment, the system further includes a preconfigured word serial number mapping table for storing the mapping between words and serial numbers, each word corresponding to a unique serial number. The encoder converts each word of the text to be encoded into its corresponding serial number based on the word serial number mapping table, and then encodes each serial number into a vector of preset dimension based on the position information of each serial number in the text to be encoded; if the encoder also receives history information, it encodes each serial number into a vector of preset dimension based on both the history information and the position information. Specifically, the encoder may be a pre-trained language model, such as BERT, RoBERTa or ALBERT.
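The word serial number mapping can be sketched as follows (illustrative only; the 1-based numbering and the fallback id 0 for characters outside the table are hypothetical choices not stated in the source):

```python
def build_word_id_table(vocabulary):
    """Each word/character gets a unique serial number; id 0 is reserved as a
    hypothetical fallback for characters outside the table."""
    return {word: i + 1 for i, word in enumerate(vocabulary)}

def text_to_ids(text, table):
    """First stage of the encoder: map each character to its serial number."""
    return [table.get(ch, 0) for ch in text]
```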
It should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of some of the steps may be rearranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A target argument information extraction data processing system characterized in that,
the event type mapping table is used for storing mapping records of trigger words and event types, and the mapping records of the trigger words and the event types comprise trigger word fields and event type fields; the event argument role configuration table is used for storing event argument role information records, and the event argument role information records comprise event type fields, argument role fields and argument role priority fields;
when the processor executes the computer program, the following steps are realized:
step S1, determining a target event type corresponding to a preset target trigger word according to the event type mapping table, and acquiring a text to be processed corresponding to the target trigger word;
step S2, determining a target argument role list {B_1, B_2, …, B_M} corresponding to the target event type according to the event argument role configuration table, where B_1, B_2, …, B_M are in decreasing order of priority, B_m is the mth target argument role, the value range of m is 1 to M, and M is the number of target argument roles corresponding to the target event type; initializing m to 1 and initializing history information h_m = A_m0;
step S3, constructing input information of an argument information extraction model based on A_m0, B_m and h_m, and extracting the mth argument information C_m from the text to be processed based on the output information of the argument information extraction model;
step S4, comparing m and M; if m < M, setting m to m + 1,
Figure FDA0003242932870000011
and returning to step S3; if m = M, determining {C_1, C_2, …, C_M} as the target argument information.
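The loop of steps S2-S4 can be sketched as follows. The extraction model is stood in for by a callback, and because the history-update formula of the claim is only available as an image (Figure FDA0003242932870000011), simple concatenation of the extracted information is assumed here:

```python
def extract_target_arguments(trigger, roles_by_priority, text, extract_fn):
    """Claim 1, steps S2-S4 sketch: iterate over the target argument roles
    B_1..B_M in decreasing priority, feeding already-extracted information
    back in as history h_m. extract_fn(trigger, role, history, text) stands
    in for the argument information extraction model."""
    history = trigger                    # history initialised from the trigger side
    results = []
    for role in roles_by_priority:       # B_1, B_2, ..., B_M
        c_m = extract_fn(trigger, role, history, text)   # mth argument info C_m
        results.append(c_m)
        # assumed history update; the claim's exact formula is an image
        history = history + " " + c_m
    return results                       # {C_1, ..., C_M}: target argument information
```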
2. The system of claim 1,
the argument information extraction model is obtained by training based on a preset third text sample training set and a third neural network model architecture, where the third text sample training set includes Y third text samples {E_1, E_2, …, E_Y}; E_y is the yth third text sample, the sample trigger word corresponding to E_y is EA_y, the sample argument roles corresponding to E_y are {BE_1, BE_2, …, BE_yM}, and the sample argument information corresponding to E_y is {CE_1, CE_2, …, CE_yM}, where the value range of y is 1 to Y, BE_1, BE_2, …, BE_yM are in decreasing order of priority, BE_i is the ith sample argument role corresponding to E_y, CE_i is the ith sample argument information corresponding to E_y, BE_i corresponds to CE_i, and the value range of i is 1 to yM; the third neural network model architecture is a sequence labeling model architecture;
when the processor executes the computer program, the following steps are also realized:
step S100, initializing y to 1;
step S200, initializing i to 1, and initializing sample history information Bh_y = EA_y;
step S300, generating a corresponding sample argument role question text BF_i based on BE_i and EA_y;
step S400, inputting BF_i, E_y and Bh_y into a preset encoder, encoding E_y and BF_i to obtain EL_y, and inputting EL_y into the third neural network model architecture to obtain a corresponding second prediction output labeling sequence LC_i, where the position in LC_i corresponding to Bh_y is labeled 0;
step S500, generating a second actual output labeling sequence LD_i based on E_y and CE_i; in the second actual output labeling sequence, the positions of E_y corresponding to CE_i are labeled 1 and the non-CE_i positions are labeled 0;
step S600, judging, based on LC_i and LD_i, whether the currently trained third neural network model architecture reaches a preset model precision; if so, determining the currently trained third neural network model architecture as the argument information extraction model; otherwise, executing step S700;
step S700, adjusting the current third neural network model architecture parameters based on LC_i and LD_i, and comparing i and yM; if i < yM, setting i to i + 1,
Figure FDA0003242932870000021
and returning to step S300; if i = yM, executing step S800;
step S800, comparing y and Y; if y < Y, setting y to y + 1 and returning to step S200; if y = Y, returning to step S100.
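The second actual output labeling sequence of step S500 can be sketched as follows, assuming character-level labeling and a contiguous occurrence of the sample argument information in the sample text (the function name is illustrative):

```python
def actual_label_sequence(text, argument):
    """Step S500 sketch: build LD_i over sample text E_y -- positions covered
    by the sample argument information CE_i are labeled 1, all others 0."""
    labels = [0] * len(text)
    start = text.find(argument)          # first occurrence of CE_i in E_y
    if start != -1:
        for k in range(start, start + len(argument)):
            labels[k] = 1
    return labels
```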
3. The system of claim 2,
in step S400, inputting BF_i, E_y and Bh_y into the preset encoder and encoding E_y and BF_i to obtain EL_y includes:
step S401, splicing BF_i and E_y with a preset separator, the encoder encoding the spliced BF_i and E_y based on Bh_y and the character position information corresponding to BF_i and E_y, to obtain EL_y.
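The splicing of step S401 can be sketched as follows; the function name is illustrative, and the per-character position list stands in for the character position information the encoder consumes:

```python
def splice_with_positions(question_text, sample_text, sep="[SEP]"):
    """Step S401 sketch: splice the argument role question text BF_i and the
    sample text E_y with the preset separator; the encoder additionally sees
    each character's position in the spliced string."""
    spliced = question_text + sep + sample_text
    positions = list(range(len(spliced)))   # character position information
    return spliced, positions
```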
4. The system of claim 2,
step S3 includes:
step S31, generating the mth argument role question text F_m based on A_m0 and B_m, inputting the text to be processed, F_m and h_m into the preset encoder, encoding the text to be processed and F_m to obtain L_m, and inputting L_m into the argument information extraction model to obtain a corresponding second prediction output labeling sequence LC_m;
step S32, extracting the mth argument information C_m from the text to be processed based on LC_m and L_m.
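The span recovery of step S32 can be sketched as follows, assuming LC_m is a character-level 0/1 labeling sequence aligned with the text to be processed (the function name is an assumption):

```python
def decode_argument_spans(text, labels):
    """Step S32 sketch: recover argument information from a 0/1 labeling
    sequence by collecting maximal runs of positions labeled 1."""
    spans, start = [], None
    for idx, tag in enumerate(labels):
        if tag == 1 and start is None:
            start = idx                    # run of 1-labels begins
        elif tag != 1 and start is not None:
            spans.append(text[start:idx])  # run ends; emit the span
            start = None
    if start is not None:
        spans.append(text[start:])         # run extends to the end of the text
    return spans
```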
5. The system of claim 4,
in step S31, inputting the text to be processed, F_m and h_m into the preset encoder and encoding the text to be processed and F_m to obtain L_m includes:
step S311, splicing the text to be processed and F_m with a preset separation code, and encoding the spliced text to be processed and F_m based on the position information of the characters of the spliced text to be processed and F_m and the current history information h_m, to obtain L_m.
6. The system of claim 3 or 5,
the preset separator is [SEP], and the system is further provided with a preset mask algorithm, the mask algorithm being configured to mask the input part before [SEP], the masked part being only encoded without prediction.
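The mask of claim 6 can be sketched as follows, assuming a character-level mask in which 0 marks positions that are only encoded (no prediction) and 1 marks positions eligible for prediction; whether the [SEP] characters themselves are masked is an assumption here:

```python
def prediction_mask(spliced_text, sep="[SEP]"):
    """Claim 6 sketch: mask the input part before (and including) the
    preset separator [SEP]; 0 = encode only, 1 = predict."""
    cut = spliced_text.index(sep) + len(sep)
    return [0] * cut + [1] * (len(spliced_text) - cut)
```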
7. The system of claim 1,
when the processor executes the computer program, the following steps are also realized:
step S301, determining the priority of the argument roles corresponding to each event type whose argument role priorities are to be judged, based on a sample argument role set formed by all sample argument roles in a preset third text sample training set, where the sample argument role set is {BEX_1, BEX_2, …, BEX_Z}, BEX_z is the zth sample argument role, the value range of z is 1 to Z, and Z is the number of sample argument roles in the sample argument role set; the argument role set corresponding to the event type whose argument role priority is to be judged is {BX_1, BX_2, …, BX_W}, BX_w is the wth argument role corresponding to the event type whose argument role priority is to be judged, the value range of w is 1 to W, and W is the number of argument roles corresponding to the event type whose argument role priority is to be judged;
the step S301 specifically includes:
step S302, inputting BX_w into a preset encoder for encoding, and pooling the encoding result to obtain the to-be-judged argument role pooled encoding BX_w';
step S303, inputting BEX_z into the preset encoder for encoding, and pooling the encoding result to obtain the sample argument role pooled encoding BEX_z', where the vector dimensions of BX_w' and BEX_z' are the same, and computing the cosine similarity cos(BX_w', BEX_z');
step S304, obtaining the priority weight P_w corresponding to BX_w,
Figure FDA0003242932870000031
step S305, generating the priority of the argument roles corresponding to the event type whose argument role priority is to be judged, in descending order of the priority weight P_w corresponding to each BX_w.
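Steps S302-S305 can be sketched as follows. The exact weight formula of step S304 is only available as an image (Figure FDA0003242932870000031), so the mean cosine similarity over all sample role encodings is assumed here; the pooled encodings are passed in as plain vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_roles_by_priority(role_encodings, sample_encodings):
    """Steps S302-S305 sketch: score each argument role BX_w' against every
    sample role BEX_z' by cosine similarity, take the mean as the assumed
    priority weight P_w, and return the roles in descending weight, i.e. the
    generated priority order."""
    weights = {
        role: sum(cosine(vec, s) for s in sample_encodings.values())
              / len(sample_encodings)
        for role, vec in role_encodings.items()
    }
    return sorted(weights, key=weights.get, reverse=True)
```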
8. The system of any one of claims 2-5 or 7,
the system further includes a pre-configured word sequence number mapping table for storing the mapping relationship between words and sequence numbers, where each word corresponds to a unique sequence number; the encoder converts each word of the text to be encoded into its corresponding sequence number based on the word sequence number mapping table, and then encodes each sequence number into a vector of a preset dimension based on the position information of that sequence number in the text to be encoded; if the encoder also receives history information, it encodes each sequence number into a vector of a preset dimension based on both the history information and the position information of that sequence number in the text to be encoded.
9. The system of claim 8,
the encoder is a pre-trained language model, and the pre-trained language model includes a bert model, a roberta model and an albert model.
CN202111024884.0A 2021-09-02 2021-09-02 Target argument information extraction data processing system Active CN113722462B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111024884.0A CN113722462B (en) 2021-09-02 2021-09-02 Target argument information extraction data processing system


Publications (2)

Publication Number Publication Date
CN113722462A true CN113722462A (en) 2021-11-30
CN113722462B CN113722462B (en) 2022-03-04

Family

ID=78680813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111024884.0A Active CN113722462B (en) 2021-09-02 2021-09-02 Target argument information extraction data processing system

Country Status (1)

Country Link
CN (1) CN113722462B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170337474A1 (en) * 2016-05-20 2017-11-23 Disney Enterprises, Inc. Systems and Methods for Determining Semantic Roles of Arguments in Sentences
CN109325228A (en) * 2018-09-19 2019-02-12 苏州大学 English event trigger word abstracting method and system
CN109582949A (en) * 2018-09-14 2019-04-05 阿里巴巴集团控股有限公司 Event element abstracting method, calculates equipment and storage medium at device
CN112116075A (en) * 2020-09-18 2020-12-22 厦门安胜网络科技有限公司 Event extraction model generation method and device and text event extraction method and device
CN112231447A (en) * 2020-11-21 2021-01-15 杭州投知信息技术有限公司 Method and system for extracting Chinese document events
CN112580346A (en) * 2020-11-17 2021-03-30 深圳追一科技有限公司 Event extraction method and device, computer equipment and storage medium
CN112861527A (en) * 2021-03-17 2021-05-28 合肥讯飞数码科技有限公司 Event extraction method, device, equipment and storage medium
CN112966079A (en) * 2021-03-02 2021-06-15 中国电子科技集团公司第二十八研究所 Event portrait oriented text analysis method for dialog system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zeng Y, Feng Y, Ma R, et al.: "Scale Up Event Extraction Learning via Automatic Training Data Generation", Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, Louisiana, USA, AAAI Press *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115062137A (en) * 2022-08-15 2022-09-16 中科雨辰科技有限公司 Data processing system for determining abnormal text based on active learning
CN115062137B (en) * 2022-08-15 2022-11-04 中科雨辰科技有限公司 Data processing system for determining abnormal text based on active learning



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant