CN114841162B - Text processing method, device, equipment and medium - Google Patents

Text processing method, device, equipment and medium

Info

Publication number
CN114841162B
CN114841162B (application CN202210557278.3A)
Authority
CN
China
Prior art keywords
text
word
event
processed
trigger
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210557278.3A
Other languages
Chinese (zh)
Other versions
CN114841162A
Inventor
张星星
黄畅然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202210557278.3A
Publication of CN114841162A
Application granted
Publication of CN114841162B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Machine Translation (AREA)

Abstract

The application provides a text processing method, apparatus, device and medium, relating to the field of natural language processing. The method comprises the following steps: acquiring a text to be processed; determining text feature information of the text to be processed, the text feature information comprising first feature information for trigger word recognition; inputting the first feature information into a fully connected layer of a trigger word recognition model to obtain first probability values corresponding to a plurality of word combinations, wherein each first probability value represents the probability that the corresponding word combination is an event trigger word, and each word combination is a word in the text to be processed that contains a preset trigger keyword; and determining, through a classification layer of the trigger word recognition model, the word combination corresponding to the largest of the first probability values as the event trigger word of the text to be processed. Embodiments of the application can improve the accuracy of event extraction.

Description

Text processing method, device, equipment and medium
Technical Field
The present disclosure relates to the field of natural language processing, and in particular, to a text processing method, apparatus, device, and medium.
Background
Event extraction, one of the research directions in the field of natural language processing, extracts key or summarized event information from text to be processed, and is widely applied in customer service, office automation, professional domains and the like.
In one related art, event extraction may be performed by first segmenting the text to be processed into words and then performing template matching on the segmentation result; however, the accuracy of this event extraction technique is relatively low.
Therefore, a technical solution capable of improving the accuracy of event extraction is needed.
It should be noted that the information disclosed in the foregoing background section is only intended to enhance understanding of the background of the present application, and therefore may include information that does not constitute prior art already known to those of ordinary skill in the art.
Disclosure of Invention
The application provides a text processing method, apparatus, device and medium, which at least to some extent overcome the problem of low event extraction accuracy in the related art.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned in part by the practice of the application.
According to one aspect of the present application, there is provided a text processing method including:
Acquiring a text to be processed;
determining text characteristic information of a text to be processed, wherein the text characteristic information comprises first characteristic information for trigger word recognition;
inputting the first feature information into a fully connected layer of a trigger word recognition model to obtain first probability values corresponding to a plurality of word combinations, wherein each first probability value represents the probability that the corresponding word combination is an event trigger word, and each word combination is a word in the text to be processed that contains a preset trigger keyword;
and determining event trigger words of the text to be processed based on the first probability values corresponding to the word combinations through a classification layer of the trigger word recognition model.
In one embodiment of the present application, the text feature information further includes second feature information for event type determination,
after determining the text feature information of the text to be processed, the method further comprises:
inputting the second feature information into a fully connected layer of an event classification model to obtain second probability values corresponding to a plurality of preset event types, wherein each second probability value represents the probability that the text to be processed belongs to the corresponding preset event type;
and determining, through a classification layer of the event classification model, the event type to which the text to be processed belongs based on the second probability values corresponding to the preset event types.
In one embodiment of the present application, determining text feature information of text to be processed includes:
performing multi-stage text segmentation on the text to be processed to obtain a multi-stage text segmentation result;
and extracting features of the multi-stage text segmentation result to obtain text feature information.
In one embodiment of the present application, determining text feature information of text to be processed includes:
performing multi-stage text segmentation on the text to be processed to obtain a multi-stage text segmentation result;
feature extraction is carried out on each stage of text segmentation result, and a feature extraction result corresponding to the stage of text segmentation result is obtained;
and carrying out feature fusion on the feature extraction results corresponding to the multi-stage text segmentation results to obtain text feature information.
In one embodiment of the present application, feature extraction is performed on each stage of text segmentation result to obtain a feature extraction result corresponding to the stage of text segmentation result, including:
inputting each stage of text segmentation result into a feature extraction model to obtain a feature vector corresponding to the stage of text segmentation result;
generating a text normal vector of the corresponding level of text based on words adjacent to the center word of the text to be processed;
and generating a feature extraction result corresponding to that level of text segmentation result based on the feature vector and the text normal vector.
In one embodiment of the present application, feature fusion is performed on feature extraction results corresponding to each of the multi-level text segmentation results, so as to obtain text feature information, including:
and inputting the feature extraction results corresponding to the multi-stage text segmentation results into a preset feature fusion model to obtain text feature information.
In one embodiment of the present application, after determining the event trigger word of the text to be processed based on the first probability values corresponding to the plurality of word combinations, the method further includes:
and correspondingly adding the event trigger words and the text to be processed to a trigger word sample set to obtain a new trigger word sample set, so as to optimize the trigger word recognition model by using the new trigger word sample set.
In one embodiment of the present application, after determining, based on the second probability values corresponding to the plurality of preset event types, the event type to which the text to be processed belongs, the method further includes:
and correspondingly adding the event type to which the text to be processed belongs and the text to be processed into an event classification sample set to obtain a new event classification sample set so as to optimize an event classification model by using the new event classification sample set.
According to another aspect of the present application, there is provided a text processing apparatus including:
an acquisition module, configured to acquire a text to be processed;
an information determining module, configured to determine text feature information of the text to be processed, wherein the text feature information includes first feature information for trigger word recognition;
a first computing module, configured to input the first feature information into a fully connected layer of a trigger word recognition model to obtain first probability values corresponding to a plurality of word combinations, wherein each first probability value represents the probability that the corresponding word combination is an event trigger word, and each word combination is a word in the text to be processed containing a preset trigger keyword; and
a trigger word determining module, configured to determine, through a classification layer of the trigger word recognition model, an event trigger word of the text to be processed based on the first probability values corresponding to the plurality of word combinations.
According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the text processing method described above via execution of the executable instructions.
According to yet another aspect of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described text processing method.
After obtaining the text feature information of the text to be processed, the text processing method, apparatus, device and medium provided by the embodiments of the application can use the fully connected layer of the trigger word recognition model to calculate first probability values for a plurality of word combinations containing a preset trigger keyword, and determine the word combination corresponding to the largest first probability value as the event trigger word of the text to be processed. Because event trigger words often contain preset trigger keywords, the trigger word recognition model can select the most suitable word combination as the event trigger word from among word combinations that each have some probability of being the event trigger word, thereby improving the recognition accuracy of event trigger words and, in turn, the accuracy of event extraction.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 shows a schematic diagram of a text processing scenario provided in an embodiment of the present application;
FIG. 2 shows a flow chart of a text processing method in an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a feature extraction manner according to an embodiment of the present application;
FIG. 4 is a schematic diagram of processing logic of a full connection layer of a trigger word recognition model according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of another text processing method according to an embodiment of the present application;
FIG. 6 shows a schematic diagram of text processing logic provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a text processing device according to an embodiment of the present application; and
fig. 8 shows a block diagram of an electronic device in an embodiment of the application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are only schematic illustrations of the present application and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
It should be understood that the various steps recited in the method embodiments of the present application may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present application is not limited in this respect.
It should be noted that the terms "first," "second," and the like herein are merely used for distinguishing between different devices, modules, or units and not for limiting the order or interdependence of the functions performed by such devices, modules, or units.
It should be noted that references to "one" or "a plurality" in this application are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
Event extraction is a language processing method that extracts events of interest to a user from unstructured information and presents them to the user in a structured manner. It can identify events of specific types from text, so as to mine and extract the key points and topics of the text to be processed.
In one related art, an input text is segmented, and then event extraction is performed in a manner of template matching of the segmentation result.
However, this method has poor flexibility and often causes missed identifications, which affects event extraction precision.
Therefore, a technical solution capable of improving event extraction accuracy is needed.
The inventors found that in the event extraction process, trigger words must be identified first, so the recognition accuracy of event trigger words often affects the accuracy of event classification; how to improve the recognition accuracy of event trigger words therefore becomes an urgent problem to be solved.
However, in actual event extraction, word boundaries in languages such as Chinese are relatively fuzzy, and when word segmentation is wrong or misjudged, the recognition accuracy of the subsequent trigger words is often affected.
Based on the above, the embodiments of the application provide a text processing scheme which, through a trigger word recognition model, can select the word combination most likely to be the event trigger word, from among word combinations that each have some probability of being the event trigger word, as the event trigger word, thereby improving the recognition accuracy of event trigger words and further improving the accuracy of event extraction.
Before starting to introduce embodiments of the present application, description will be made of related technical terms.
Event extraction: it means that events of interest to the user are extracted from the text describing the event information and presented in a structured form.
Events: a description, in natural language form, of a specific event that objectively occurred, typically a sentence or a group of sentences. An event may be composed of event information such as event trigger words and event types.
Event trigger words: refers to a word in an event that can represent the occurrence of the event, which may be a verb or noun.
Having introduced the concepts described above, a text processing scenario designed in an embodiment of the present application will be described next.
Fig. 1 shows a schematic diagram of a text processing scenario provided in an embodiment of the present application. As shown in fig. 1, a user 10 may input a text to be processed into a text processing device 20. After determining the text feature information of the text to be processed, the text processing device 20 may calculate, through the fully connected layer of the trigger word recognition model, a first probability value for each of a plurality of word combinations containing a preset trigger keyword, select, through the classification layer, the word combination corresponding to the largest first probability value as the event trigger word, and then output output information 30 including the event trigger word. Illustratively, if the text to be processed is "The school leader met with the students' parents.", the output information 30 may include the event trigger word "meet".
In some embodiments, after determining the text feature information of the text to be processed, the text processing device 20 may further determine, through the fully connected layer of the event classification model, the probability value that the text to be processed belongs to each preset event type, determine the event type of the text to be processed based on these probability values, and then output output information 30 including the event type. Continuing the previous example, the output information 30 may include the event type "conference event".
After the scenario of the embodiment of the present application is introduced, the text processing method, apparatus, device and medium of the embodiment of the present application are sequentially described.
The embodiments of the application provide a text processing method, which can be executed by any electronic device with text processing capability, such as a terminal device like a computer or a palmtop computer, or a background device like a server; this is not specifically limited.
Fig. 2 shows a flowchart of a text processing method in an embodiment of the present application, and as shown in fig. 2, the text processing method provided in the embodiment of the present application includes the following steps S210 to S240.
S210, acquiring a text to be processed.
For text to be processed, it may be text that requires event extraction. For example, it may be text containing event information.
By way of example, it may be text in a language such as Chinese. It should be noted that the text to be processed in the embodiments of the application may also be in other languages, which is not specifically limited.
The obtaining mode of the text to be processed can be input by a user in real time or obtained from a document, and the obtaining mode is not limited.
S220, determining text feature information of the text to be processed.
For text feature information, it may be possible to extract information with event features from the text to be processed. The text feature information may include first feature information for trigger word recognition. In some embodiments, to improve the accuracy of the determination of the event type, the text feature information may further include second feature information for making the event type determination. It should be noted that the first feature information and the second feature information may be the same or different, which is not described in detail.
The text feature information will be described next in connection with S220.
In some embodiments, the text feature information may be obtained by performing feature extraction on a text segmentation result of the text to be processed.
Accordingly, S220 may include the following steps A11 and A12.
And step A11, performing multi-stage text segmentation processing on the text to be processed to obtain a multi-stage text segmentation result.
For the multi-level text segmentation processing, the text to be processed may be segmented, from multiple dimensions, into multiple text units of different lengths, i.e., the multi-level text segmentation results.
In one example, to improve event extraction accuracy, the multi-level text segmentation processing may include word segmentation and character segmentation. Accordingly, the multi-level text segmentation result may include a word segmentation result and a character segmentation result.
Through word segmentation and character segmentation, the text feature information determined from the word segmentation result and the character segmentation result can effectively combine the features of words and characters, so as to avoid the influence of fuzzy Chinese word boundaries on the extraction of event trigger words, event types and the like; even when Chinese word segmentation boundaries cause misjudgment, event trigger words can still be extracted accurately and events correctly classified, which improves event extraction precision.
In another example, to further improve event extraction accuracy, in the case where the text to be processed includes a plurality of sentences, the multi-level text segmentation processing may include sentence splitting, word segmentation and character segmentation.
For example, the text to be processed may first be split into a plurality of sentences. Word segmentation and character segmentation are then performed on each sentence to obtain the word segmentation result and the character segmentation result of that sentence. Operations such as event trigger word extraction and event type determination are then performed on each sentence separately.
After the multi-stage text segmentation process is described, step a11 will be specifically described next.
In one embodiment, a language model may be utilized to perform the multi-level text segmentation of the text to be processed. For example, in the case where the text to be processed is Chinese text, the language model may be the jieba Chinese word segmentation model.
In one example, to improve text segmentation accuracy, the jieba word segmentation model is used to first split the text into sentences to obtain a sentence splitting result; a syntactic tree structure is then constructed from the sentence splitting result, and word segmentation and character segmentation are further performed using the syntactic tree structure to obtain the segmentation results.
Illustratively, taking the text to be processed as "The school leader met with the students' parents." as an example, the word segmentation result may be: school / leader / meet / student / parent. The character segmentation result may be: school / leader / connect / see / school / student / home / long (the single characters that make up each word of the Chinese text). The character segmentation may be performed on the basis of the word segmentation result or directly on the sentence splitting result, which is not specifically limited.
It should be noted that other Chinese language models such as SnowNLP, PKUSeg, THULAC and HanLP may also be used, which is not specifically limited.
It should be further noted that, in the embodiments of the present application, text segmentation may be performed in other manners, for example, text recognition, which is not limited in particular.
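For illustration only, the multi-level segmentation described above could be sketched in Python as follows, assuming the jieba library is installed; the regex-based sentence splitting is a simple stand-in for the syntactic-tree construction mentioned above, which is not reproduced here.

```python
# A minimal sketch of multi-level text segmentation, assuming jieba is
# installed; the regex sentence split stands in for the syntactic tree.
import re
import jieba

def multi_level_segment(text: str) -> list:
    # Split at Chinese (and Western) end-of-sentence punctuation.
    sentences = [s for s in re.split(r"(?<=[。！？!?])", text) if s.strip()]
    results = []
    for sent in sentences:
        results.append({
            "sentence": sent,
            "words": list(jieba.cut(sent)),  # word segmentation result
            "chars": list(sent),             # character segmentation result
        })
    return results
```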
And step A12, extracting features of the multi-stage text segmentation result to obtain text feature information.
For feature extraction, in one embodiment, it may be performed using a DMCNN (Dynamic Multi-pooling Convolutional Neural Network), an RNN (Recurrent Neural Network), One-Hot encoding, TF-IDF (Term Frequency-Inverse Document Frequency) or the like, and the extraction result is taken as the text feature information; the feature extraction method is not specifically limited.
In one example, taking the DMCNN model as an example, a synthesized feature may be obtained after convolution processing, nonlinear transformation and pooling processing, and then used as the first feature information.
For example, after computing the convolution over the input layer of the text to be processed, the tanh activation function may be used to apply a nonlinear transformation. The convolution result after the nonlinear transformation is then pooled in two parts, and the two pooled results are concatenated to obtain the synthesized feature.
In one embodiment, step a12 may specifically include steps a121 to a123.
Step A121, inputting each stage of text segmentation result into a feature extraction model to obtain a feature vector corresponding to that stage of text segmentation result. For example, the word segmentation result may be input into the feature extraction model to obtain a word synthesis vector (the feature vector corresponding to the word segmentation result), and the character segmentation result may be input into the feature extraction model to obtain a character synthesis vector (the feature vector corresponding to the character segmentation result).
The feature extraction model may be referred to the above in the embodiments of the present application, and will not be described herein.
Taking the case where the feature extraction model is a DMCNN model as an example, fig. 3 shows a schematic diagram of a feature extraction manner provided in an embodiment of the present application. As shown in fig. 3, taking the text to be processed as "The school leader met with the students' parents." as an example, for each word segment, a vector corresponding to the segment can be generated from the word vector of the segment and the position of the segment in the text to be processed. The position of each word segmentation result in the text to be processed may be its distance from the center word. The center word may be determined according to the part of speech of the word segmentation result; for example, the center word may be a verb in the text to be processed, or a preset word or character in a preset character library. It should be noted that the preset character library may be a character library specific to a preset event type, and the words in the preset character library can be determined according to the actual scene and the specific event type, which is not specifically limited. For example, if "meet" is the center word and "school" is two words before "meet", the position corresponding to "school" is "-2".
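As a small illustrative sketch (not the patent's exact formulation), the relative-position feature described above can be computed as a signed distance to the center word:

```python
# A sketch of the relative-position feature: each segment is tagged with
# its signed distance to the center word, so "school" two positions before
# the center word "meet" receives -2. The center word index is assumed known.
def relative_positions(tokens, center_idx):
    return [i - center_idx for i in range(len(tokens))]

# Example (assumed tokens): ["school", "leader", "meet", "student", "parent"]
# with center word "meet" at index 2 -> [-2, -1, 0, 1, 2]
```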
With continued reference to fig. 3, after convolution feature mapping and nonlinear transformation are performed on the vectors corresponding to the plurality of word segments, the convolution result after the nonlinear transformation is pooled in two parts, namely max(c_11) and max(c_12), and the pooled results max(c_11) and max(c_12) are concatenated to obtain the word synthesis feature.
Similarly, the character synthesis feature can be obtained by inputting the character segmentation result into the DMCNN model. The specific generation manner of the character synthesis feature is similar to that of the word feature vector and is not repeated here.
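A simplified sketch of this dynamic multi-pooling step, assuming PyTorch and a feature map already produced by convolution and tanh; using the trigger keyword's position as the split point is an assumption consistent with the figure:

```python
# An illustrative sketch of dynamic multi-pooling, assuming PyTorch; the
# feature map is split at the trigger keyword position and max-pooled in
# two parts, which are then concatenated into the synthesis feature.
import torch

def dynamic_multi_pooling(conv_map: torch.Tensor, split: int) -> torch.Tensor:
    # conv_map: (num_filters, seq_len), output of convolution + tanh
    left = conv_map[:, : split + 1].max(dim=1).values    # max(c_11)
    right = conv_map[:, split + 1 :].max(dim=1).values   # max(c_12)
    return torch.cat([left, right], dim=0)               # synthesis feature
```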
Step A122, generating a text normal vector of the corresponding level of text based on words adjacent to the center word of the text to be processed.
For example, if the multi-level text includes a word segmentation result and a character segmentation result, the text normal vector may include a lexical normal vector and a character normal vector. The lexical normal vector is used to represent the part-of-speech features of the word segmentation result of the text to be processed, and the character normal vector is used to represent the character features of the character segmentation result of the text to be processed.
Here, for the center word, reference may be made to the foregoing content of the embodiments of the application, which is not repeated.
Illustratively, continuing the previous example, the neighboring words of "meet" may include "leading" and "having", and the word vectors of these neighbors may be concatenated to obtain the lexical normal vector.
It should be noted that the text normal vector may also be generated in other manners from the neighboring words and neighboring characters, which is not specifically limited.
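One possible way to form such a vector, sketched under the assumption that an embedding lookup `embed` with dimension `dim` is available (both are illustrative names, not from the patent):

```python
# A sketch of a lexical normal vector built by concatenating the embeddings
# of the words adjacent to the center word; `embed` and `dim` are assumed.
import numpy as np

def lexical_normal_vector(tokens, center_idx, embed, dim=128):
    zero = np.zeros(dim)
    left = embed(tokens[center_idx - 1]) if center_idx > 0 else zero
    right = embed(tokens[center_idx + 1]) if center_idx + 1 < len(tokens) else zero
    return np.concatenate([left, right])
```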
And step A123, generating a feature extraction result corresponding to the stage of text segmentation result based on the synthesized feature vector and the text normal vector.
In one example, the synthesized vector corresponding to each stage of text segmentation result and the text normal vector may be spliced to obtain a feature extraction result corresponding to the stage of text segmentation result.
Illustratively, with continued reference to FIG. 3, the word synthesis feature and the lexical normal vector may be concatenated to obtain the token-level feature vector f_word, i.e., the feature extraction result corresponding to the word segmentation result.
As yet another example, the character synthesis feature and the character normal vector may be concatenated to obtain the character-level feature vector f_char, i.e., the feature extraction result corresponding to the character segmentation result.
It should be noted that, by using the DMCNN model, feature dimensions of text features can be increased, so that text features of a text to be processed can be fully expressed, and thus, accuracy of event extraction such as event trigger words, event type determination and the like can be improved.
It should also be noted that, in the case where the text feature information includes the first feature information and the second feature information, the first feature information and the second feature information may be extracted using two feature extraction models, or the same feature extraction model may output the two results, namely the first feature information and the second feature information. This is not specifically limited.
Through steps A11 and A12, by means of multi-level text segmentation, the text feature information can effectively combine the features of the multi-level text segmentation results, so as to avoid the influence of fuzzy word boundaries on the extraction of event trigger words, event types and the like; even when segmentation boundaries cause misjudgment, event trigger words can still be extracted accurately and events correctly classified, which improves event extraction precision.
In other embodiments, S220 may include step A21 and step A22.
And step A21, performing primary text segmentation on the text to be processed to obtain a text segmentation result.
For example, only the text to be processed may be subjected to word segmentation or word segmentation, and the specific content thereof may be referred to the related description of the above portion of the embodiments of the present application, which is not repeated herein.
And step A22, extracting features of the text segmentation result to obtain text feature information.
It should be noted that, the specific content of step a22 may be referred to the related description of the above portion of the embodiments of the present application, and will not be described herein.
In still other embodiments, the text feature information may be information obtained by feature extraction of text to be processed.
Accordingly, S220 may include step A31.
And step A31, extracting features of the text to be processed to obtain text feature information.
It should be noted that, the specific content of the step a31 may be referred to the related description of the foregoing part of the embodiment of the present application, and will not be described herein again.
In still other embodiments, S220 may include steps A41 to A43.
Step A41, performing multi-stage text segmentation on the text to be processed to obtain a multi-stage text segmentation result.
It should be noted that, step a41 is similar to step a11, and reference may be made to the description related to step a11, which is not repeated herein.
And step A42, extracting the characteristics of each stage of text segmentation result to obtain a characteristic extraction result corresponding to the stage of text segmentation result.
It should be noted that, step a42 is similar to step a12, and reference may be made to the description related to step a12, which is not repeated herein.
In one embodiment, step a42 may include steps a421 through a423.
And step A421, inputting each stage of text segmentation result into a feature extraction model to obtain a feature vector corresponding to the stage of text segmentation result.
Step a421 is similar to step a121, and reference may be made to the description of step a121, which is not repeated herein.
Step A422, generating a text normal vector of the level text based on the adjacent words of the center word of the text to be processed.
Step a422 is similar to step a122, and reference may be made to the description of step a122, which is not repeated herein.
Step A423, based on the feature vector and the text normal vector, generating a feature extraction result corresponding to the stage text segmentation result.
Step a423 is similar to step a123, and reference may be made to the description related to step a123, which is not repeated herein.
And step A43, carrying out feature fusion on the feature extraction results corresponding to the multi-stage text segmentation results to obtain text feature information.
In step A43, feature fusion may be performed using a feature fusion model or a feature fusion algorithm; the specific feature fusion method is not limited. For example, a multi-task feature fusion model may be provided.
In one embodiment, step a43 includes: and inputting the feature extraction results corresponding to the multi-stage text segmentation results into a preset feature fusion model to obtain text feature information.
For example, in the case where the text feature information includes the first feature information and the second feature information, the preset feature fusion model may be a multitasking feature fusion model.
For example, the multi-task feature fusion model corresponds to the following formulas (1)-(4):

z_N = s(W_N f'_char + U_N f'_word + b_N)    (1)

z_T = s(W_T f'_char + U_T f'_word + b_T)    (2)

f_N = z_N f'_char + (1 - z_N) f'_word    (3)

f_T = z_T f'_char + (1 - z_T) f'_word    (4)

wherein s(·) denotes the sigmoid function; W_N, U_N, W_T and U_T are model weight parameters of the preset feature fusion model, and b_N and b_T are bias parameters of the preset feature fusion model. They may be determined by learning, for example through back-propagation.

As can be seen from formulas (1)-(4), the token-level feature vector f_word and the character-level feature vector f_char are first mapped to the same dimension, with f_word mapped to f'_word and f_char mapped to f'_char. After the mapped vectors f'_word and f'_char are input into the multi-task feature fusion model, the first feature information f_N in vector form and the second feature information f_T in vector form can be computed through the above formulas. It should be noted that the mapping to the same dimension may also be omitted, which is not limited.
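Formulas (1)-(4) amount to a gated combination of the two feature vectors. A minimal PyTorch sketch, assuming f'_char and f'_word have already been mapped to a common dimension d, might look like this:

```python
# A minimal sketch of the gated multi-task fusion in formulas (1)-(4),
# assuming PyTorch and inputs already mapped to a common dimension d.
import torch
import torch.nn as nn

class MultiTaskFusion(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        # Each Linear realizes one (W, U, b) triple over [f_char ; f_word].
        self.gate_n = nn.Linear(2 * d, d)  # W_N, U_N, b_N
        self.gate_t = nn.Linear(2 * d, d)  # W_T, U_T, b_T

    def forward(self, f_char: torch.Tensor, f_word: torch.Tensor):
        pair = torch.cat([f_char, f_word], dim=-1)
        z_n = torch.sigmoid(self.gate_n(pair))   # formula (1)
        z_t = torch.sigmoid(self.gate_t(pair))   # formula (2)
        f_n = z_n * f_char + (1 - z_n) * f_word  # formula (3)
        f_t = z_t * f_char + (1 - z_t) * f_word  # formula (4)
        return f_n, f_t
```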
Through the multi-task feature fusion model, operations such as gradient descent and weighting to different degrees are applied to the token-level and character-level feature vectors when identifying event trigger words, so that the computed vector highlights the features and behavior of trigger words; the computed first feature information is thus better suited to trigger word recognition, further improving the accuracy of event extraction. The effect on the second feature information is similar: through the multi-task feature fusion model it becomes better suited to event type determination, which improves the accuracy of event extraction.
It should be noted that other preset feature fusion models may also be used to extract the first feature information and the second feature information separately, and the text feature information may also be extracted in other manners; the specific extraction manner of the text feature information is not limited.
Through the steps A41 to A43, the text to be processed can be fully expressed in a feature fusion mode, so that the text processing method provided by the embodiment of the application can be suitable for more texts, and the applicability of the method is improved.
S230, inputting the first feature information into the fully connected layer of the trigger word recognition model to obtain first probability values corresponding to the plurality of word combinations, wherein each first probability value represents the probability that the corresponding word combination is an event trigger word.
For a word combination, it may be one or more words in the text to be processed that contain a preset trigger keyword; accordingly, a word combination may be any continuous character string containing the preset keyword. In some embodiments, the preset trigger keyword may be a preset word or character in a preset character library. It should be noted that the preset character library may be a character library specific to a preset event type, and the words in the preset character library can be determined according to the actual scene and the specific event type, which is not specifically limited. In other embodiments, the preset trigger keyword may be a core word in the text to be processed, such as a verb, or the word located at the exact center of the text, which is not specifically limited.
In some embodiments, to reduce the amount of computation, the length of a word combination is less than or equal to a preset length threshold, for example 3.
Illustratively, fig. 4 shows a schematic diagram of the processing logic of the fully connected layer of the trigger word recognition model provided in an embodiment of the present application. As shown in fig. 4, if the preset trigger keyword of the text to be processed is "see", the plurality of word combinations may include "guide-see", "see", "see school" and the like.
After the word combination is introduced, S230 will be explained next.
In some embodiments, after the fully connected layer obtains the text feature information, it may determine the plurality of word combinations together with information such as the offset and length of each word combination, and compute the first probability values from the word combinations and this information. The offset may be the distance between the first character of the word combination and the preset trigger keyword.
Continuing with fig. 4, the offset between the first character "guide" of the word combination "guide-see" and the preset trigger keyword "see" is 3, the length of the word combination is 3, and the first probability value "0.01" of "guide-see" can be obtained through the computation of the fully connected layer 40.
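For illustration, the enumeration of candidate word combinations around the trigger keyword could be sketched as follows; the length cap of 3 matches the example above, while the function and variable names are assumptions:

```python
# A sketch of candidate enumeration: every contiguous character span of
# length <= max_len that contains the preset trigger keyword becomes a
# candidate described by (span, offset, length), as in fig. 4.
def enumerate_candidates(chars, keyword_idx, max_len=3):
    candidates = []
    for start in range(max(0, keyword_idx - max_len + 1), keyword_idx + 1):
        for end in range(keyword_idx + 1, min(len(chars), start + max_len) + 1):
            span = "".join(chars[start:end])
            offset = keyword_idx - start  # first char -> keyword distance
            candidates.append((span, offset, end - start))
    return candidates
```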
S240, determining event trigger words of the text to be processed based on the first probability values corresponding to the word combinations through a classification layer of the trigger word recognition model.
Wherein the classification layer may be a calculation layer for performing classification. Illustratively, the classification layer may be a softmax layer. The classification layer may be another layer capable of achieving classification, which is not limited.
In some embodiments, the word combination corresponding to the maximum of the plurality of first probability values may be determined as the event trigger word of the text to be processed. Continuing with the example of fig. 4, among the first probability values of the plurality of word combinations, the first probability value "0.75" of the word combination "meet" is the largest; accordingly, "meet" may be determined as the event trigger word of the text to be processed.
In some embodiments, to improve the recognition accuracy of the event trigger word, at least one word combination having a first probability value greater than the reference probability value may be selected from the plurality of word combinations, and then the event trigger word may be selected from the at least one word combination.
Optionally, if the first probability values of all the word combinations are smaller than the reference probability value, it is determined that the text to be processed has no event trigger word.
Illustratively, the reference probability value may be the probability value output by the fully connected layer for the case where no preset trigger keyword is present. With continued reference to fig. 4, "NIL" indicates that the text to be processed does not contain a preset trigger keyword, and the first probability value corresponding to "NIL" represents the probability that the text to be processed has no preset trigger keyword.
In some embodiments, if the text to be processed includes a plurality of sentences, each sentence may be input into the fully-connected layer, resulting in a plurality of first probability values for the sentence. And then selecting word combinations corresponding to the maximum probability values from the first probability values of the sentences as event trigger words of the text to be processed.
According to the text processing method provided by the embodiments of the application, after the text feature information of the text to be processed is obtained, the fully connected layer of the trigger word recognition model can be used to calculate first probability values for a plurality of word combinations containing a preset trigger keyword, and the word combination corresponding to the largest first probability value is determined as the event trigger word of the text to be processed. Because event trigger words often contain preset trigger keywords, the trigger word recognition model can select the most suitable word combination as the event trigger word from among word combinations that each have some probability of being the event trigger word, thereby improving the recognition accuracy of event trigger words and, in turn, the accuracy of event extraction.
In some embodiments, after S240, the text processing method further includes step B1.
And B1, correspondingly adding the event trigger words and the text to be processed to a trigger word sample set to obtain a new trigger word sample set, so as to optimize the trigger word recognition model by using the new trigger word sample set.
In one embodiment, if the text to be processed includes a plurality of sentences, the event trigger word and the clause in which it occurs may be correspondingly added to the trigger word sample set to form the new trigger word sample set.
wherein the new trigger word sample set S_G satisfies formula (5):

S_G = T_G ∪ {(x_k, t*)}    (5)

wherein T_G denotes the trigger word sample set before the event trigger word t* is added, and x_k denotes the sentence containing the event trigger word t*.
Fig. 5 shows a flowchart of another text processing method according to an embodiment of the present application. Embodiments of the present application may be combined with each of the alternatives in one or more of the embodiments described above.
S510, acquiring a text to be processed.
S510 is similar to S210; reference may be made to the specific content of S210, which is not repeated here.
S520, determining text feature information of the text to be processed, wherein the text feature information includes first feature information for trigger word recognition and second feature information for event type determination.
Wherein, S520 is similar to S220, and reference may be made to the specific content of S220, which is not described herein.
S530, inputting the first feature information into the fully connected layer of the trigger word recognition model to obtain first probability values corresponding to the plurality of word combinations. Each first probability value represents the probability that the corresponding word combination is an event trigger word, and each word combination is a word in the text to be processed containing a preset trigger keyword.
Wherein, S530 is similar to S230, and reference may be made to the specific content of S230, which is not described herein.
S540, determining event trigger words of the text to be processed based on the first probability values corresponding to the word combinations through a classification layer of the trigger word recognition model.
Wherein S540 is similar to S240, reference may be made to the specific content of S240, and the details are not repeated here.
S550, inputting the second feature information into the fully connected layer of the event classification model to obtain second probability values corresponding to the preset event types. Each second probability value represents the probability that the text to be processed belongs to the corresponding preset event type.
It should be noted that the fully connected layer of the event classification model is similar in function to the fully connected layer of the trigger word recognition model; reference may be made to the description of the fully connected layer of the trigger word recognition model in the foregoing embodiments, which is not repeated here.
In one embodiment, if the text to be processed includes a plurality of sentences, each sentence corresponds to its own first feature information and second feature information. If an event trigger word is identified in a sentence according to steps S530 and S540, the second feature information of that sentence may be input into the fully connected layer of the event classification model for computation.
In one example, if the preset event types include event type A and event type B, the second probability value of event type A and the second probability value of event type B can be obtained after the second feature information is input into the fully connected layer of the event classification model.
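A small illustrative sketch of this scoring head, with assumed dimensions: a fully connected layer yields one score per preset event type, and softmax converts the scores into second probability values:

```python
# An illustrative sketch of the event classification head; the feature
# dimension and the number of preset event types are assumed values.
import torch
import torch.nn as nn

d, num_event_types = 128, 2               # e.g. event types A and B
event_fc = nn.Linear(d, num_event_types)  # fully connected layer

f_t = torch.randn(1, d)                   # second feature information
probs = torch.softmax(event_fc(f_t), dim=-1)  # second probability values
predicted = int(probs.argmax(dim=-1))         # classification layer choice
```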
S560, determining, through the classification layer of the event classification model, the event type to which the text to be processed belongs based on the second probability values corresponding to the preset event types.
In some embodiments, the classification layer of the event classification model is functionally similar to the classification layer of the trigger word recognition model. Reference may be made to the description of the classification layer of the trigger word recognition model in the foregoing embodiments of the present application, which is not repeated here.
According to the text processing method provided by the embodiments of the application, after the text feature information of the text to be processed is obtained, the fully connected layer of the trigger word recognition model can be used to calculate first probability values for a plurality of word combinations containing a preset trigger keyword, and the word combination corresponding to the largest first probability value is determined as the event trigger word of the text to be processed. Because event trigger words often contain preset trigger keywords, the trigger word recognition model can select the most suitable word combination as the event trigger word from among word combinations that each have some probability of being the event trigger word, improving the recognition accuracy of event trigger words and, in turn, the accuracy of event extraction. Moreover, through the event classification model, a suitable event type can be selected from among the preset event types that each have some probability of being the event type of the text, improving the recognition accuracy of the event type and further improving the accuracy of event extraction.
In this embodiment, when a plurality of event types correspond to the same event trigger word, for example when event type refinement results in several refined event types corresponding to the same trigger word, such as the event types "official meeting" and "informal meeting" both corresponding to the keyword "meet", the event type can still be identified accurately.
In some embodiments, after S560, the text processing method further includes step C1.
Step C1, correspondingly adding the event type to which the text to be processed belongs and the text to be processed to an event classification sample set to obtain a new event classification sample set, so as to optimize the event classification model by using the new event classification sample set.
In one embodiment, if the text to be processed includes a plurality of sentences, the event type and the clause to which the event type belongs can be correspondingly added to the event classification sample set to form a new event classification sample set.
wherein the new event classification sample set S_C satisfies formula (6):

S_C = T_C ∪ {(x_k, e*)}    (6)

wherein T_C denotes the event classification sample set before the event type e* is added, and x_k denotes the sentence belonging to the event type e*.
In one example, a loss function may be used together with S_C and S_G for model training.
Illustratively, the loss function satisfies equation (7), where θ denotes the model parameters optimized by gradient descent; θ is updated as the loss function L(θ) is minimized, and its final value corresponds to the optimal state of the model.
In this way, correlated learning can be performed on the event trigger word sample set and the event classification sample set, based on clauses containing trigger words and their corresponding event classifications, so that trigger word recognition and event classification mutually reinforce each other.
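The incremental updates of formulas (5) and (6) could be sketched as follows; only the set updates are shown, since the concrete form of the loss in equation (7) is not reproduced here:

```python
# A sketch of the sample-set updates in formulas (5) and (6): each processed
# sentence is appended with its recognized trigger word or event type, and
# the enlarged sets are then reused to optimize the two models.
def update_trigger_samples(t_g, sentence, trigger_word):
    return t_g + [(sentence, trigger_word)]  # S_G = T_G ∪ {(x_k, t*)}

def update_event_samples(t_c, sentence, event_type):
    return t_c + [(sentence, event_type)]    # S_C = T_C ∪ {(x_k, e*)}
```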
In order to facilitate understanding of the text processing method provided in the embodiments of the present application, text processing logic of the embodiments of the present application is described next by way of an example. Illustratively, FIG. 6 shows a schematic diagram of text processing logic provided by an embodiment of the present application.
As shown in fig. 6, the text processing method according to the embodiment of the present application may include the following steps D1 to D7.
And D1, acquiring a text to be processed.
Step D2, splitting the text to be processed into sentences using the jieba segmentation model to obtain a plurality of sentences; for each sentence, performing word segmentation and character segmentation to obtain the word segmentation result and the character segmentation result of that sentence.
Note that since the subsequent processing of each sentence is the same, the description will be continued with one sentence.
And D3, inputting the word segmentation result and the character segmentation result of the sentence into the DMCNN model to obtain a word composite feature vector and a character composite feature vector.
Step D4, splicing the word composite feature vector with the word normal vector to obtain a word-level feature vector f_word, and splicing the character composite feature vector with the character normal vector to obtain a character-level feature vector f_char.

Step D5, unifying the dimensions of the word-level feature vector f_word and the character-level feature vector f_char to obtain a dimension-unified word-level feature vector f'_word and a dimension-unified character-level feature vector f'_char. The dimension-unified f'_word and f'_char are input into the multi-task feature fusion model to obtain the first feature information f_N in vector form and the second feature information f_T in vector form.
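By way of illustration only, steps D4 and D5 might be sketched as follows (all vector dimensions and the linear projection used for dimension unification are assumptions):

```python
# Hypothetical sketch of steps D4-D5: concatenate composite and normal
# vectors, then project f_word and f_char to a common dimension before
# they enter the multi-task feature fusion model.
import torch
import torch.nn as nn

word_composite, word_normal = torch.randn(300), torch.randn(50)  # assumed sizes
char_composite, char_normal = torch.randn(200), torch.randn(50)

f_word = torch.cat([word_composite, word_normal])  # 350-dim word-level vector
f_char = torch.cat([char_composite, char_normal])  # 250-dim char-level vector

unify_word = nn.Linear(f_word.numel(), 256)        # dimension unification
unify_char = nn.Linear(f_char.numel(), 256)
f_word_u = unify_word(f_word)                      # f'_word
f_char_u = unify_char(f_char)                      # f'_char
# f_word_u and f_char_u would then enter the multi-task feature fusion
# model to yield f_N (trigger recognition) and f_T (event classification).
```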
Step D6, inputting the first feature information f_N in vector form into the trigger word recognition model 61; the event trigger word is obtained after calculation through the full-connection layer and the softmax layer.

Step D7, inputting the second feature information f_T in vector form into the event classification model 62; the event type is obtained after calculation through the full-connection layer and the softmax layer.
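By way of illustration only, the event classification calculation of step D7 might be sketched as follows (the feature dimension and the preset event type inventory are assumptions):

```python
# Hypothetical sketch of the event classification model of step D7:
# a full-connection layer maps f_T to one score per preset event type,
# and the softmax layer yields the second probability values.
import torch
import torch.nn as nn

EVENT_TYPES = ["official meeting", "informal meeting", "other"]  # illustrative

head = nn.Linear(256, len(EVENT_TYPES))    # full-connection layer
f_T = torch.randn(256)                     # second feature information
second_probs = torch.softmax(head(f_T), dim=-1)
event_type = EVENT_TYPES[int(torch.argmax(second_probs))]
print(event_type, second_probs.tolist())
```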
Based on the same inventive concept, a text processing device is also provided in the embodiments of the present application, as follows. Since the principle of solving the problem of the embodiment of the device is similar to that of the embodiment of the method, the implementation of the embodiment of the device can be referred to the implementation of the embodiment of the method, and the repetition is omitted.
Fig. 7 shows a schematic diagram of a text processing device in an embodiment of the present application, and as shown in fig. 7, the text processing device 700 includes an obtaining module 710, an information determining module 720, a first calculating module 730, and a trigger word determining module 740.
An obtaining module 710, configured to obtain a text to be processed.
The information determining module 720 is configured to determine text feature information of the text to be processed, where the text feature information includes first feature information for performing trigger word recognition.
The first calculation module 730 is configured to input the first feature information into a full-connection layer of the trigger word recognition model, and obtain first probability values corresponding to a plurality of word combinations, where each first probability value represents a probability that a word combination corresponding to the first probability value is an event trigger word, and the word combination is a word including a preset trigger keyword in a text to be processed.
The trigger word determining module 740 is configured to determine, by a classification layer of the trigger word recognition model, an event trigger word of the text to be processed based on the first probability values corresponding to the word combinations.
After obtaining the text feature information of the text to be processed, the text processing device provided by the embodiment of the application can calculate a plurality of first probability values containing word combinations of preset trigger key words by using the full-connection layer of the trigger word recognition model, and determine the word combination corresponding to the maximum first probability value as the event trigger word of the text to be processed. In the embodiment of the application, because the event trigger words often comprise preset trigger key words, through the trigger word recognition model, the word combination of the proper event trigger word can be selected from the word combinations which have a certain probability of being the event trigger words to serve as the event trigger words, so that the recognition accuracy of the event trigger words is improved, and the accuracy of event extraction is further improved.
In one embodiment of the present application, the text feature information further includes second feature information for making an event type determination, and the text processing device 700 further includes a second calculation module and an event type determination module.
The second calculation module is used for inputting second characteristic information into the full-connection layer of the event classification model to obtain second probability values corresponding to a plurality of preset event types, wherein each second probability value represents the probability that the text to be processed belongs to the preset event type corresponding to each second probability value;
The event type determining module is used for determining the event type of the text to be processed based on the second probability values corresponding to the preset events by utilizing the classification layer of the event classification model.
In one embodiment of the present application, the information determining module 720 includes: a text segmentation unit and a feature extraction unit.
The text segmentation unit is used for performing multi-stage text segmentation on the text to be processed to obtain a multi-stage text segmentation result;
and the feature extraction unit is used for carrying out feature extraction on the multi-stage text segmentation result to obtain text feature information.
In one embodiment of the present application, the information determining module 720 includes a text segmentation unit, a feature extraction unit, and a feature fusion unit.
The text segmentation unit is used for performing multi-stage text segmentation on the text to be processed to obtain a multi-stage text segmentation result;
the feature extraction unit is used for carrying out feature extraction on each stage of text segmentation result to obtain a feature extraction result corresponding to the stage of text segmentation result;
and the feature fusion unit is used for carrying out feature fusion on the feature extraction results corresponding to the multi-stage text segmentation results to obtain text feature information.
In one embodiment of the present application, the feature extraction unit includes a feature extraction subunit, a normal vector generation subunit, and an extraction result generation subunit.
The feature extraction subunit is used for inputting each stage of text segmentation result into the feature extraction model to obtain a feature vector corresponding to the stage of text segmentation result;
the normal vector generation subunit is used for generating a text normal vector for that level of text based on the words adjacent to the center word of the text to be processed;
and the extraction result generation subunit is used for generating a feature extraction result corresponding to the level text segmentation result based on the feature vector and the text normal vector.
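The construction of the text normal vector is not detailed here; purely as an assumed reading, it might resemble an average over the embeddings of the words adjacent to the center word, as sketched below (the window size, the embedding table, and this formulation are all assumptions):

```python
# Purely illustrative guess at a "text normal vector": average the
# embeddings of the neighbors of the center word within a window.
import torch

def normal_vector(embeddings: torch.Tensor, center: int, window: int = 2):
    # embeddings: (sequence_length, dim), one row per token
    left = max(0, center - window)
    right = min(embeddings.size(0), center + window + 1)
    idx = [i for i in range(left, right) if i != center]  # adjacent words only
    return embeddings[idx].mean(dim=0)

emb = torch.randn(7, 64)
nv = normal_vector(emb, center=3)  # 64-dim vector from neighbors of token 3
```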
In one embodiment of the present application, the feature fusion unit is configured to:
and inputting the feature extraction results corresponding to the multi-stage text segmentation results into a preset feature fusion model to obtain text feature information.
In one embodiment of the present application, the text processing device 700 includes a trigger word sample set update module.
And the trigger word sample set updating module is used for correspondingly adding the event trigger word and the text to be processed to the trigger word sample set to obtain a new trigger word sample set so as to optimize the trigger word recognition model by using the new trigger word sample set.
In one embodiment of the present application, the text processing device 700 further includes an event classification sample set updating module.
The event classification sample set updating module is used for correspondingly adding the event type of the text to be processed and the text to be processed to the event classification sample set to obtain a new event classification sample set so as to optimize the event classification model by using the new event classification sample set.
The text processing device provided in the embodiment of the present application may be used to execute the text processing method provided in the above embodiments of the method, and its implementation principle and technical effects are similar, and for the sake of brevity, it is not repeated here.
Those skilled in the art will appreciate that the various aspects of the present application may be implemented as a system, method, or program product. Accordingly, aspects of the present application may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," a "module," or a "system."
An electronic device 800 according to this embodiment of the present application is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
As shown in fig. 8, the electronic device 800 is embodied in the form of a general purpose computing device. Components of the electronic device 800 may include, but are not limited to: at least one processing unit 810, at least one memory unit 820, and a bus 830 connecting the various system components (including the memory unit 820 and the processing unit 810).
Wherein the storage unit stores program code that is executable by the processing unit 810 such that the processing unit 810 performs steps according to various exemplary embodiments of the present application described in the above section of the "exemplary method" of the present specification. For example, the processing unit 810 may perform the following steps of the method embodiment described above:
acquiring a text to be processed;
determining text characteristic information of a text to be processed, wherein the text characteristic information comprises first characteristic information for trigger word recognition;
inputting the first characteristic information into a full-connection layer of a trigger word recognition model to obtain first probability values corresponding to a plurality of word combinations, wherein each first probability value represents the probability that the word combination corresponding to the first probability value is an event trigger word, and the word combination is a word containing a preset trigger key word in a text to be processed;
and determining event trigger words of the text to be processed based on the first probability values corresponding to the word combinations through a classification layer of the trigger word recognition model.
The storage unit 820 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 8201 and/or cache memory 8202, and may further include Read Only Memory (ROM) 8203.
Storage unit 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 830 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 840 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 800, and/or any device (e.g., router, modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 850.
Also, electronic device 800 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 860.
As shown in fig. 8, network adapter 860 communicates with other modules of electronic device 800 over bus 830.
It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 800, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a usb disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present application.
In an exemplary embodiment of the present application, a computer-readable storage medium is also provided, which may be a readable signal medium or a readable storage medium, and on which a program product capable of implementing the method described above is stored.
In some possible implementations, the various aspects of the present application may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the present application as described in the "exemplary methods" section of this specification, when the program product is run on the terminal device.
More specific examples of the computer readable storage medium in this application may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present application, a readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein.
Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing.
A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
In some examples, program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
In particular implementations, the program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory.
Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, in accordance with embodiments of the present application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the various steps of the methods herein are depicted in the accompanying drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all of the illustrated steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
From the description of the above embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware.
Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a mobile terminal, or a network device, etc.) to perform the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein.
This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.

Claims (10)

1. A text processing method, comprising:
acquiring a text to be processed, wherein the text to be processed comprises a plurality of sentences x_k;
Determining text characteristic information of the text to be processed, wherein the text characteristic information comprises first characteristic information for trigger word recognition;
inputting the first characteristic information into a full-connection layer of a trigger word recognition model to obtain first probability values corresponding to a plurality of word combinations, wherein each first probability value represents the probability that the word combination corresponding to the first probability value is an event trigger word, and the word combination is a word containing a preset trigger key word in the text to be processed;
determining event trigger words of the text to be processed based on first probability values corresponding to the word combinations through a classification layer of the trigger word recognition model;
correspondingly adding the event trigger word and the clause to which the event trigger word belongs to a trigger word sample set to obtain a new trigger word sample set S_G, the S_G satisfying the formula:

S_G = T_G ∪ {(x_k, t_k)}

wherein T_G represents the trigger word sample set before the event trigger word t_k is added, and x_k represents a clause containing the event trigger word t_k;
the text characteristic information further includes second characteristic information for making an event type determination,
After the determining text feature information of the text to be processed, the method further includes:
inputting the second characteristic information into a full-connection layer of an event classification model to obtain second probability values corresponding to a plurality of preset event types, wherein each second probability value represents the probability that the text to be processed belongs to the preset event type corresponding to each second probability value;
determining an event type to which the text to be processed belongs based on second probability values corresponding to a plurality of preset events by using a classification layer of the event classification model;
adding the event type to which the text to be processed belongs and the clause to which the event type belongs correspondingly to an event classification sample set to obtain a new event classification sample set S_C, the S_C satisfying the formula:

S_C = T_C ∪ {(x_k, e_k)}

wherein T_C represents the event classification sample set before the event e_k is added, and x_k represents a clause containing the event e_k;
the S is based on a loss function by adopting a gradient descent method C And said S G And optimizing the trigger word recognition model and the event classification model.
2. The method of claim 1, wherein the determining text feature information of the text to be processed comprises:
Performing multi-stage text segmentation on the text to be processed to obtain a multi-stage text segmentation result;
and extracting the characteristics of the multi-stage text segmentation result to obtain the text characteristic information.
3. The method of claim 1, wherein the determining text feature information of the text to be processed comprises:
performing multi-stage text segmentation on the text to be processed to obtain a multi-stage text segmentation result;
feature extraction is carried out on each stage of text segmentation result, and a feature extraction result corresponding to the stage of text segmentation result is obtained;
and carrying out feature fusion on the feature extraction results corresponding to the multi-stage text segmentation results to obtain the text feature information.
4. The method of claim 3, wherein the step of,
the feature extraction is performed on each stage of text segmentation result to obtain a feature extraction result corresponding to the stage of text segmentation result, including:
inputting the segmentation result of each stage of text into a feature extraction model to obtain a feature vector corresponding to the segmentation result of the stage of text;
generating a text normal vector for that level of text based on the words adjacent to the center word of the text to be processed;
and generating a feature extraction result corresponding to the text segmentation result based on the feature vector and the text normal vector.
5. The method according to claim 3 or 4, wherein the feature fusion is performed on feature extraction results corresponding to each of the multi-level text segmentation results to obtain the text feature information, including:
and inputting the feature extraction results corresponding to the multi-stage text segmentation results into a preset feature fusion model to obtain the text feature information.
6. The method of claim 1, wherein after the determining the event trigger word of the text to be processed based on the respective first probability values of the plurality of word combinations, the method further comprises:
and correspondingly adding the event trigger word and the text to be processed to a trigger word sample set to obtain a new trigger word sample set, so as to optimize the trigger word recognition model by using the new trigger word sample set.
7. The method according to claim 1, wherein after determining an event type to which the text to be processed belongs based on the second probability values corresponding to the plurality of preset events, the method further comprises:
and correspondingly adding the event type to which the text to be processed belongs and the text to be processed into an event classification sample set to obtain a new event classification sample set, so as to optimize the event classification model by using the new event classification sample set.
8. A text processing apparatus, comprising:
an acquisition module for acquiring a text to be processed, wherein the text to be processed comprises a plurality of sentences x k
The information determining module is used for determining text characteristic information of the text to be processed, wherein the text characteristic information comprises first characteristic information for performing trigger word recognition and second characteristic information for performing event type determination;
the first calculation module is used for inputting the first characteristic information into a full-connection layer of a trigger word recognition model to obtain first probability values corresponding to a plurality of word combinations, wherein each first probability value represents the probability that the word combination corresponding to the first probability value is an event trigger word, and the word combination is a word containing a preset trigger key word in the text to be processed;
the trigger word determining module is used for determining event trigger words of the text to be processed based on the first probability values corresponding to the word combinations through the classification layer of the trigger word recognition model;
the trigger word sample set updating module is used for correspondingly adding the event trigger word and the clause to which the event trigger word belongs to the trigger word sample set to obtain a new trigger word sample set S G The S is G The formula is satisfied:
wherein T is G Representing addition event trigger wordsPrevious trigger word sample set, x k The representation contains the event trigger word->Is a sentence of (a);
the second calculation module is used for inputting the second characteristic information into the full-connection layer of the event classification model to obtain second probability values corresponding to a plurality of preset event types, wherein each second probability value represents the probability that the text to be processed belongs to the preset event type corresponding to each second probability value;
the event type determining module is used for determining the event type of the text to be processed based on the second probability values corresponding to the preset events by utilizing the classification layer of the event classification model;
the event classification sample set updating module is used for correspondingly adding the event type of the text to be processed and the clause of the event type to the event classification sample set to obtain a new event classification sample set S C The S is C The formula is satisfied:
wherein T is C Representing an add eventPrevious event classification sample set, x k The representation contains events->Is a sentence of (a);
to adopt gradient descent method based on loss function, the S C And said S G And optimizing the trigger word recognition model and the event classification model.
9. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the text processing method of any of claims 1-7 via execution of the executable instructions.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the text processing method of any of claims 1-7.
CN202210557278.3A 2022-05-20 2022-05-20 Text processing method, device, equipment and medium Active CN114841162B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210557278.3A CN114841162B (en) 2022-05-20 2022-05-20 Text processing method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210557278.3A CN114841162B (en) 2022-05-20 2022-05-20 Text processing method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN114841162A CN114841162A (en) 2022-08-02
CN114841162B true CN114841162B (en) 2024-01-05

Family

ID=82572277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210557278.3A Active CN114841162B (en) 2022-05-20 2022-05-20 Text processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114841162B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109299470A (en) * 2018-11-01 2019-02-01 成都数联铭品科技有限公司 The abstracting method and system of trigger word in textual announcement
CN109558591A (en) * 2018-11-28 2019-04-02 中国科学院软件研究所 Chinese event detection method and device
CN110188172A (en) * 2019-05-31 2019-08-30 清华大学 Text based event detecting method, device, computer equipment and storage medium
CN111222330A (en) * 2019-12-26 2020-06-02 中国电力科学研究院有限公司 Chinese event detection method and system
CN111967268A (en) * 2020-06-30 2020-11-20 北京百度网讯科技有限公司 Method and device for extracting events in text, electronic equipment and storage medium
CN112116075A (en) * 2020-09-18 2020-12-22 厦门安胜网络科技有限公司 Event extraction model generation method and device and text event extraction method and device
CN112131366A (en) * 2020-09-23 2020-12-25 腾讯科技(深圳)有限公司 Method, device and storage medium for training text classification model and text classification
CN112632230A (en) * 2020-12-30 2021-04-09 中国科学院空天信息创新研究院 Event joint extraction method and device based on multi-level graph network
CN112988979A (en) * 2021-04-29 2021-06-18 腾讯科技(深圳)有限公司 Entity identification method, entity identification device, computer readable medium and electronic equipment
CN113254628A (en) * 2021-05-18 2021-08-13 北京中科智加科技有限公司 Event relation determining method and device
CN114239566A (en) * 2021-12-14 2022-03-25 公安部第三研究所 Method, device and processor for realizing two-step Chinese event accurate detection based on information enhancement and computer readable storage medium thereof
CN113946681A (en) * 2021-12-20 2022-01-18 军工保密资格审查认证中心 Text data event extraction method and device, electronic equipment and readable medium
CN114330354A (en) * 2022-03-02 2022-04-12 杭州海康威视数字技术股份有限公司 Event extraction method and device based on vocabulary enhancement and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于深度学习的中文临床指南事件抽取研究";余辉;《中国优秀硕士学位论文全文数据库信息科技辑》(第5期);全文 *

Also Published As

Publication number Publication date
CN114841162A (en) 2022-08-02

Similar Documents

Publication Publication Date Title
CN109241524B (en) Semantic analysis method and device, computer-readable storage medium and electronic equipment
CN107220235B (en) Speech recognition error correction method and device based on artificial intelligence and storage medium
EP4060565A1 (en) Method and apparatus for acquiring pre-trained model
CN108363790B (en) Method, device, equipment and storage medium for evaluating comments
CN108170749B (en) Dialog method, device and computer readable medium based on artificial intelligence
CN112131366B (en) Method, device and storage medium for training text classification model and text classification
CN107729313B (en) Deep neural network-based polyphone pronunciation distinguishing method and device
CN110276023B (en) POI transition event discovery method, device, computing equipment and medium
CN111611810B (en) Multi-tone word pronunciation disambiguation device and method
CN111079432B (en) Text detection method and device, electronic equipment and storage medium
CN113076739A (en) Method and system for realizing cross-domain Chinese text error correction
CN110263340B (en) Comment generation method, comment generation device, server and storage medium
CN111488742B (en) Method and device for translation
CN113590761A (en) Training method of text processing model, text processing method and related equipment
CN113158656B (en) Ironic content recognition method, ironic content recognition device, electronic device, and storage medium
CN112464642A (en) Method, device, medium and electronic equipment for adding punctuation to text
CN112100360B (en) Dialogue response method, device and system based on vector retrieval
CN111475635B (en) Semantic completion method and device and electronic equipment
CN113705207A (en) Grammar error recognition method and device
CN111666405B (en) Method and device for identifying text implication relationship
CN110472241B (en) Method for generating redundancy-removed information sentence vector and related equipment
WO2023088278A1 (en) Method and apparatus for verifying authenticity of expression, and device and medium
CN114841162B (en) Text processing method, device, equipment and medium
CN116167382A (en) Intention event extraction method and device, electronic equipment and storage medium
CN114791950A (en) Method and device for classifying aspect-level emotions based on part-of-speech position and graph convolution network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant