CN114398855A - Text extraction method, system and medium based on fusion pre-training - Google Patents

Text extraction method, system and medium based on fusion pre-training

Info

Publication number
CN114398855A
Authority
CN
China
Prior art keywords
training
text
model
fusion
extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210038607.3A
Other languages
Chinese (zh)
Inventor
林远平
甘伟超
喻广博
邹鸿岳
周靖宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuaique Information Technology Co ltd
Original Assignee
Beijing Kuaique Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuaique Information Technology Co ltd filed Critical Beijing Kuaique Information Technology Co ltd
Priority to CN202210038607.3A priority Critical patent/CN114398855A/en
Publication of CN114398855A publication Critical patent/CN114398855A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/10 Text processing
    • G06F40/12 Use of codes for handling textual entities
    • G06F40/126 Character encoding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a text extraction method, system and medium based on fusion pre-training. The method comprises the following steps: acquiring a text to be extracted; performing pre-training encoding on the text to be extracted through a pre-training model to obtain corresponding character vectors; selecting at least part of the character vectors to perform semantic extraction on adjacent text, and splicing the results to obtain semantic feature vectors; performing feature selection on the semantic feature vectors and fusing them to obtain effective word feature vectors; and performing split decoding on the effective word feature vectors to obtain a word segmentation result and an entity recognition result respectively. Because the character vectors are encoded within a pre-training model framework, and at least part of the character vectors are fused to extract the semantics of adjacent text and thus learn textual semantic information, the semantic learning capability is enhanced, the problem of fuzzy boundaries in the final word segmentation result is effectively avoided, and the accuracy of text extraction is improved.

Description

Text extraction method, system and medium based on fusion pre-training
Technical Field
The invention relates to the technical field of computers, in particular to a text extraction method, a text extraction system and a text extraction medium based on fusion pre-training.
Background
Text information extraction is a relatively mature algorithmic technique in the field of deep learning and has been successfully applied in a variety of business scenarios. In the financial field, however, and especially in the currency field, existing text extraction methods suffer from boundary problems. For example, in a message such as "1Y 0000013.097540005.29 +0A fund TO B fund", the numeric text "3.0975" may be extracted as only "3.09", or the numeric text "4000" as only "400", so the accuracy of text extraction is not high enough.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
In view of the above-mentioned deficiencies of the prior art, the present invention provides a text extraction method, system and medium based on fusion pre-training, aiming to improve the accuracy of text extraction.
The technical scheme of the invention is as follows:
a text extraction method based on fusion pre-training comprises the following steps:
acquiring a text to be extracted;
performing pre-training encoding on the text to be extracted through a pre-training model to obtain a corresponding character vector;
selecting at least part of the character vectors to carry out semantic extraction on adjacent texts, and splicing to obtain semantic feature vectors;
performing feature selection on the semantic feature vectors and fusing to obtain effective word feature vectors;
and performing split decoding on the effective word feature vectors to obtain a word segmentation result and an entity recognition result respectively.
In an embodiment, before the pre-training encoding is performed on the text to be extracted through the pre-training model to obtain the corresponding character vector, the method further includes:
performing adversarial training on the pre-training model.
In one embodiment, the adversarial training of the pre-training model comprises:
constructing adversarial samples, and adding the adversarial samples into the input embedding layer of the pre-training model as a perturbation;
and performing adversarial training on the pre-training model according to the adversarial samples to update the model parameters, ending the adversarial training when the number of updates reaches a preset number.
In one embodiment, constructing the adversarial samples specifically comprises:
calculating the adversarial samples according to the following formulas:

g_adv = ∇_δ L(f_θ(X + δ_{t-1}), y)

δ_t = Π_{‖δ‖_F ≤ ε} (δ_{t-1} + α · g_adv / ‖g_adv‖_F)

where g_adv represents the gradient of the pre-training model with respect to the perturbation during adversarial training, X represents the input information, y represents the label information, δ_{t-1} represents the magnitude of the perturbation at step t-1, f_θ represents the output of the pre-training model, L represents the loss function, ∇_δ denotes taking the gradient of the loss function with respect to the perturbation, α represents the adversarial learning rate (step size), ‖·‖_F is the Frobenius norm, g_t represents the accumulated gradient of the pre-training model at step t, and Π denotes projecting the perturbation back onto the ball of radius ε in the Frobenius norm.
In an embodiment, performing adversarial training on the pre-training model according to the adversarial samples to update the model parameters, and ending the adversarial training when the number of updates reaches a preset number, specifically comprises:
after perturbing the pre-training model according to the adversarial samples, accumulating the gradient of the parameters θ according to the formula

g_t = g_{t-1} + (1/K) · E[∇_θ L(f_θ(X + δ_{t-1}), y)]

where K represents the number of gradient ascent steps, E represents the mathematical expectation, g_{t-1} is the gradient of the pre-training model at step t-1, and ∇_θ denotes taking the gradient of the loss function with respect to the model parameters;
and updating the parameters of the pre-training model according to the accumulated gradient, ending the adversarial training when the number of updates reaches the preset number.
In one embodiment, the selecting at least part of the character vectors to perform semantic extraction on the neighboring texts and obtaining semantic feature vectors by splicing includes:
selecting coding layers at a plurality of preset positions in the pre-training model as target coding layers;
respectively inputting the output results of the target coding layers into text classification models connected in one-to-one correspondence to perform semantic extraction of adjacent texts, wherein the number of the text classification models is the same as the number of the target coding layers, and the kernel sizes of the text classification models differ from one another;
and performing fusion splicing on the extraction result of each text classification model to obtain the semantic feature vector.
In one embodiment, performing feature selection on the semantic feature vectors and fusing them to obtain the effective word feature vectors specifically comprises:
performing feature selection on the semantic feature vectors through a fully connected layer and fusing them to obtain the effective word feature vectors, wherein the input of the fully connected layer is F_input and its output is F_output:

F_input = concat(E_1, E_2, …, E_i, …, E_n),

F_output = softmax(F_input) = softmax(concat(E_1, E_2, …, E_i, …, E_n)),

where E_i is the output result corresponding to the i-th target coding layer and n is the number of target coding layers.
In one embodiment, the kernel sizes of the text classification models range from 3 to 7.
A text extraction system based on fusion pre-training, the system comprising at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the text extraction method based on fusion pre-training described above.
A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the method for text extraction based on fusion pre-training described above.
Advantageous effects: compared with the prior art, in the text extraction method, system and medium based on fusion pre-training provided by the invention, the character vectors are encoded within a pre-training model framework, and at least part of the character vectors are fused to perform semantic extraction on adjacent text and thus learn textual semantic information, so that the semantic learning capability is enhanced, the problem of fuzzy boundaries in the final word segmentation result is effectively avoided, and the accuracy of text extraction is improved.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flowchart of a text extraction method based on fusion pre-training according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a model framework of a text extraction method based on fusion pre-training according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the functional modules of a text extraction device based on fusion pre-training according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a hardware structure of a text extraction system based on fusion pre-training according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer, the present invention is described in further detail below. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it. Embodiments of the present invention are described below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart illustrating a text extraction method based on fusion pre-training according to an embodiment of the present invention. The text extraction method based on fusion pre-training provided by this embodiment is suitable for automatically identifying counterparties during a transaction. As shown in fig. 1, the method specifically includes the following steps:
and S100, acquiring a text to be extracted.
In this embodiment, the text to be extracted may be the text of a trading session during an ongoing securities transaction, for example order information, inquiry information and the like sent between different trading institutions. The text of the trading session is acquired as the text to be extracted, so that text extraction can be performed automatically and the efficiency of financial information recognition is improved.
S200, performing pre-training encoding on the text to be extracted through a pre-training model to obtain a corresponding character vector.
A pre-training model is trained on large-scale corpus information and can achieve good results on downstream tasks after fine-tuning on those tasks. In this embodiment, therefore, pre-training encoding is performed on the text to be extracted through a pre-training model to obtain the corresponding character vectors. Specifically, a Bert pre-training model is preferably adopted for character encoding. Bert is a pre-trained language representation model which, instead of the traditional unidirectional language model or the shallow concatenation of two unidirectional language models used in the prior art, adopts a masked language model (MLM) to generate deep bidirectional language representations: a number of words in the input text are randomly masked with a certain probability, and the Bert model is then used to predict the masked words during pre-training, thereby obtaining a vector encoding for each character. In other embodiments, a pre-training model such as Albert or RoBerta may also be used for the pre-training encoding, which is not limited in this embodiment.
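To make the encoding step concrete, the following is a minimal sketch of obtaining per-character vectors and per-layer hidden states from a Bert-style pre-training model. It assumes the Hugging Face transformers package and the bert-base-chinese checkpoint, neither of which is named in the patent; the input string is a placeholder.

```python
# Minimal sketch: character-level encoding with a BERT-style pre-trained model.
# Assumes the Hugging Face `transformers` package; the checkpoint name is illustrative.
import torch
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
encoder = BertModel.from_pretrained("bert-base-chinese", output_hidden_states=True)

text = "..."  # text to be extracted, e.g. one trading-session message
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**inputs)

# Per-character vectors from the final layer: (1, seq_len, hidden_size)
char_vectors = outputs.last_hidden_state
# All hidden states (embeddings + 12 Transformer layers); the later layers are the
# candidates for the "target coding layers" discussed below.
all_layers = outputs.hidden_states
```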
In one embodiment, before step S200, the method further comprises:
and carrying out countermeasure training on the pre-training model.
In this embodiment, before character encoding is performed on a text to be extracted, a pre-training model is combined with a confrontation training learning method to improve the robustness and accuracy of the model as much as possible, and specifically, a confrontation training algorithm such as FreeLB, FGM, PGD, and the like may be selected, which is not limited in this embodiment.
In one embodiment, the adversarial training of the pre-training model comprises:
constructing adversarial samples, and adding the adversarial samples into the input embedding layer of the pre-training model as a perturbation;
and performing adversarial training on the pre-training model according to the adversarial samples to update the model parameters, ending the adversarial training when the number of updates reaches a preset number.
In this embodiment, adversarial training is an important way to enhance the robustness of a model. During adversarial training, adversarial samples are constructed and added to the input embedding layer of the pre-training model as a perturbation, so that the input samples of the pre-training model are mixed with small perturbations. The perturbed adversarial samples attack the model, and the model must still recognize the true labels of the adversarial samples; that is, during training the pre-training model is trained against the adversarial samples so that it adapts to these changes while its parameters are updated, until the adversarial training ends. This improves the robustness of the model when it encounters adversarial samples and, at the same time, improves the performance and generalization capability of the model to a certain extent.
In a specific implementation, FreeLB is adopted for the adversarial training, and the perturbation used to attack the weights of the pre-training model is calculated by the following formulas:

g_adv = ∇_δ L(f_θ(X + δ_{t-1}), y)

δ_t = Π_{‖δ‖_F ≤ ε} (δ_{t-1} + α · g_adv / ‖g_adv‖_F)

where g_adv represents the gradient of the pre-training model with respect to the perturbation during adversarial training, X represents the input information, y represents the label information, δ_{t-1} represents the magnitude of the perturbation at step t-1, f_θ represents the output of the pre-training model, L represents the loss function, ∇_δ denotes taking the gradient of the loss function with respect to the perturbation, α represents the adversarial learning rate (step size), ‖·‖_F is the Frobenius norm, g_t represents the accumulated gradient of the pre-training model at step t, and Π denotes projecting the perturbation back onto the ball of radius ε in the Frobenius norm.
After the pre-training model is perturbed according to the adversarial samples, the gradient of the parameters θ is accumulated according to the formula

g_t = g_{t-1} + (1/K) · E[∇_θ L(f_θ(X + δ_{t-1}), y)]

where K represents the number of gradient ascent steps, E represents the mathematical expectation, g_{t-1} is the gradient of the pre-training model at step t-1, and ∇_θ denotes taking the gradient of the loss function with respect to the model parameters.
After the accumulated gradient is obtained, the parameters of the pre-training model are updated, and the adversarial training ends when the number of updates reaches the preset number. This noise-injecting training scheme, i.e. adversarial training, regularizes the model parameters and thus improves the robustness and generalization capability of the model.
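As a rough illustration of the training loop described by the formulas above, the following sketch performs one FreeLB-style step: K ascent updates of the perturbation δ on the input embeddings while the parameter gradients accumulate, followed by a single parameter update. The model interface (returning a scalar loss from inputs_embeds and labels), the crude projection step, and the values of K, α and ε are all assumptions of the sketch, not taken from the patent.

```python
# Condensed FreeLB-style adversarial training step (sketch; assumptions noted above).
import torch

def freelb_step(model, embeds, labels, optimizer, K=3, alpha=0.1, epsilon=1.0):
    # Initialise a small random perturbation on the input embeddings.
    delta = torch.zeros_like(embeds).uniform_(-epsilon, epsilon) / (embeds.size(-1) ** 0.5)
    delta.requires_grad_()
    optimizer.zero_grad()

    for _ in range(K):  # K ascent steps; parameter gradients accumulate across them
        loss = model(inputs_embeds=embeds + delta, labels=labels) / K
        loss.backward()                      # adds (1/K) * grad_theta(L) to the parameter grads

        g_adv = delta.grad.detach()          # g_adv = grad_delta L(f_theta(X + delta_{t-1}), y)
        # delta_t = Proj_{||delta||_F <= epsilon}(delta_{t-1} + alpha * g_adv / ||g_adv||_F)
        delta = delta.detach() + alpha * g_adv / (g_adv.norm() + 1e-12)
        if delta.norm() > epsilon:           # crude projection back onto the epsilon-ball
            delta = delta * (epsilon / delta.norm())
        delta.requires_grad_()

    optimizer.step()                         # update theta with the accumulated gradient
    return loss.item()
```

In the full method, embeds would be the embedding-layer output for a batch of training texts and labels the corresponding tag sequences.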
S300, selecting at least part of the character vectors to carry out semantic extraction on the adjacent texts, and splicing to obtain semantic feature vectors.
In this embodiment, after the corresponding character vectors are obtained by encoding, a further issue arises: because pre-training models of the Bert series build their embeddings from single characters, vocabulary-level semantic information in a Chinese context can be lost, which in turn causes boundary problems in text extraction, such as the numeric text "3.0975" being extracted as only "3.09". To avoid such boundary problems, in this embodiment at least part of the encoding results of the pre-training model are selected to perform semantic extraction on adjacent text, i.e. the semantic relationships between adjacent characters are learned so as to better capture local correlation, thereby avoiding the boundary problems caused by single characters and improving the accuracy of text extraction.
In one embodiment, step S300 includes:
selecting coding layers at a plurality of preset positions in the pre-training model as target coding layers;
respectively inputting the output results of the target coding layers into text classification models connected in one-to-one correspondence to perform semantic extraction of adjacent texts, wherein the number of the text classification models is the same as the number of the target coding layers, and the kernel sizes of the text classification models differ from one another;
and performing fusion splicing on the extraction result of each text classification model to obtain the semantic feature vector.
In this embodiment, the pre-training model usually includes multiple coding layers, i.e. a hidden-layer structure consisting of multiple Transformer layers. The higher the coding layer of the pre-training model, the finer the data features carried by the output hidden vectors, which is why prior-art pre-training models such as Bert output only the encoding result of the last (highest) layer. In this embodiment, in order to learn short-distance semantic features, the coding layers at a plurality of preset positions are selected as target coding layers; specifically, the coding layers located in the last 25%-50% of all coding layers are selected. For example, when the number of coding layers, i.e. Transformer layers, in the pre-training model is 12, the last 3 to 6 layers (counting backwards from the final layer) are selected, and when the number of coding layers is 18, the last 5 to 9 layers are selected.
A text classification model is connected behind each selected target coding layer. In this embodiment, TextCNN, a text classification model based on a convolutional neural network, is adopted. For example, when 12 Transformer layers are used, the last 6 Transformer layers are selected as target coding layers, and a TextCNN module is connected behind each of these 6 layers to perform semantic extraction on adjacent text. To better capture local correlation, the kernel sizes of the TextCNNs in this embodiment differ from one another and are preferably set to 3 to 7. Because a TextCNN module can learn the semantic relationship between characters whose distance does not exceed its kernel size, setting the kernel size is equivalent to setting the learning range of the TextCNN model. By using kernels of different sizes, the model can therefore learn textual semantic information from multiple perspectives, which increases the generalization capability and semantic comprehension of the model and improves its boundary recognition capability.
After semantic extraction is performed on the adjacent text by the TextCNNs, the extraction results output by each TextCNN are fused and spliced by means of vector fusion to obtain the semantic feature vectors: n hidden_size-dimensional vectors are converted into one hidden_size-dimensional feature vector, where n is the number of target coding layers. This fusion allows the model to retain the semantic information that the different TextCNNs learn from different perspectives, which further strengthens the semantic learning capability of the model.
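The following is a minimal sketch of this fusion step under stated assumptions: one TextCNN (a 1-D convolution here) is attached to each selected target coding layer, each with its own kernel size, and their outputs are spliced along the feature dimension. The hidden size, the number of selected layers and the kernel sizes are illustrative; the patent does not spell out the exact fusion operator, so the reduction to a single hidden_size-dimensional vector is left here to the fully connected layer of the next step.

```python
# Sketch: one TextCNN per selected target coding layer, outputs spliced together.
import torch
import torch.nn as nn

class LayerTextCNN(nn.Module):
    """1-D convolution over the character sequence of one target coding layer."""
    def __init__(self, hidden_size: int, kernel_size: int):
        super().__init__()
        # padding="same" keeps one feature vector per character position.
        self.conv = nn.Conv1d(hidden_size, hidden_size, kernel_size, padding="same")
        self.act = nn.ReLU()

    def forward(self, x):                        # x: (batch, seq_len, hidden_size)
        x = x.transpose(1, 2)                    # -> (batch, hidden_size, seq_len)
        return self.act(self.conv(x)).transpose(1, 2)

class AdjacentSemanticExtractor(nn.Module):
    """Applies differently sized kernels to the selected layers and splices the results."""
    def __init__(self, hidden_size: int = 768, kernel_sizes=(3, 4, 5, 6, 7)):
        super().__init__()
        self.textcnns = nn.ModuleList(
            LayerTextCNN(hidden_size, k) for k in kernel_sizes)

    def forward(self, selected_layers):          # n tensors of (batch, seq_len, hidden_size)
        extracted = [cnn(h) for cnn, h in zip(self.textcnns, selected_layers)]
        return torch.cat(extracted, dim=-1)      # spliced semantic feature vectors
```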
S400, performing feature selection on the semantic feature vectors and fusing to obtain effective word feature vectors.
In this embodiment, after short-distance semantic extraction has been achieved by fusing the text classification models with part of the coding layers, feature selection is performed through a fully connected layer on the semantic feature vectors obtained by fusion and splicing, and the effective word feature vectors are obtained by selection and fusion.
In one embodiment, step S400 includes:
performing feature selection on the semantic feature vectors through a fully connected layer and fusing them to obtain the effective word feature vectors, wherein the input of the fully connected layer is F_input and its output is F_output:

F_input = concat(E_1, E_2, …, E_i, …, E_n),

F_output = softmax(F_input) = softmax(concat(E_1, E_2, …, E_i, …, E_n)),

where E_i is the output result corresponding to the i-th target coding layer and n is the number of target coding layers.
In this embodiment, feature selection is performed through the fully connected layer on the spliced output concat(E_1, E_2, …, E_i, …, E_n) of the text classification models fused with the target coding layers; specifically, classification is performed through a softmax function, and the most effective word features are selected.
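A minimal sketch of this feature-selection step, following the formulas above: the spliced outputs E_1…E_n pass through a fully connected layer and a softmax to give one effective word feature vector per character. The output dimension and the number of target layers are assumptions of the sketch.

```python
# Sketch of step S400: fully connected layer + softmax over the spliced features.
import torch
import torch.nn as nn

class EffectiveWordFeatures(nn.Module):
    def __init__(self, hidden_size: int = 768, n_target_layers: int = 6):
        super().__init__()
        self.fc = nn.Linear(n_target_layers * hidden_size, hidden_size)

    def forward(self, semantic_features):        # (batch, seq_len, n * hidden_size)
        # F_output = softmax(FC(F_input)); one effective word feature per character.
        return torch.softmax(self.fc(semantic_features), dim=-1)
```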
S500, performing split decoding on the effective word feature vectors to obtain a word segmentation result and an entity recognition result respectively.
In this embodiment, the downstream tasks are decoded in separate streams based on the output of the fully connected layer, so that text extraction can be performed efficiently while entity recognition is achieved, yielding a word segmentation result and an entity recognition result. Specifically, the effective word feature vectors are input into a trained entity recognition task layer and a trained word segmentation task layer respectively. For the entity recognition task, long-distance semantic features are extracted again from the output of the fully connected layer through an LSTM (Long Short-Term Memory) network, whose output serves as the input of the decoding layer of the entity recognition task; the decoding layer adopts a CRF (conditional random field) to predict entity labels and finally outputs the corresponding entity label for each character. For the word segmentation task, the output of the fully connected layer is decoded through a CRF decoder, which outputs a tag for each character in the effective word feature vector to give the word segmentation result; the character tags include an entity-begin tag, an entity-continuation tag and a non-entity tag. For example, for a text such as "A debt B institution gives C institution", the final tagging result takes the form "B I O B I I I B I I I", where "B" is the entity-begin tag, "I" is the entity-continuation tag, i.e. a position inside an entity other than its beginning, and "O" is the non-entity tag, here marking for instance a blank space. The words in a sentence can be segmented well through this B, I, O scheme, so the model learns how to segment sentences and accurate text segmentation and extraction are achieved.
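The following sketch illustrates the split decoding described above: an entity recognition branch (BiLSTM followed by a CRF) and a word segmentation branch (a CRF over B/I/O tags) both read the effective word feature vectors. It assumes the third-party pytorch-crf package for the CRF layer, and the tag-set sizes are illustrative.

```python
# Sketch of step S500: split decoding into a word-segmentation branch and an
# entity-recognition branch. Assumes the `pytorch-crf` package (torchcrf.CRF).
import torch
import torch.nn as nn
from torchcrf import CRF

class SplitDecoder(nn.Module):
    def __init__(self, hidden_size=768, num_entity_tags=5, num_seg_tags=3):
        super().__init__()
        # Entity-recognition branch: re-extract long-distance features with a BiLSTM,
        # then decode entity labels with a CRF.
        self.lstm = nn.LSTM(hidden_size, hidden_size // 2, batch_first=True,
                            bidirectional=True)
        self.entity_emit = nn.Linear(hidden_size, num_entity_tags)
        self.entity_crf = CRF(num_entity_tags, batch_first=True)
        # Word-segmentation branch: decode B/I/O tags directly with a CRF.
        self.seg_emit = nn.Linear(hidden_size, num_seg_tags)
        self.seg_crf = CRF(num_seg_tags, batch_first=True)

    def forward(self, features, mask=None):      # features: (batch, seq_len, hidden_size)
        lstm_out, _ = self.lstm(features)
        entity_tags = self.entity_crf.decode(self.entity_emit(lstm_out), mask=mask)
        seg_tags = self.seg_crf.decode(self.seg_emit(features), mask=mask)
        return seg_tags, entity_tags             # word-segmentation and entity results
```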
In order to better understand the implementation process of the text extraction method based on the fusion pre-training provided by the present invention, the following introduces the text extraction process based on the fusion pre-training provided by the present invention with reference to the specific model structure in fig. 2:
as shown in fig. 2, a text to be extracted, "a debt B machine … structure", is obtained, first, character vectorization is performed on an input text through a Bert pre-training model to obtain a character or word vector with fixed dimensions, in addition, FreeLB (FreeLB) countertraining is added into an input embedding layer of the pre-training model to disturb the input embedding so as to increase the robustness of the model, the Bert pre-training model adopts 12 transform high layer structures, in order to learn short-distance semantic features, the last 6 layers of Transformers layer fusion TextCNN module is selected by the semantic feature selection module to carry out semantic extraction on the adjacent text, namely, after the last 6 layers of transformations, connecting a TextCNN with different sizes of kernerl to extract key information in the sentence, enabling the model to learn text semantic information through a plurality of angles, improving the generalization and boundary identification capability of the model, then fusing each TextCNN output by adopting a vector fusion module, and converting vectors of 6 hidden _ size dimensions into feature vectors of 1 hidden _ size dimension; after passing through the semantic feature selection module, the vectors obtained by splicing are subjected to feature selection through a full Connected Layer (full Connected Layer), and the most effective word features are selected and fused; then, split decoding is carried out based on the output of the full connection layer, the output of the full connection layer is decoded and labeled through a CRF (conditional random access memory) decoder in a word segmentation task, an B, I, O-form character labeling result is obtained to accurately segment words and phrases, and the labeling result of the 'A deb B machine …' is 'B, I, B, I, O'; in the entity identification task, the output of the full connection layer is decoded sequentially through the LSTM and the CRF to obtain an entity labeling result, for example, the labeling result of the A debt B machine … is 'B-BN, I-BN, B-ORG and I-ORG O', one word corresponds to one mark, BN and ORG are different entity labels respectively, BN represents an entity of a bond, and ORG represents an entity of an organization, so that accurate word segmentation is realized while entity identification and extraction are realized, and the extraction accuracy is improved.
Another embodiment of the present invention provides a text extraction device based on fusion pre-training, as shown in fig. 3, the device includes:
the acquisition module 11 is used for acquiring a text to be extracted;
the pre-training module 12 is used for performing pre-training encoding on the text to be extracted through a pre-training model to obtain a corresponding character vector;
the semantic extraction module 13 is used for selecting at least part of the character vectors to carry out semantic extraction on the adjacent texts and splicing to obtain semantic feature vectors;
the fusion module 14 is used for performing feature selection on the semantic feature vectors and fusing the semantic feature vectors to obtain effective word feature vectors;
and the segmentation recognition module 15 is used for performing split decoding on the effective word feature vectors to obtain a word segmentation result and an entity recognition result respectively.
The acquisition module 11, the pre-training module 12, the semantic extraction module 13, the fusion module 14 and the segmentation recognition module 15 are connected in sequence. A module referred to in the present invention is a series of computer program instruction segments capable of completing a specific function, and is better suited than a whole program to describing the execution process of text extraction based on fusion pre-training; for the specific implementation of each module, reference is made to the corresponding method embodiment above, which is not repeated here.
Another embodiment of the present invention provides a text extraction system based on fusion pre-training, as shown in fig. 4, the system 10 includes:
one or more processors 110 and a memory 120, where one processor 110 is illustrated in fig. 4, the processor 110 and the memory 120 may be connected by a bus or other means, and fig. 4 illustrates a connection by a bus as an example.
Processor 110 is used to implement various control logic for system 10, which may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a single chip, an ARM (Acorn RISC machine) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. Also, the processor 110 may be any conventional processor, microprocessor, or state machine. Processor 110 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, and/or any other such configuration.
The memory 120, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions corresponding to the text extraction method based on fusion pre-training in the embodiments of the present invention. The processor 110 executes the various functional applications and data processing of the system 10, i.e. implements the text extraction method based on fusion pre-training in the above method embodiments, by running the non-volatile software programs, instructions and units stored in the memory 120.
The memory 120 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the system 10, and the like. Further, the memory 120 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 120 optionally includes memory located remotely from processor 110, which may be connected to system 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more units are stored in the memory 120, and when executed by the one or more processors 110, perform the text extraction method based on fusion pre-training in any of the above-described method embodiments, e.g., performing the above-described method steps S100 to S500 in fig. 1.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer-executable instructions for execution by one or more processors, for example, to perform method steps S100-S500 of fig. 1 described above.
By way of example, non-volatile storage media can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM) and direct Rambus RAM (DRRAM). The disclosed memory components or memories of the operating environment described herein are intended to comprise one or more of these and/or any other suitable types of memory.
In summary, in the text extraction method, system and medium based on fusion pre-training disclosed by the invention, the method acquires a text to be extracted; performs pre-training encoding on the text to be extracted through a pre-training model to obtain corresponding character vectors; selects at least part of the character vectors to perform semantic extraction on adjacent text and splices the results to obtain semantic feature vectors; performs feature selection on the semantic feature vectors and fuses them to obtain effective word feature vectors; and performs split decoding on the effective word feature vectors to obtain a word segmentation result and an entity recognition result respectively. Because the character vectors are encoded within a pre-training model framework, and at least part of the character vectors are fused to extract the semantics of adjacent text and thus learn textual semantic information, the semantic learning capability is enhanced, the problem of fuzzy boundaries in the final word segmentation result is effectively avoided, and the accuracy of text extraction is improved.
Of course, it will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by instructing relevant hardware (such as a processor, a controller, etc.) through a computer program, which may be stored in a non-volatile computer-readable storage medium, and the computer program may include the processes of the above method embodiments when executed. The storage medium may be a memory, a magnetic disk, a floppy disk, a flash memory, an optical memory, etc.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A text extraction method based on fusion pre-training is characterized by comprising the following steps:
acquiring a text to be extracted;
performing pre-training encoding on the text to be extracted through a pre-training model to obtain a corresponding character vector;
selecting at least part of the character vectors to carry out semantic extraction on adjacent texts, and splicing to obtain semantic feature vectors;
performing feature selection on the semantic feature vectors and fusing to obtain effective word feature vectors;
and performing split decoding on the effective word feature vectors to obtain a word segmentation result and an entity recognition result respectively.
2. The text extraction method based on fusion pre-training according to claim 1, wherein before the pre-training encoding is performed on the text to be extracted through the pre-training model to obtain the corresponding character vector, the method further comprises:
performing adversarial training on the pre-training model.
3. The text extraction method based on fusion pre-training according to claim 2, wherein the adversarial training of the pre-training model comprises:
constructing adversarial samples, and adding the adversarial samples into the input embedding layer of the pre-training model as a perturbation;
and performing adversarial training on the pre-training model according to the adversarial samples to update the model parameters, ending the adversarial training when the number of updates reaches a preset number.
4. The text extraction method based on fusion pre-training according to claim 3, wherein constructing the adversarial samples specifically comprises:
calculating the adversarial samples according to the following formulas:

g_adv = ∇_δ L(f_θ(X + δ_{t-1}), y)

δ_t = Π_{‖δ‖_F ≤ ε} (δ_{t-1} + α · g_adv / ‖g_adv‖_F)

where g_adv represents the gradient of the pre-training model with respect to the perturbation during adversarial training, X represents the input information, y represents the label information, δ_{t-1} represents the magnitude of the perturbation at step t-1, f_θ represents the output of the pre-training model, L represents the loss function, ∇_δ denotes taking the gradient of the loss function with respect to the perturbation, α represents the adversarial learning rate (step size), ‖·‖_F is the Frobenius norm, g_t represents the accumulated gradient of the pre-training model at step t, and Π denotes projecting the perturbation back onto the ball of radius ε in the Frobenius norm.
5. The text extraction method based on fusion pre-training according to claim 4, wherein performing adversarial training on the pre-training model according to the adversarial samples to update the model parameters, and ending the adversarial training when the number of updates reaches a preset number, specifically comprises:
after perturbing the pre-training model according to the adversarial samples, accumulating the gradient of the parameters θ according to the formula

g_t = g_{t-1} + (1/K) · E[∇_θ L(f_θ(X + δ_{t-1}), y)]

where K represents the number of gradient ascent steps, E represents the mathematical expectation, g_{t-1} is the gradient of the pre-training model at step t-1, and ∇_θ denotes taking the gradient of the loss function with respect to the model parameters;
and updating the parameters of the pre-training model according to the accumulated gradient, ending the adversarial training when the number of updates reaches the preset number.
6. The method for extracting text based on fusion pre-training as claimed in claim 1, wherein said selecting at least part of said character vectors to perform semantic extraction on neighboring text and concatenating to obtain semantic feature vectors comprises:
selecting coding layers at a plurality of preset positions in the pre-training model as target coding layers;
respectively inputting the output results of the target coding layers into text classification models connected in one-to-one correspondence to perform semantic extraction of adjacent texts, wherein the number of the text classification models is the same as the number of the target coding layers, and the kernel sizes of the text classification models differ from one another;
and performing fusion splicing on the extraction result of each text classification model to obtain the semantic feature vector.
7. The text extraction method based on fusion pre-training according to claim 6, wherein performing feature selection on the semantic feature vectors and fusing them to obtain the effective word feature vectors specifically comprises:
performing feature selection on the semantic feature vectors through a fully connected layer and fusing them to obtain the effective word feature vectors, wherein the input of the fully connected layer is F_input and its output is F_output:

F_input = concat(E_1, E_2, …, E_i, …, E_n),

F_output = softmax(F_input) = softmax(concat(E_1, E_2, …, E_i, …, E_n)),

where E_i is the output result corresponding to the i-th target coding layer and n is the number of target coding layers.
8. The method of claim 1, wherein the kernel sizes of the text classification models range from 3 to 7.
9. A text extraction system based on fusion pre-training, the system comprising at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the text extraction method based on fusion pre-training according to any one of claims 1-8.
10. A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the text extraction method based on fusion pre-training according to any one of claims 1-8.
CN202210038607.3A 2022-01-13 2022-01-13 Text extraction method, system and medium based on fusion pre-training Pending CN114398855A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210038607.3A CN114398855A (en) 2022-01-13 2022-01-13 Text extraction method, system and medium based on fusion pre-training

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210038607.3A CN114398855A (en) 2022-01-13 2022-01-13 Text extraction method, system and medium based on fusion pre-training

Publications (1)

Publication Number Publication Date
CN114398855A true CN114398855A (en) 2022-04-26

Family

ID=81230087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210038607.3A Pending CN114398855A (en) 2022-01-13 2022-01-13 Text extraction method, system and medium based on fusion pre-training

Country Status (1)

Country Link
CN (1) CN114398855A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115034302A (en) * 2022-06-07 2022-09-09 四川大学 Relation extraction method, device, equipment and medium for optimizing information fusion strategy
CN115034302B (en) * 2022-06-07 2023-04-11 四川大学 Relation extraction method, device, equipment and medium for optimizing information fusion strategy
CN115238670A (en) * 2022-08-09 2022-10-25 平安科技(深圳)有限公司 Information text extraction method, device, equipment and storage medium
CN115238670B (en) * 2022-08-09 2023-07-04 平安科技(深圳)有限公司 Information text extraction method, device, equipment and storage medium
CN115114439A (en) * 2022-08-30 2022-09-27 北京百度网讯科技有限公司 Method and device for multi-task model reasoning and multi-task information processing
CN116150698A (en) * 2022-09-08 2023-05-23 天津大学 Automatic DRG grouping method and system based on semantic information fusion
CN116150698B (en) * 2022-09-08 2023-08-22 天津大学 Automatic DRG grouping method and system based on semantic information fusion
CN116070638A (en) * 2023-01-03 2023-05-05 广东工业大学 Training updating method and system for Chinese sentence feature construction
CN116070638B (en) * 2023-01-03 2023-09-08 广东工业大学 Training updating method and system for Chinese sentence feature construction
CN115796189A (en) * 2023-01-31 2023-03-14 北京面壁智能科技有限责任公司 Semantic determination method, device, electronic equipment and medium

Similar Documents

Publication Publication Date Title
CN114398855A (en) Text extraction method, system and medium based on fusion pre-training
CN111626063B (en) Text intention identification method and system based on projection gradient descent and label smoothing
Chen et al. Long short-term memory neural networks for chinese word segmentation
Shi et al. Contextual spoken language understanding using recurrent neural networks
WO2023160472A1 (en) Model training method and related device
CN114398881A (en) Transaction information identification method, system and medium based on graph neural network
CN113723105A (en) Training method, device and equipment of semantic feature extraction model and storage medium
Harizi et al. Convolutional neural network with joint stepwise character/word modeling based system for scene text recognition
CN115544303A (en) Method, apparatus, device and medium for determining label of video
CN112000809A (en) Incremental learning method and device for text categories and readable storage medium
Wang et al. Gated convolutional LSTM for speech commands recognition
CN114936290A (en) Data processing method and device, storage medium and electronic equipment
CN116311323A (en) Pre-training document model alignment optimization method based on contrast learning
Elleuch et al. The Effectiveness of Transfer Learning for Arabic Handwriting Recognition using Deep CNN.
CN114692624A (en) Information extraction method and device based on multitask migration and electronic equipment
Cui et al. An end-to-end network for irregular printed Mongolian recognition
Zia et al. Recognition of printed Urdu script in Nastaleeq font by using CNN-BiGRU-GRU based encoder-decoder framework
CN114691879A (en) Information extraction method and device based on text features and electronic equipment
EP3627403A1 (en) Training of a one-shot learning classifier
Kišš et al. SoftCTC—semi-supervised learning for text recognition using soft pseudo-labels
KR102542220B1 (en) Method of semantic segmentation based on self-knowledge distillation and semantic segmentation device based on self-knowledge distillation
Sherly et al. An efficient indoor scene character recognition using Bayesian interactive search algorithm-based adaboost-CNN classifier
Mingote et al. Training Speaker Enrollment Models by Network Optimization.
CN110619118B (en) Automatic text generation method
Jangpangi et al. Handwriting recognition using wasserstein metric in adversarial learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination