CN117195898A - Entity relation extraction method and device, electronic equipment and storage medium - Google Patents

Entity relation extraction method and device, electronic equipment and storage medium

Info

Publication number
CN117195898A
Authority
CN
China
Prior art keywords
sentence
processing
matrix
model
extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311177541.7A
Other languages
Chinese (zh)
Inventor
袁美璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN202311177541.7A
Publication of CN117195898A
Legal status: Pending

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Machine Translation (AREA)

Abstract

The application relates to artificial intelligence technology in the field of financial technology and discloses an entity relation extraction method, which comprises the following steps: obtaining a sentence to be recognized, and performing text prediction processing on the sentence to be recognized by using a pre-training model to obtain predicted text data; performing matrix construction on the predicted text data to obtain a prediction vector matrix, and performing feature extraction processing on the prediction vector matrix to obtain a sentence feature set; performing attention extraction processing on the sentence features in the sentence feature set based on an attention mechanism to obtain an attention weight matrix; and performing relation extraction processing on the attention weight matrix by using a full connection layer to obtain an entity relation extraction result. In addition, the application also relates to blockchain technology, and the prediction vector matrix can be stored in nodes of the blockchain. The application also provides an entity relation extraction device, an electronic device and a storage medium. The application can improve the efficiency of entity relation extraction in the field of financial technology.

Description

Entity relation extraction method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a method and apparatus for extracting an entity relationship, an electronic device, and a storage medium.
Background
The field of financial technology involves a large volume of financial documents and materials, and many different entities appear across them; extracting these entities and the relations between them helps to review the financial documents and materials more effectively. In entity relation extraction tasks, recurrent neural networks can learn high-quality text representations and perform well in Chinese and English natural language processing tasks. However, the modeling capacity of a recurrent neural network is limited by the length and representation of the text sequence, and feature extraction may suffer when processing longer text. Traditional CNN networks extract text features through local windows, which makes it difficult to capture long-distance dependencies. Therefore, an entity relation extraction method with higher accuracy is needed.
Disclosure of Invention
The application provides an entity relation extraction method and device, an electronic device and a storage medium, which mainly aim to improve the accuracy of entity relation extraction in the field of financial technology.
In order to achieve the above object, the present application provides a method for extracting entity relationships, including:
acquiring sentences to be identified, and carrying out text prediction processing on the sentences to be identified by utilizing a pre-training model to obtain predicted text data;
performing matrix construction on the predicted text data to obtain a predicted vector matrix, and performing feature extraction processing on the predicted vector matrix to obtain a sentence feature set;
performing attention extraction processing on sentence features in the sentence feature set based on an attention mechanism to obtain an attention weight matrix;
and carrying out relation extraction processing on the attention weight matrix by using a full connection layer to obtain an entity relation extraction result.
Optionally, before performing text prediction processing on the sentence to be recognized by using the pre-training model to obtain predicted text data, the method further includes:
acquiring a sentence data set with an entity tag in the preset service field, and constructing a RoBERTa model as an initial reference model;
and carrying out model training processing on the initial reference model by utilizing the sentence data set to obtain a pre-training model.
Optionally, the performing model training processing on the initial reference model by using the sentence data set to obtain a pre-training model includes:
performing data preprocessing on the sentence data set with the entity tag to obtain a standard data set;
performing data division on the standard data set according to a preset division proportion to obtain a training data set and a test data set;
respectively carrying out character identification processing on the training data set and the test data set by using preset identification characters to obtain a standard training set and a standard test set;
inputting the standard training set into the initial reference model to obtain a prediction result;
and performing model test processing on the initial reference model according to the standard test set and the prediction result, and performing model screening according to a regularization function to obtain a pre-training model.
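As an illustration of the training and screening steps above, the following sketch shows one plausible realization: it assumes a PyTorch-style classifier, an AdamW optimizer and a cross-entropy loss (none of which are specified in the application), trains on the standard training set with dropout active as the regularization function, evaluates on the standard test set, and keeps the best-performing checkpoint as the pre-training model.

```python
# Hedged sketch only: optimizer, loss, epoch count and the exact meaning of
# "model screening according to a regularization function" are assumptions here.
import copy
import torch
import torch.nn as nn

def train_and_screen(model, train_loader, test_loader, epochs=3, lr=2e-5):
    """Train the initial reference model, test it, and keep the best checkpoint."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    best_acc, best_model = 0.0, None

    for _ in range(epochs):
        model.train()                                # dropout layers active (regularization)
        for inputs, labels in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), labels)    # prediction result vs. entity-relation label
            loss.backward()
            optimizer.step()

        model.eval()                                 # model test processing on the standard test set
        correct, total = 0, 0
        with torch.no_grad():
            for inputs, labels in test_loader:
                preds = model(inputs).argmax(dim=-1)
                correct += (preds == labels).sum().item()
                total += labels.numel()
        acc = correct / max(total, 1)
        if acc > best_acc:                           # "model screening": retain the best model
            best_acc, best_model = acc, copy.deepcopy(model)

    return best_model
```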
Optionally, the performing matrix construction on the predicted text data to obtain a vector matrix corresponding to the predicted text data includes:
splitting the predicted text data into a plurality of characters, and mapping the characters into vectors to obtain a plurality of character vectors;
and splicing the plurality of character vectors into a vector matrix, and taking the matrix as a prediction vector matrix.
Optionally, the feature extraction processing is performed on the prediction vector matrix to obtain a sentence feature set, including:
obtaining a pre-constructed feature extraction model, wherein the feature extraction model is constructed from a convolutional network layer, a Capsule network layer and a classification layer;
and inputting the predictive vector matrix into the feature extraction model to obtain a sentence feature set.
Optionally, the attention-based mechanism performs attention extraction processing on the sentence features in the sentence feature set to obtain an attention weight matrix, including:
performing weight distribution processing on each sentence characteristic in the sentence characteristic set to obtain sentence characteristics with weight;
and carrying out weight calculation processing according to the weights and the sentence characteristics to obtain an attention weight matrix.
Optionally, the performing relationship extraction processing on the attention weight matrix by using a full connection layer to obtain an entity relationship extraction result includes:
inputting the attention weight matrix into the full-connection layer to obtain a corresponding probability value;
and taking the relationship label corresponding to the probability value meeting the preset condition as an entity relationship result.
In order to solve the above-mentioned problem, the present application further provides an entity relationship extraction device, which includes:
the text prediction module is used for acquiring sentences to be recognized, and performing text prediction processing on the sentences to be recognized by utilizing the pre-training model to obtain predicted text data;
the feature extraction module is used for constructing the matrix of the predicted text data to obtain a predicted vector matrix, and carrying out feature extraction processing on the predicted vector matrix to obtain a sentence feature set;
the attention extraction module is used for performing attention extraction processing on the sentence features in the sentence feature set based on an attention mechanism to obtain an attention weight matrix;
and the relation extraction module is used for carrying out relation extraction processing on the attention weight matrix by utilizing the full connection layer to obtain an entity relation extraction result.
In order to solve the above-mentioned problems, the present application also provides an electronic apparatus including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the entity relationship extraction method described above.
In order to solve the above-mentioned problems, the present application also provides a storage medium having stored therein at least one computer program that is executed by a processor in an electronic device to implement the above-mentioned entity relationship extraction method.
In the embodiment of the application, text prediction processing is performed on the sentence to be recognized by using a pre-training model to obtain predicted text data, thereby realizing accurate prediction of the text; feature extraction processing is performed on the prediction vector matrix corresponding to the predicted text data to obtain a sentence feature set; attention extraction processing is performed on the sentence features in the sentence feature set based on an attention mechanism to obtain an attention weight matrix, so that attention weight matrices corresponding to different weights can be obtained through the attention mechanism; and relation extraction processing is performed on the attention weight matrix by using a full connection layer to obtain an accurate entity relation extraction result. Therefore, the entity relation extraction method and device, electronic device and storage medium can solve the problem of low accuracy of entity relation extraction in the field of financial technology.
Drawings
FIG. 1 is a flow chart of an entity relation extraction method according to an embodiment of the present application;
FIG. 2 is a detailed flow chart of one of the steps shown in FIG. 1;
FIG. 3 is a functional block diagram of an entity relationship extraction device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device for implementing the entity relationship extraction method according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The embodiment of the application provides a method for extracting entity relations. The execution subject of the entity relationship extraction method includes, but is not limited to, at least one of a server, a terminal, and the like, which can be configured to execute the method provided by the embodiment of the application. In other words, the entity relationship extraction method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The service end includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
Referring to fig. 1, a flow chart of a method for extracting entity relationships according to an embodiment of the application is shown. In this embodiment, the entity relationship extraction method includes the following steps S1 to S4:
S1, acquiring sentences to be identified, and performing text prediction processing on the sentences to be identified by using a pre-training model to obtain predicted text data.
In the embodiment of the present application, the sentence to be recognized refers to the input data for entity relation extraction, and may be a financial sentence in the field of financial technology; for example, the sentence to be recognized may be "Client A and Client B had a competitive relationship more than ten years ago".
Specifically, before the text prediction processing is performed on the sentence to be recognized by using the pre-training model to obtain predicted text data, the method further includes:
S11, acquiring a sentence data set with an entity tag in the preset service field, and constructing a RoBERTa model as an initial reference model;
and S12, performing model training processing on the initial reference model by utilizing the sentence data set to obtain a pre-training model.
In detail, the preset business field may be the field of financial technology; the sentence data set with entity tags refers to a data set in which the entities in the sentences are labeled; and the RoBERTa model is an optimized and improved version of the BERT model, which refines not only the general BERT training strategy but also the model at the data level.
Further, the performing model training processing on the initial reference model by using the sentence data set to obtain a pre-training model, including:
performing data preprocessing on the sentence data set with the entity tag to obtain a standard data set;
performing data division on the standard data set according to a preset division proportion to obtain a training data set and a test data set;
respectively carrying out character identification processing on the training data set and the test data set by using preset identification characters to obtain a standard training set and a standard test set;
inputting the standard training set into the initial reference model to obtain a prediction result;
and performing model test processing on the initial reference model according to the standard test set and the prediction result, and performing model screening according to a regularization function to obtain a pre-training model.
In detail, the data preprocessing may be deleting irrelevant sequences; the preset division proportion may be 80% for the training data set and 20% for the test data set; the preset identification characters may be [CLS] and [SEP]; and the regularization function may be a dropout function.
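A minimal sketch of this data preparation step, assuming the labeled sentences arrive as (sentence, label) pairs; the helper name prepare_datasets and the pair format are illustrative assumptions, not details taken from the application:

```python
import random

def prepare_datasets(labeled_sentences, split_ratio=0.8, seed=42):
    """Clean the tagged sentences, split them 80/20, and add the [CLS]/[SEP] markers."""
    # Data preprocessing: drop empty or irrelevant sequences to obtain the standard data set.
    standard = [(s.strip(), tag) for s, tag in labeled_sentences if s.strip()]

    # Data division according to the preset proportion: 80% training set, 20% test set.
    random.Random(seed).shuffle(standard)
    cut = int(len(standard) * split_ratio)
    train, test = standard[:cut], standard[cut:]

    # Character identification processing with the preset identification characters.
    def mark(pairs):
        return [("[CLS] " + sentence + " [SEP]", tag) for sentence, tag in pairs]

    return mark(train), mark(test)  # standard training set, standard test set
```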
S2, constructing a matrix of the predicted text data to obtain a predicted vector matrix, and extracting features of the predicted vector matrix to obtain a sentence feature set.
In the embodiment of the present application, the performing matrix construction on the predicted text data to obtain a vector matrix corresponding to the predicted text data includes:
splitting the predicted text data into a plurality of characters, and mapping the characters into vectors to obtain a plurality of character vectors;
and splicing the plurality of character vectors into a vector matrix, and taking the matrix as a prediction vector matrix.
In detail, the predicted text data is split into a plurality of characters by using a tokenizer.
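The matrix construction described here can be sketched as follows; the character-to-vector lookup table and the 768-dimensional embedding size are assumptions for illustration (in practice the vectors would typically come from the pre-training model's embedding layer):

```python
import numpy as np

def build_prediction_matrix(predicted_text, embedding_table, embed_dim=768):
    """Split text into characters, map each to a vector, and stack into the prediction vector matrix."""
    chars = list(predicted_text)                                            # character-level split
    vectors = [embedding_table.get(c, np.zeros(embed_dim)) for c in chars]  # character -> vector
    if not vectors:
        return np.zeros((0, embed_dim))
    return np.stack(vectors)                                                # (sequence_length, embed_dim) matrix
```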
Specifically, the feature extraction processing is performed on the prediction vector matrix to obtain a sentence feature set, including:
obtaining a pre-constructed feature extraction model, wherein the feature extraction model is constructed from a convolutional network layer, a Capsule network layer and a classification layer;
and inputting the predictive vector matrix into the feature extraction model to obtain a sentence feature set.
In detail, the Capsule network layer is composed of one weight matrix layer, one conversion matrix layer and one L2 quantization layer. The input of the Capsule layer comes from the convolutional network layer, with a spatial size of 14x14 and a depth of 512; the 512-dimensional vector at each position is taken as one Capsule, so that 196 Capsules can be formed, and the Capsules are transformed through the weight matrix layer and the conversion matrix layer.
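A rough PyTorch sketch of this Capsule layer is given below. It reshapes the 14x14x512 convolutional feature map into 196 capsules of dimension 512 and passes them through a weight matrix and a conversion matrix; the final step is written as an L2 normalization, which is one reading of the "L2 quantization layer" mentioned above, and the 256-dimensional output capsule size is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CapsuleLayer(nn.Module):
    """Sketch of the weight matrix layer + conversion matrix layer + L2 step described above."""

    def __init__(self, in_dim=512, out_dim=256, num_capsules=196):
        super().__init__()
        # Weight matrix layer and conversion matrix layer, one transform per capsule position.
        self.weight = nn.Parameter(torch.randn(num_capsules, in_dim, in_dim) * 0.01)
        self.transform = nn.Parameter(torch.randn(num_capsules, in_dim, out_dim) * 0.01)

    def forward(self, conv_features):
        # conv_features: (batch, 512, 14, 14) from the convolutional network layer.
        caps = conv_features.flatten(2).transpose(1, 2)            # (batch, 196, 512): one capsule per position
        caps = torch.einsum("bnd,nde->bne", caps, self.weight)     # weight matrix layer
        caps = torch.einsum("bnd,nde->bne", caps, self.transform)  # conversion matrix layer
        return F.normalize(caps, p=2, dim=-1)                      # L2 step, interpreted here as normalization
```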
And S3, performing attention extraction processing on the sentence characteristics in the sentence characteristic set based on an attention mechanism to obtain an attention weight matrix.
In the embodiment of the present application, the attention extraction processing is performed on the sentence features in the sentence feature set based on the attention mechanism to obtain an attention weight matrix, including:
performing weight distribution processing on each sentence characteristic in the sentence characteristic set to obtain sentence characteristics with weight;
and carrying out weight calculation processing according to the weights and the sentence characteristics to obtain an attention weight matrix.
In detail, the attention mechanism in the attention layer is used to distribute weights over the features, and the sentence feature vectors are calculated.
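One simple way to realize this weight distribution and weighted aggregation is sketched below; the use of a learned query vector and a softmax over the sentence features is an assumption about how the weights are computed:

```python
import torch
import torch.nn.functional as F

def attention_weight_matrix(sentence_features, query):
    """sentence_features: (num_features, dim); query: (dim,) learned attention vector."""
    scores = sentence_features @ query                    # one score per sentence feature
    weights = F.softmax(scores, dim=0)                    # weight distribution over the features
    weighted = weights.unsqueeze(1) * sentence_features   # attention weight matrix (weighted features)
    sentence_vector = weighted.sum(dim=0)                 # aggregated sentence feature vector
    return weighted, sentence_vector
```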
And S4, carrying out relation extraction processing on the attention weight matrix by utilizing a full connection layer to obtain an entity relation extraction result.
In the embodiment of the present application, the relationship extraction processing is performed on the attention weight matrix by using a full connection layer to obtain an entity relationship extraction result, including:
inputting the attention weight matrix into the full-connection layer to obtain a corresponding probability value;
and taking the relationship label corresponding to the probability value meeting the preset condition as an entity relationship result.
In detail, the full connection layer assembles the local feature vectors with the weight matrix, and the relation extraction result is output at the output layer.
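A sketch of this relation extraction head follows. The relation label set, the softmax over the full connection layer's outputs, and taking the highest probability as the "preset condition" are all illustrative assumptions rather than details from the application:

```python
import torch
import torch.nn as nn

RELATION_LABELS = ["competition", "cooperation", "ownership", "no_relation"]  # hypothetical label set

class RelationHead(nn.Module):
    """Full connection layer that maps attention-weighted features to a relation label."""

    def __init__(self, feature_dim=256, num_labels=len(RELATION_LABELS)):
        super().__init__()
        self.fc = nn.Linear(feature_dim, num_labels)           # full connection layer

    def forward(self, attention_matrix):
        # Pool the attention weight matrix into one vector before the full connection layer.
        pooled = attention_matrix.mean(dim=0) if attention_matrix.dim() == 2 else attention_matrix
        probs = torch.softmax(self.fc(pooled), dim=-1)         # corresponding probability values
        best = int(probs.argmax())                             # probability value meeting the preset condition
        return RELATION_LABELS[best], probs
```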
The method constructs an attention-based Capsule network model for Chinese character-level entity relation extraction. The model applies a convolutional neural network equipped with an attention mechanism to the Chinese relation extraction task and at the same time considers the relations between the words and sentences of the text, so that the overall structural information is retained and the relations between local parts and features can be captured better. Meanwhile, the Capsule network uses capsules as its basic units; each capsule can represent a different pose, which makes the network better suited to variable-length input and able to process text sequences of different lengths. Furthermore, the Capsule network has a certain robustness to noise and perturbation, which reduces the risk of losing part of the information.
In the embodiment of the application, text prediction processing is performed on the sentence to be recognized by using a pre-training model to obtain predicted text data, thereby realizing accurate prediction of the text; feature extraction processing is performed on the prediction vector matrix corresponding to the predicted text data to obtain a sentence feature set; attention extraction processing is performed on the sentence features in the sentence feature set based on an attention mechanism to obtain an attention weight matrix, so that attention weight matrices corresponding to different weights can be obtained through the attention mechanism; and relation extraction processing is performed on the attention weight matrix by using a full connection layer to obtain an accurate entity relation extraction result. Therefore, the entity relation extraction method provided by the application can solve the problem of low accuracy of entity relation extraction in the field of financial technology.
Fig. 3 is a functional block diagram of an entity relationship extraction device according to an embodiment of the application.
The entity relationship extraction apparatus 100 of the present application may be installed in an electronic device. Depending on the functions implemented, the entity relationship extraction apparatus 100 may include a text prediction module 101, a feature extraction module 102, an attention extraction module 103 and a relationship extraction module 104. A module of the application, which may also be referred to as a unit, refers to a series of computer program segments that are stored in the memory of the electronic device, can be executed by the processor of the electronic device, and perform a fixed function.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the text prediction module 101 is configured to obtain a sentence to be recognized, and perform text prediction processing on the sentence to be recognized by using a pre-training model to obtain predicted text data;
the feature extraction module 102 is configured to perform matrix construction on the predicted text data to obtain a predicted vector matrix, and perform feature extraction processing on the predicted vector matrix to obtain a sentence feature set;
the attention extraction module 103 is configured to perform attention extraction processing on sentence features in the sentence feature set based on an attention mechanism, so as to obtain an attention weight matrix;
the relationship extraction module 104 is configured to perform relationship extraction processing on the attention weight matrix by using a full connection layer, so as to obtain an entity relationship extraction result.
In detail, the specific embodiments of the modules of the entity relationship extraction apparatus 100 are as follows:
step one, obtaining sentences to be identified, and carrying out text prediction processing on the sentences to be identified by utilizing a pre-training model to obtain predicted text data.
In the embodiment of the present application, the sentence to be recognized refers to the input data for entity relation extraction, and may be a financial sentence in the field of financial technology; for example, the sentence to be recognized may be "Client A and Client B had a competitive relationship more than ten years ago".
Specifically, before the text prediction processing is performed on the sentence to be recognized by using the pre-training model to obtain predicted text data, the method further includes:
acquiring a sentence data set with an entity tag in the preset service field, and constructing a RoBERTa model as an initial reference model;
and carrying out model training processing on the initial reference model by utilizing the sentence data set to obtain a pre-training model.
In detail, the preset business field may be the field of financial technology; the sentence data set with entity tags refers to a data set in which the entities in the sentences are labeled; and the RoBERTa model is an optimized and improved version of the BERT model, which refines not only the general BERT training strategy but also the model at the data level.
Further, the performing model training processing on the initial reference model by using the sentence data set to obtain a pre-training model, including:
performing data preprocessing on the sentence data set with the entity tag to obtain a standard data set;
performing data division on the standard data set according to a preset division proportion to obtain a training data set and a test data set;
respectively carrying out character identification processing on the training data set and the test data set by using preset identification characters to obtain a standard training set and a standard test set;
inputting the standard training set into the initial reference model to obtain a prediction result;
and performing model test processing on the initial reference model according to the standard test set and the prediction result, and performing model screening according to a regularization function to obtain a pre-training model.
In detail, the data preprocessing may be deleting irrelevant sequences; the preset division proportion may be 80% for the training data set and 20% for the test data set; the preset identification characters may be [CLS] and [SEP]; and the regularization function may be a dropout function.
Step two, constructing a matrix of the predicted text data to obtain a prediction vector matrix, and extracting features of the prediction vector matrix to obtain a sentence feature set.
In the embodiment of the present application, the performing matrix construction on the predicted text data to obtain a vector matrix corresponding to the predicted text data includes:
splitting the predicted text data into a plurality of characters, and mapping the characters into vectors to obtain a plurality of character vectors;
and splicing the plurality of character vectors into a vector matrix, and taking the matrix as a prediction vector matrix.
In detail, the predicted text data is split into a plurality of characters by using a tokenizer.
Specifically, the feature extraction processing is performed on the prediction vector matrix to obtain a sentence feature set, including:
obtaining a pre-constructed feature extraction model, wherein the feature extraction model is constructed from a convolutional network layer, a Capsule network layer and a classification layer;
and inputting the predictive vector matrix into the feature extraction model to obtain a sentence feature set.
In detail, the Capsule network layer is composed of one weight matrix layer, one conversion matrix layer and one L2 quantization layer. The input of the Capsule layer comes from the convolutional network layer, with a spatial size of 14x14 and a depth of 512; the 512-dimensional vector at each position is taken as one Capsule, so that 196 Capsules can be formed, and the Capsules are transformed through the weight matrix layer and the conversion matrix layer.
Step three, performing attention extraction processing on the sentence features in the sentence feature set based on an attention mechanism to obtain an attention weight matrix.
In the embodiment of the present application, the attention extraction processing is performed on the sentence features in the sentence feature set based on the attention mechanism to obtain an attention weight matrix, including:
performing weight distribution processing on each sentence characteristic in the sentence characteristic set to obtain sentence characteristics with weight;
and carrying out weight calculation processing according to the weights and the sentence characteristics to obtain an attention weight matrix.
In detail, the attention mechanism in the attention layer is used to distribute weights over the features, and the sentence feature vectors are calculated.
Step four, performing relation extraction processing on the attention weight matrix by using a full connection layer to obtain an entity relation extraction result.
In the embodiment of the present application, the relationship extraction processing is performed on the attention weight matrix by using a full connection layer to obtain an entity relationship extraction result, including:
inputting the attention weight matrix into the full-connection layer to obtain a corresponding probability value;
and taking the relationship label corresponding to the probability value meeting the preset condition as an entity relationship result.
In detail, the full connection layer assembles the local feature vectors with the weight matrix, and the relation extraction result is output at the output layer.
The method constructs an attention-based Capsule network model for Chinese character-level entity relation extraction. The model applies a convolutional neural network equipped with an attention mechanism to the Chinese relation extraction task and at the same time considers the relations between the words and sentences of the text, so that the overall structural information is retained and the relations between local parts and features can be captured better. Meanwhile, the Capsule network uses capsules as its basic units; each capsule can represent a different pose, which makes the network better suited to variable-length input and able to process text sequences of different lengths. Furthermore, the Capsule network has a certain robustness to noise and perturbation, which reduces the risk of losing part of the information.
In the embodiment of the application, text prediction processing is performed on the sentence to be recognized by using a pre-training model to obtain predicted text data, thereby realizing accurate prediction of the text; feature extraction processing is performed on the prediction vector matrix corresponding to the predicted text data to obtain a sentence feature set; attention extraction processing is performed on the sentence features in the sentence feature set based on an attention mechanism to obtain an attention weight matrix, so that attention weight matrices corresponding to different weights can be obtained through the attention mechanism; and relation extraction processing is performed on the attention weight matrix by using a full connection layer to obtain an accurate entity relation extraction result. Therefore, the entity relation extraction device provided by the application can solve the problem of low accuracy of entity relation extraction in the field of financial technology.
Fig. 4 is a schematic structural diagram of an electronic device for implementing the entity relationship extraction method according to an embodiment of the present application.
The electronic device 1 may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program stored in the memory 11 and executable on the processor 10, such as an entity relationship extraction program.
The processor 10 may be formed by an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be formed by a plurality of integrated circuits packaged with the same function or different functions, including one or more central processing units (Central Processing Unit, CPU), a microprocessor, a digital processing chip, a graphics processor, a combination of various control chips, and so on. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the entire electronic device using various interfaces and lines, and executes various functions of the electronic device and processes data by running or executing programs or modules (e.g., executing an entity relationship extraction program, etc.) stored in the memory 11, and calling data stored in the memory 11.
The memory 11 includes at least one type of readable storage medium including flash memory, a removable hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, such as a mobile hard disk of the electronic device. The memory 11 may in other embodiments also be an external storage device of the electronic device, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only for storing application software installed in an electronic device and various types of data, such as codes of entity relationship extraction programs, but also for temporarily storing data that has been output or is to be output.
The communication bus 12 may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus, or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable a connection communication between the memory 11 and at least one processor 10 etc.
The communication interface 13 is used for communication between the electronic device and other devices, including a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit such as a Keyboard (Keyboard), or alternatively a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the electronic device and for displaying a visual user interface.
Fig. 4 shows only an electronic device with components, it being understood by a person skilled in the art that the structure shown in fig. 4 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or may combine certain components, or may be arranged in different components.
For example, although not shown, the electronic device may further include a power source (such as a battery) for supplying power to the respective components, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device may further include various sensors, bluetooth modules, wi-Fi modules, etc., which are not described herein.
It should be understood that the embodiments described are for illustrative purposes only, and the scope of the patent application is not limited to this configuration.
The entity-relationship extraction program stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 10, may implement:
acquiring sentences to be identified, and carrying out text prediction processing on the sentences to be identified by utilizing a pre-training model to obtain predicted text data;
performing matrix construction on the predicted text data to obtain a predicted vector matrix, and performing feature extraction processing on the predicted vector matrix to obtain a sentence feature set;
performing attention extraction processing on sentence features in the sentence feature set based on an attention mechanism to obtain an attention weight matrix;
and carrying out relation extraction processing on the attention weight matrix by using a full connection layer to obtain an entity relation extraction result.
In particular, the specific implementation method of the above instructions by the processor 10 may refer to the description of the relevant steps in the corresponding embodiment of the drawings, which is not repeated herein.
Further, the modules/units integrated in the electronic device 1 may be stored in a storage medium if implemented in the form of software functional units and sold or used as separate products. The storage medium may be volatile or nonvolatile. For example, the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
The present application also provides a storage medium storing a computer program which, when executed by a processor of an electronic device, can implement:
acquiring sentences to be identified, and carrying out text prediction processing on the sentences to be identified by utilizing a pre-training model to obtain predicted text data;
performing matrix construction on the predicted text data to obtain a predicted vector matrix, and performing feature extraction processing on the predicted vector matrix to obtain a sentence feature set;
performing attention extraction processing on sentence features in the sentence feature set based on an attention mechanism to obtain an attention weight matrix;
and carrying out relation extraction processing on the attention weight matrix by using a full connection layer to obtain an entity relation extraction result.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, encryption algorithm and the like. The Blockchain (Blockchain), which is essentially a decentralised database, is a string of data blocks that are generated by cryptographic means in association, each data block containing a batch of information of network transactions for verifying the validity of the information (anti-counterfeiting) and generating the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
The embodiment of the application can acquire and process the related data based on the artificial intelligence technology. Among these, artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend and extend human intelligence, sense the environment, acquire knowledge and use knowledge to obtain optimal results.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the system claims can also be implemented by means of software or hardware by means of one unit or means. The terms first, second, etc. are used to denote a name, but not any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present application and not for limiting the same, and although the present application has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present application without departing from the spirit and scope of the technical solution of the present application.

Claims (10)

1. A method for extracting an entity relationship, the method comprising:
acquiring sentences to be identified, and carrying out text prediction processing on the sentences to be identified by utilizing a pre-training model to obtain predicted text data;
performing matrix construction on the predicted text data to obtain a predicted vector matrix, and performing feature extraction processing on the predicted vector matrix to obtain a sentence feature set;
performing attention extraction processing on sentence features in the sentence feature set based on an attention mechanism to obtain an attention weight matrix;
and carrying out relation extraction processing on the attention weight matrix by using a full connection layer to obtain an entity relation extraction result.
2. The entity-relationship extraction method of claim 1, wherein before performing text prediction processing on the sentence to be recognized using a pre-training model to obtain predicted text data, the method further comprises:
acquiring a sentence data set with an entity tag in the preset service field, and constructing a RoBERTa model as an initial reference model;
and carrying out model training processing on the initial reference model by utilizing the sentence data set to obtain a pre-training model.
3. The entity-relationship extraction method of claim 2, wherein performing model training processing on the initial reference model by using the sentence data set to obtain a pre-training model comprises:
performing data preprocessing on the sentence data set with the entity tag to obtain a standard data set;
performing data division on the standard data set according to a preset division proportion to obtain a training data set and a test data set;
respectively carrying out character identification processing on the training data set and the test data set by using preset identification characters to obtain a standard training set and a standard test set;
inputting the standard training set into the initial reference model to obtain a prediction result;
and performing model test processing on the initial reference model according to the standard test set and the prediction result, and performing model screening according to a regularization function to obtain a pre-training model.
4. The entity relationship extraction method of claim 1, wherein the performing matrix construction on the predicted text data to obtain a vector matrix corresponding to the predicted text data includes:
splitting the predicted text data into a plurality of characters, and mapping the characters into vectors to obtain a plurality of character vectors;
and splicing the plurality of character vectors into a vector matrix, and taking the matrix as a prediction vector matrix.
5. The method for extracting entity relationships according to claim 1, wherein the feature extraction processing is performed on the prediction vector matrix to obtain a sentence feature set, including:
obtaining a pre-constructed feature extraction model, wherein the feature extraction model is constructed from a convolutional network layer, a Capsule network layer and a classification layer;
and inputting the predictive vector matrix into the feature extraction model to obtain a sentence feature set.
6. The method of claim 1, wherein the performing attention extraction processing on the sentence features in the sentence feature set based on an attention mechanism to obtain an attention weight matrix includes:
performing weight distribution processing on each sentence characteristic in the sentence characteristic set to obtain sentence characteristics with weight;
and carrying out weight calculation processing according to the weights and the sentence characteristics to obtain an attention weight matrix.
7. The method of claim 1, wherein the performing relationship extraction processing on the attention weight matrix by using a full connection layer to obtain an entity relationship extraction result comprises:
inputting the attention weight matrix into the full-connection layer to obtain a corresponding probability value;
and taking the relationship label corresponding to the probability value meeting the preset condition as an entity relationship result.
8. An entity relationship extraction apparatus, the apparatus comprising:
the text prediction module is used for acquiring sentences to be recognized, and performing text prediction processing on the sentences to be recognized by utilizing the pre-training model to obtain predicted text data;
the feature extraction module is used for constructing the matrix of the predicted text data to obtain a predicted vector matrix, and carrying out feature extraction processing on the predicted vector matrix to obtain a sentence feature set;
the attention extraction module is used for performing attention extraction processing on the sentence features in the sentence feature set based on an attention mechanism to obtain an attention weight matrix;
and the relation extraction module is used for carrying out relation extraction processing on the attention weight matrix by utilizing the full connection layer to obtain an entity relation extraction result.
9. An electronic device, the electronic device comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the entity relationship extraction method of any one of claims 1 to 7.
10. A storage medium storing a computer program, wherein the computer program when executed by a processor implements the entity-relationship extraction method of any one of claims 1 to 7.
CN202311177541.7A 2023-09-13 2023-09-13 Entity relation extraction method and device, electronic equipment and storage medium Pending CN117195898A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311177541.7A CN117195898A (en) 2023-09-13 2023-09-13 Entity relation extraction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311177541.7A CN117195898A (en) 2023-09-13 2023-09-13 Entity relation extraction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117195898A true CN117195898A (en) 2023-12-08

Family

ID=89004875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311177541.7A Pending CN117195898A (en) 2023-09-13 2023-09-13 Entity relation extraction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117195898A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination