WO2023159767A1 - Method and apparatus for detecting target words, electronic device, and storage medium (目标词语的检测方法、装置、电子设备及存储介质) - Google Patents

Method and apparatus for detecting target words, electronic device, and storage medium (目标词语的检测方法、装置、电子设备及存储介质)

Info

Publication number
WO2023159767A1
WO2023159767A1 (PCT/CN2022/090743)
Authority: WO (WIPO, PCT)
Prior art keywords: target, entity, feature, text, vector
Application number: PCT/CN2022/090743
Other languages: English (en), French (fr)
Inventors: 吴粤敏, 舒畅, 陈又新
Original Assignee: 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2023159767A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295 Named entity recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The present application relates to the technical field of artificial intelligence, and in particular to a method and apparatus for detecting target words, an electronic device, and a storage medium.
  • An embodiment of the present application provides a method for detecting target words, the method comprising:
  • performing target word detection on the target speech representation vector through the target word detection model to obtain target word data.
  • An embodiment of the present application provides an apparatus for detecting target words, the apparatus comprising:
  • a data acquisition module, configured to acquire original speech data to be detected;
  • an entity feature extraction module, configured to perform entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features;
  • a knowledge extraction module, configured to perform knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples;
  • a first feature extraction module, configured to perform feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and to perform feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector;
  • a second feature extraction module, configured to perform feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector;
  • a weighted calculation module, configured to perform weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector;
  • a target word detection module, configured to perform target word detection on the target speech representation vector through the target word detection model to obtain target word data.
  • An embodiment of the present application provides an electronic device, comprising a memory, a processor, a program stored in the memory and executable on the processor, and a data bus for implementing connection and communication between the processor and the memory; when the program is executed by the processor, a method for detecting target words is implemented, wherein the method comprises:
  • performing target word detection on the target speech representation vector through the target word detection model to obtain target word data.
  • An embodiment of the present application provides a storage medium, which is a computer-readable storage medium for computer-readable storage; the storage medium stores one or more programs executable by one or more processors to implement a method for detecting target words, wherein the method comprises:
  • performing target word detection on the target speech representation vector through the target word detection model to obtain target word data.
  • With the method and apparatus for detecting target words, the electronic device, and the storage medium proposed by the present application, original speech data to be detected is acquired, and entity feature extraction is performed on the original speech data through a pre-trained feature extraction model to obtain text entity features, so that the obtained text entity features better meet the detection requirements. Knowledge extraction is then performed on a preset knowledge graph according to the text entity features to obtain entity triples, and feature extraction is performed on the original speech data, the text entity features, and the entity triples through a pre-trained target word detection model to obtain a target text feature vector, a target entity feature vector, and a target attribute feature vector, which improves feature extraction efficiency. The target text feature vector, the target attribute feature vector, and the target entity feature vector are then weighted by the target word detection model to obtain a target speech representation vector.
  • Weighting the target text feature vector, the target attribute feature vector, and the target entity feature vector in this way improves the accuracy of the target speech representation vector.
  • Finally, the target word detection model performs target word detection on the target speech representation vector to obtain target word data, which improves the accuracy of target word detection.
  • Fig. 1 is a flowchart of the target word detection method provided by an embodiment of the present application;
  • Fig. 2 is a flowchart of step S102 in Fig. 1;
  • Fig. 3 is a flowchart of step S103 in Fig. 1;
  • Fig. 4 is a flowchart of step S104 in Fig. 1;
  • Fig. 5 is a flowchart of step S105 in Fig. 1;
  • Fig. 6 is a flowchart of step S106 in Fig. 1;
  • Fig. 7 is a flowchart of step S107 in Fig. 1;
  • Fig. 8 is a schematic structural diagram of a target word detection apparatus provided by an embodiment of the present application;
  • Fig. 9 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
  • Artificial intelligence (AI) is a new technical science that studies and develops theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence. Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can respond in a manner similar to human intelligence; research in this field includes robotics, language recognition, image recognition, natural language processing, and expert systems. Artificial intelligence can simulate the information processes of human consciousness and thinking. It is also a theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • Natural language processing (NLP) uses computers to process, understand, and use human languages (such as Chinese and English). NLP is a branch of artificial intelligence and an interdisciplinary field between computer science and linguistics, often called computational linguistics. Natural language processing includes syntactic analysis, semantic analysis, text understanding, and so on. It is often used in technical fields such as machine translation, handwritten and printed character recognition, speech recognition and text-to-speech conversion, information intent recognition, information extraction and filtering, text classification and clustering, and public opinion analysis and opinion mining, and it involves data mining, machine learning, knowledge acquisition, knowledge engineering, and artificial intelligence research related to language processing, as well as linguistic research related to language computing.
  • Information extraction is a text processing technology that extracts specified types of factual information, such as entities, relationships, and events, from natural language text and outputs it as structured data.
  • Information extraction is a technique for extracting specific information from text data.
  • Text data is composed of specific units such as sentences, paragraphs, and chapters.
  • Text information is composed of smaller specific units, such as characters, words, phrases, sentences, and paragraphs, or combinations of these units. Extracting noun phrases, person names, and place names from text data are all forms of text information extraction.
  • The information extracted by text information extraction technology can be of various types.
  • A knowledge graph combines the theories and methods of applied mathematics, graphics, information visualization, information science, and other disciplines with methods such as citation analysis and co-occurrence analysis from metrology, and uses visual graphs to display a subject intuitively.
  • The main goal of a knowledge graph is to describe the various entities and concepts that exist in the real world, as well as the strong relationships between them; relationships are used to describe the association between two entities. From the Web perspective, knowledge graphs support semantic search by establishing semantic links between data, much like hyperlinks between plain texts. From the natural language processing perspective, a knowledge graph extracts semantic and structured data from text. From the artificial intelligence perspective, a knowledge graph is a tool that uses a knowledge base to assist in understanding human language.
  • From the database perspective, a knowledge graph is a method of storing knowledge in the form of a graph.
  • A knowledge graph is a fairly general formal description framework for semantic knowledge, in which nodes represent semantic symbols and edges represent the relationships between them.
  • A knowledge graph aims to describe the various entities or concepts that exist in the real world and their relationships; it constitutes a huge semantic network graph in which nodes represent entities or concepts and edges consist of attributes or relationships.
  • Today, the term knowledge graph is used to refer to various large-scale knowledge bases.
  • A knowledge graph is also called a semantic network; from its early days, the semantic network has promoted graph-based knowledge representation.
  • An entity is something distinguishable that exists independently, such as a particular person, city, plant, or commodity. Everything in the world consists of concrete things, and these are entities. Entities are the most basic elements in a knowledge graph, and different entities have different relationships.
  • A relationship is a mutual relation that exists between entities, between different concepts, or between concepts and entities.
  • A relationship is formalized as a function that maps k points to a Boolean value.
  • In a knowledge graph, a relationship is a function that maps k graph nodes (entities, semantic classes, attribute values) to a Boolean value.
  • An attribute (value) is the value of a specified attribute of an entity, pointed to from that entity; different attribute types correspond to edges of different attribute types.
  • An attribute value mainly refers to the value of a specified attribute of an object. For example, "area", "population", and "capital" are several different attributes.
  • An attribute value mainly refers to the value of a specified attribute of an object, for example 9.6 million square kilometers.
  • An embedding is a vector representation in which an object, such as a word, a commodity, or a movie, is represented by a low-dimensional vector. The key property of an embedding vector is that objects whose vectors are close together have similar meanings; for example, embedding(Avengers) and embedding(Iron Man) will be very close, while embedding(Avengers) and embedding(Gone with the Wind) will be farther apart.
  • An embedding is essentially a mapping from a semantic space to a vector space that preserves, as far as possible, the relationships the original samples have in the semantic space.
  • An embedding can encode an object as a low-dimensional vector while retaining its meaning. Embeddings are often used in machine learning: when building a machine learning model, the object is encoded as a low-dimensional dense vector and then passed to a DNN, which improves efficiency (see the sketch below).
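  • The following is a minimal sketch of the "close vectors mean similar objects" property described above; the three toy vectors are invented for illustration, and real embeddings would come from a trained model such as Word2Vec.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented toy embeddings for the movie example above.
embeddings = {
    "avengers": np.array([0.9, 0.8, 0.1]),
    "iron_man": np.array([0.85, 0.75, 0.2]),
    "gone_with_the_wind": np.array([0.1, 0.2, 0.9]),
}

print(cosine_similarity(embeddings["avengers"], embeddings["iron_man"]))            # high: similar meaning
print(cosine_similarity(embeddings["avengers"], embeddings["gone_with_the_wind"]))  # low: dissimilar meaning
```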
  • The attention mechanism originates from the study of human vision. In cognitive science, because of bottlenecks in information processing, humans selectively focus on part of the available information while ignoring the rest; this mechanism is usually called the attention mechanism. Attention is generally divided into two kinds. One is top-down conscious attention, called focused attention: attention that has a predetermined purpose, depends on the task, and actively and consciously focuses on an object. The other is bottom-up unconscious attention, called saliency-based attention: attention driven by external stimuli that requires no active intervention and is task-independent. If an object's stimulus information differs from its surrounding information, a gating mechanism can turn attention toward that object.
  • Variants of the attention mechanism include multi-head attention and hard attention.
  • Multi-head attention uses multiple queries to select multiple pieces of information from the input in parallel, with each head attending to a different part of the input (a sketch of the underlying dot-product attention follows below).
  • Hard attention can be implemented in two ways: one selects the input with the highest probability; the other samples randomly from the attention distribution.
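  • Below is a minimal NumPy sketch of scaled dot-product attention, the soft-attention building block that the variants above extend; the dimensions and random inputs are invented for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Weights each value in V by how well its key in K matches the query in Q."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of every query to every key
    weights = softmax(scores, axis=-1)   # attention distribution over the inputs
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 8))   # 2 queries
K = rng.normal(size=(5, 8))   # 5 keys
V = rng.normal(size=(5, 8))   # 5 values
output, weights = scaled_dot_product_attention(Q, K, V)
print(output.shape, weights.shape)  # (2, 8) (2, 5)
```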
  • Bi-LSTM (Bi-directional Long Short-Term Memory) is a combination of a forward LSTM and a backward LSTM, and is often used to model contextual information in natural language processing tasks.
  • Bi-LSTM combines the information of the input sequence in both the forward and backward directions.
  • For the output at time t, the forward LSTM layer holds information about time t and the preceding time steps of the input sequence, while the backward LSTM layer holds information about time t and the following time steps.
  • The output of the forward LSTM layer at time t and the output of the backward LSTM layer at time t can be combined by addition, averaging, or concatenation, as in the sketch below.
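  • A minimal PyTorch sketch of a Bi-LSTM over a toy batch, showing that the per-step outputs of the two directions are concatenated and can be split (or summed/averaged) afterwards; the sizes are invented for illustration.

```python
import torch
import torch.nn as nn

# Batch of 4 sequences, 10 time steps, 16-dimensional inputs (invented sizes).
x = torch.randn(4, 10, 16)

bilstm = nn.LSTM(input_size=16, hidden_size=32, bidirectional=True, batch_first=True)
outputs, _ = bilstm(x)

# PyTorch concatenates the forward and backward hidden states at each step,
# so the last dimension is 2 * hidden_size.
print(outputs.shape)  # torch.Size([4, 10, 64])

# Splitting recovers the two directions; they could also be summed or averaged.
h_forward, h_backward = outputs[..., :32], outputs[..., 32:]
```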
  • The conditional random field algorithm (CRF) is a mathematical algorithm that combines the characteristics of the maximum entropy model and the hidden Markov model. It is an undirected graphical model that has achieved good results in sequence labeling tasks such as entity recognition.
  • A conditional random field is a typical discriminative model whose joint probability can be written as a product of several potential functions; the most commonly used form is the linear-chain conditional random field.
  • Let x = (x1, x2, ..., xn) denote the observed input sequence and y = (y1, y2, ..., yn) a state sequence. Given an input sequence, the linear-chain CRF model defines the joint conditional probability of the state sequence as p(y|x) = (1/Z(x)) exp{ Σ_i Σ_j λ_j f_j(y_{i-1}, y_i, x, i) }, with Z(x) = Σ_y exp{ Σ_i Σ_j λ_j f_j(y_{i-1}, y_i, x, i) }, where Z(x) is a probability normalization factor conditioned on the observation sequence x, λ_j is a weight, and f_j(y_{i-1}, y_i, x, i) is an arbitrary feature function.
  • The gated recurrent unit (GRU) was proposed to address problems such as long-term memory and gradients in backpropagation.
  • As a variant of the LSTM, the GRU combines the forget gate and the input gate into a single update gate; it also merges the cell state and the hidden state, among other changes.
  • The resulting model is simpler than the standard LSTM model and is a very popular variant; a GRU has only two gates, an update gate and a reset gate (see the step-by-step sketch below).
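  • A minimal NumPy sketch of a single GRU step following the standard update-gate/reset-gate equations; the weight shapes and the toy input sequence are invented for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: a single update gate z replaces the LSTM forget/input gates."""
    z = sigmoid(Wz @ x_t + Uz @ h_prev)               # update gate
    r = sigmoid(Wr @ x_t + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Wh @ x_t + Uh @ (r * h_prev))   # candidate state
    return (1 - z) * h_prev + z * h_tilde             # blend old and candidate state

d_in, d_h = 8, 16
rng = np.random.default_rng(0)
# Alternating input-to-hidden and hidden-to-hidden weight matrices: Wz, Uz, Wr, Ur, Wh, Uh.
params = [rng.normal(scale=0.1, size=(d_h, d_in if i % 2 == 0 else d_h)) for i in range(6)]
h = np.zeros(d_h)
for t in range(5):                                    # run over a short random input sequence
    h = gru_step(rng.normal(size=d_in), h, *params)
print(h.shape)  # (16,)
```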
  • Embodiments of the present application provide a method and apparatus for detecting target words, an electronic device, and a storage medium, aiming at improving the accuracy of target word detection.
  • The method and apparatus for detecting target words, the electronic device, and the storage medium provided in the embodiments of the present application are specifically described through the following embodiments; the method for detecting target words is described first.
  • The embodiments of the present application may acquire and process relevant data based on artificial intelligence (AI) technology.
  • Artificial intelligence is the theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics.
  • Artificial intelligence software technologies mainly include computer vision, robotics, biometrics, speech processing, natural language processing, and machine learning/deep learning.
  • The target word detection method provided in the embodiments of the present application relates to the technical field of artificial intelligence.
  • The method can be applied to a terminal or to a server, or implemented as software running on a terminal or server.
  • In some embodiments, the terminal can be a smartphone, tablet computer, notebook computer, desktop computer, and so on;
  • the server can be configured as an independent physical server, as a server cluster or distributed system composed of multiple physical servers, or as a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms;
  • the software can be an application implementing the target word detection method, but is not limited to the above forms.
  • The application can be used in numerous general-purpose or special-purpose computer system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
  • This application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
  • Generally, program modules include routines, programs, objects, components, data structures, and so on that perform particular tasks or implement particular abstract data types.
  • The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices linked through a communications network.
  • In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
  • Fig. 1 is an optional flowchart of the target word detection method provided by an embodiment of the present application.
  • The method in Fig. 1 may include, but is not limited to, steps S101 to S107 (an end-to-end sketch follows the list).
  • Step S101: obtain the original speech data to be detected;
  • Step S102: perform entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features;
  • Step S103: perform knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples;
  • Step S104: perform feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and perform feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector;
  • Step S105: perform feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector;
  • Step S106: perform weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector;
  • Step S107: perform target word detection on the target speech representation vector through the target word detection model to obtain target word data.
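  • The following is a minimal end-to-end sketch of steps S101 to S107. All function and object names here are hypothetical stand-ins for the components the embodiment describes, not an API defined by this application.

```python
def detect_target_words(raw_speech: str,
                        feature_extractor,   # hypothetical pre-trained NER model (S102)
                        knowledge_graph,     # hypothetical preset knowledge graph (S103)
                        detector):           # hypothetical pre-trained target word detection model
    # S101: raw_speech is the original speech data to be detected.
    entities = feature_extractor.extract_entities(raw_speech)    # S102: text entity features
    triples = knowledge_graph.extract_triples(entities)          # S103: entity triples
    text_vec = detector.encode_text(raw_speech)                  # S104: target text feature vector
    entity_vec = detector.encode_entities(entities)              # S104: target entity feature vector
    attr_vec = detector.encode_triples(triples)                  # S105: target attribute feature vector
    speech_vec = detector.fuse(text_vec, attr_vec, entity_vec)   # S106: weighted fusion
    return detector.predict(speech_vec)                          # S107: target word data
```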
  • Entity feature extraction is performed on the original speech data through the pre-trained feature extraction model, so that the obtained text entity features better meet the detection requirements.
  • Knowledge is extracted from the preset knowledge graph to obtain entity triples.
  • Feature extraction is performed on the original speech data, the text entity features, and the entity triples through the pre-trained target word detection model to obtain the target text feature vector, the target entity feature vector, and the target attribute feature vector, which improves feature extraction efficiency.
  • Performing weighted calculation on the target text feature vector, target attribute feature vector, and target entity feature vector through the target word detection model allows these vectors to be weighted fairly accurately, improving the accuracy of the target speech representation vector.
  • Finally, the target word detection model performs target word detection on the target speech representation vector to obtain the target word data, which improves the accuracy of target word detection.
  • In some embodiments, the original speech data to be detected can be obtained by writing a web crawler, setting the data source, and then crawling data in a targeted manner.
  • The original speech data can be obtained from different types of social media, for example Sina Weibo, knowledge forums, Baidu Tieba, and so on, but is not limited thereto.
  • The original speech data may include information such as social news and announcements released by users; for example, the original speech data may be "the winter vacation of a certain elementary school is postponed", and so on.
  • In some embodiments, before step S102, the method also includes pre-training the feature extraction model. The model can be trained according to a named entity recognition (NER) algorithm, for example by training an initial model with the BERT model plus the conditional random field (CRF) algorithm, or the Bi-LSTM algorithm plus the CRF algorithm, to obtain the feature extraction model, where the feature extraction model includes a first embedding layer, a Bi-LSTM layer, and a CRF layer.
  • Referring to Fig. 2, step S102 may include, but is not limited to, steps S201 to S203:
  • Step S201: perform word embedding processing on the original speech data through the first embedding layer to obtain text word vectors;
  • Step S202: calculate label probabilities through the preset function of the Bi-LSTM layer, the preset feature category labels, and the text word vectors, to obtain the predicted probability value of each preset feature category label;
  • Step S203: perform feature extraction according to the preset constraint factors and predicted probability values of the CRF layer to obtain text entity features.
  • In step S201 of some embodiments, the embedding of the first embedding layer uses low-dimensional vectors to perform word embedding processing on the original speech data to obtain text word vectors.
  • The preset function may be a softmax function.
  • The Bi-LSTM algorithm of the Bi-LSTM layer combines left-to-right and right-to-left long short-term memory and connects their outputs into a single output layer.
  • The input text word vectors can be passed directly to the softmax function, which creates a probability distribution over the preset feature category labels, so that the text word vectors are labeled and classified according to the probability distribution, yielding the labeled text word vectors and the predicted probability value of each preset feature category label.
  • In step S203, the feature category labels are screened according to the preset constraint factors and predicted probability values of the CRF layer, and the feature category labels that meet the requirements are retained, so that the corresponding text entity features are obtained from them.
  • For example, a constraint could be that the first word in a sentence always starts with the label "B-" or "O", not "I-".
  • In a label sequence "B-label1 I-label2 I-label3", label1, label2, and label3 should belong to the same type of entity.
  • For example, "B-Person I-Person" is a legal sequence,
  • while "B-Person I-Organization" is an illegal label sequence (a small validity check is sketched below).
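  • The following small check mirrors the CRF constraints just described: a sequence may not start with "I-", and an "I-" tag must continue an entity of the same type.

```python
def is_valid_bio(tags):
    """Return True if a BIO tag sequence satisfies the constraints above."""
    prev = "O"
    for tag in tags:
        if tag.startswith("I-"):
            # "I-X" is only legal right after "B-X" or "I-X" of the same type X.
            if not (prev.startswith(("B-", "I-")) and prev[2:] == tag[2:]):
                return False
        prev = tag
    return True

print(is_valid_bio(["B-Person", "I-Person", "O"]))        # True: legal sequence
print(is_valid_bio(["B-Person", "I-Organization", "O"]))  # False: entity types differ
print(is_valid_bio(["I-Person", "O"]))                    # False: cannot start with "I-"
```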
  • Referring to Fig. 3, step S103 may include, but is not limited to, steps S301 to S303:
  • Step S301: according to the text entity features, traverse each knowledge node of the knowledge graph to obtain candidate attribute features corresponding to the text entity features;
  • Step S302: filter the candidate attribute features according to the feature connection paths of the knowledge graph to obtain target attribute features;
  • Step S303: splice the target attribute features and the text entity features to obtain entity triples.
  • In step S301 of some embodiments, by traversing each knowledge node of the knowledge graph, the relevant attributes involved in each statement, that is, all attribute features corresponding to the text entity features, can be obtained as candidate attribute features.
  • The knowledge graph can be constructed as follows: build the schema graph of an initial knowledge graph based on a known knowledge graph, where the known knowledge graph is constructed from the speech data of the selected social media; convert structured and unstructured data into entity-attribute-attribute-value triples; and integrate the triples into the knowledge graph through knowledge fusion to obtain the data graph of the initial knowledge graph and an adjusted schema graph.
  • Finally, a logic check is carried out on the initial knowledge graph to obtain the final knowledge graph.
  • In some embodiments, a candidate attribute feature directly connected to a text entity feature is selected as a target attribute feature according to the feature connection paths of the knowledge graph. To expand the amount of data, candidate attribute features indirectly connected to the text entity features can also be selected. In this way, the candidate attribute features are screened to obtain the target attribute features.
  • The target attribute feature, the text entity feature, and the attribute value corresponding to the target attribute feature are spliced and entity-aligned to obtain an entity-relationship triple, that is, an entity triple.
  • An entity triple can be expressed as entity-attribute-attribute-value, as in the toy example below.
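  • A toy illustration of steps S301 to S303: traverse a small knowledge graph, keep the attributes directly connected to a recognized text entity, and splice them into entity-attribute-attribute-value triples. The graph contents below are invented for the example.

```python
# Invented toy knowledge graph: node -> directly connected attribute/value pairs.
knowledge_graph = {
    "某小学": {"type": "school", "holiday": "winter vacation", "city": "Shenzhen"},
    "Shenzhen": {"type": "city", "population": "17 million"},
}

def extract_triples(text_entities):
    triples = []
    for entity in text_entities:
        # S301: candidate attributes are the node's directly connected features.
        for attribute, value in knowledge_graph.get(entity, {}).items():
            # S302/S303: keep directly connected attributes and splice the triple.
            triples.append((entity, attribute, value))
    return triples

print(extract_triples(["某小学"]))
# [('某小学', 'type', 'school'), ('某小学', 'holiday', 'winter vacation'), ('某小学', 'city', 'Shenzhen')]
```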
  • In some embodiments, before step S104, the method further includes pre-training the target word detection model, which can be constructed based on an attention mechanism algorithm.
  • The first part of the target word detection model includes a second embedding layer, a third embedding layer, a first GRU layer, a second GRU layer, and a graph convolutional network layer, which encode the input feature data and perform feature extraction to obtain feature vectors that better reflect the speech category.
  • The second part of the target word detection model includes a first attention mechanism layer, a second attention mechanism layer, and a third attention mechanism layer, which use the attention mechanism algorithm to assign weights of different sizes to feature vectors of different importance, yielding the target speech representation vector.
  • The third part of the target word detection model includes a fully connected layer and a prediction layer, which predict the target word from the target speech representation vector to obtain target word data.
  • In some embodiments, the target word detection model includes a second embedding layer, a third embedding layer, a first GRU layer, and a second GRU layer, and step S104 may include, but is not limited to, steps S401 to S404:
  • Step S401: encode the original speech data through the second embedding layer to obtain an initial text feature vector;
  • Step S402: encode the text entity features through the third embedding layer to obtain an initial entity feature vector;
  • Step S403: perform feature extraction on the initial text feature vector through the first GRU layer to obtain the target text feature vector;
  • Step S404: perform feature extraction on the initial entity feature vector through the second GRU layer to obtain the target entity feature vector.
  • In steps S401 and S402 of some embodiments, content encoding is performed on the original speech data through the second embedding layer to obtain the initial text feature vector, and entity encoding is performed on the text entity features through the third embedding layer to obtain the initial entity feature vector.
  • In step S403 of some embodiments, the initial text feature vector is sent to the first GRU layer, which captures the timing information of the initial text feature vector and extracts the high-level features of the content encoding to obtain the target text feature vector.
  • In step S404 of some embodiments, the initial entity feature vector is sent to the second GRU layer, which captures the timing information of the initial entity feature vector and extracts the high-level features of the entity encoding to obtain the target entity feature vector; a sketch of both encoders follows.
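  • A minimal PyTorch sketch of steps S401 to S404: separate embedding layers encode the speech text and the entity tokens, and separate GRU layers extract the higher-level sequence features. The vocabulary sizes and dimensions are invented for illustration.

```python
import torch
import torch.nn as nn

content_embedding = nn.Embedding(num_embeddings=5000, embedding_dim=64)  # second embedding layer
entity_embedding  = nn.Embedding(num_embeddings=1000, embedding_dim=64)  # third embedding layer
content_gru = nn.GRU(input_size=64, hidden_size=128, batch_first=True)   # first GRU layer
entity_gru  = nn.GRU(input_size=64, hidden_size=128, batch_first=True)   # second GRU layer

speech_ids = torch.randint(0, 5000, (1, 20))   # one tokenized speech sample (invented ids)
entity_ids = torch.randint(0, 1000, (1, 4))    # its recognized entity tokens (invented ids)

text_vec, _   = content_gru(content_embedding(speech_ids))  # target text feature vector (S401/S403)
entity_vec, _ = entity_gru(entity_embedding(entity_ids))    # target entity feature vector (S402/S404)
print(text_vec.shape, entity_vec.shape)  # torch.Size([1, 20, 128]) torch.Size([1, 4, 128])
```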
  • Referring to Fig. 5, step S105 may include, but is not limited to, steps S501 to S502:
  • Step S501: encode the entity triples through a fourth embedding layer to obtain initial attribute feature vectors;
  • Step S502: perform graph convolution processing on the initial attribute feature vectors through the graph convolutional network layer to obtain the target attribute feature vector.
  • In step S501 of some embodiments, attribute encoding is performed on the entity triples through the fourth embedding layer to obtain the initial attribute feature vectors.
  • In step S502 of some embodiments, the softmax function in the graph convolutional network layer performs entity classification on each node of the initial attribute feature vectors to obtain label entities; the edges of the graph are then reconstructed according to the autoencoder in the graph convolutional network layer and the label entities to obtain an entity feature graph; finally, the entity feature graph is convolved through the graph convolutional network layer to obtain the target attribute feature vector (a minimal graph convolution is sketched below).
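  • A minimal sketch of one graph convolution over an entity feature graph, assuming the layer follows the common normalized-propagation form H' = D^(-1/2) Â D^(-1/2) H W (the application does not fix the exact formula here); the adjacency matrix and features are invented.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One Kipf-style graph convolution with self-loops and symmetric normalization."""
    A_hat = A + np.eye(A.shape[0])                              # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))      # degree normalization
    return np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)  # ReLU activation

A = np.array([[0, 1, 0],                                        # 3 triple nodes, 2 edges (invented)
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 8))                                     # initial attribute feature vectors
W = rng.normal(size=(8, 16))                                    # learnable layer weights
print(gcn_layer(A, H, W).shape)                                 # (3, 16): target attribute features
```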
  • In some embodiments, the target word detection model includes a first attention mechanism layer, a second attention mechanism layer, and a third attention mechanism layer, and step S106 may include, but is not limited to, steps S601 to S603:
  • Step S601: perform weighted calculation on the target text feature vector and the target entity feature vector through the first attention mechanism layer and a preset first weight ratio to obtain a first representation vector;
  • Step S602: perform weighted calculation on the target entity feature vector and the target attribute feature vector through the second attention mechanism layer and a preset second weight ratio to obtain a second representation vector;
  • Step S603: perform weighted calculation on the first representation vector and the second representation vector through the third attention mechanism layer and a preset third weight ratio to obtain the target speech representation vector.
  • Specifically, the target text feature vector (Content Feature) and the target entity feature vector (Entity Feature) are input to the first attention mechanism layer (C&E attention layer); the attention mechanism algorithm and the first weight ratio of this layer perform weighted calculation on the two vectors, assigning higher weights to more important features, to obtain the first representation vector F(C&E).
  • The target attribute feature vector (Entity Attribute Feature) and the target entity feature vector (Entity Feature) are input to the second attention mechanism layer (E&EA attention layer); the attention mechanism algorithm and the second weight ratio of this layer perform weighted calculation on the two vectors to obtain the second representation vector F(E&EA).
  • The first representation vector F(C&E) output by the first attention mechanism layer is used as the Query (Q), and the second representation vector F(E&EA) output by the second attention mechanism layer is used as the Key (K) and Value (V) of the third attention mechanism layer (C&E&EA attention layer); the attention mechanism algorithm and the third weight ratio of this layer perform weighted calculation on the first and second representation vectors to obtain the high-level feature expression S of the speech, that is, the target speech representation vector (see the sketch below).
  • In some embodiments, the target word detection model includes a fully connected layer and a prediction layer, and step S107 may include, but is not limited to, steps S701 to S703:
  • Step S701: map the target speech representation vector to a preset vector space through the fully connected layer to obtain a standard speech representation vector;
  • Step S702: calculate label probabilities through the prediction function of the prediction layer, the speech category labels, and the standard speech representation vector, to obtain the predicted probability value of each speech category label;
  • Step S703: obtain the target word data according to the magnitude relationship between the predicted probability values and a preset prediction probability threshold.
  • In step S701 of some embodiments, the target speech representation vector is mapped to a preset vector space through the MLP network in the fully connected layer and the feature dimension of the speech category labels, so that the resulting standard speech representation vector and the speech category labels lie in the same feature dimension.
  • The prediction function may be a softmax function; for example, a probability distribution is created over the speech category labels through the softmax function, yielding the predicted probability value of the standard speech representation vector belonging to each speech category.
  • In step S703 of some embodiments, standard speech representation vectors whose predicted probability value is greater than or equal to the prediction probability threshold are extracted for the speech category labels, and the speech data corresponding to those vectors is taken as the target word data (a sketch follows).
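  • A minimal sketch of steps S701 to S703: project the speech representation into the label space, apply softmax over the speech category labels, and keep predictions at or above a preset probability threshold. The weights, labels, and threshold values are invented for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

labels = ["target_word", "normal"]                            # invented speech category labels
W = np.random.default_rng(0).normal(size=(32, len(labels)))   # fully connected layer (S701)
speech_vec = np.random.default_rng(1).normal(size=32)         # target speech representation vector

probs = softmax(speech_vec @ W)                               # S702: predicted label probabilities
threshold = 0.5                                               # preset prediction probability threshold
detected = {lab: float(p) for lab, p in zip(labels, probs) if p >= threshold}
print(detected)                                               # S703: retained target word data
```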
  • In this embodiment, the original speech data to be detected is acquired, and entity feature extraction is performed on it through the pre-trained feature extraction model to obtain text entity features, which makes the obtained text entity features better meet the detection requirements. Knowledge extraction is then performed on the preset knowledge graph according to the text entity features to obtain entity triples, and feature extraction is performed on the original speech data, the text entity features, and the entity triples through the pre-trained target word detection model to obtain the target text feature vector, target entity feature vector, and target attribute feature vector, improving feature extraction efficiency. The target text feature vector, target attribute feature vector, and target entity feature vector are then weighted by the target word detection model to obtain the target speech representation vector, which improves the accuracy of that vector. Finally, the target word detection model performs target word detection on the target speech representation vector to obtain the target word data, improving the accuracy of target word detection.
  • Referring to Fig. 8, an embodiment of the present application also provides an apparatus for detecting target words that can implement the above method, the apparatus including:
  • a data acquisition module 801, used to acquire the original speech data to be detected;
  • an entity feature extraction module 802, used to extract entity features from the original speech data through a pre-trained feature extraction model to obtain text entity features;
  • a knowledge extraction module 803, configured to perform knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples;
  • a first feature extraction module 804, used to perform feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and to perform feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector;
  • a second feature extraction module 805, used to perform feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector;
  • a weighted calculation module 806, configured to perform weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector;
  • a target word detection module 807, configured to perform target word detection on the target speech representation vector through the target word detection model to obtain target word data.
  • The specific implementation of the apparatus for detecting target words is basically the same as the specific embodiments of the method above and will not be repeated here.
  • An embodiment of the present application also provides an electronic device comprising a memory, a processor, a program stored in the memory and executable on the processor, and a data bus for implementing connection and communication between the processor and the memory; when the program is executed by the processor, a method for detecting target words is implemented, the method comprising: obtaining original speech data to be detected; performing entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features; performing knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples; performing feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and performing feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector; performing feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector; performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector; and performing target word detection on the target speech representation vector through the target word detection model to obtain target word data.
  • Fig. 9 illustrates the hardware structure of an electronic device in another embodiment.
  • The electronic device includes:
  • a processor 901, which may be implemented by a general-purpose CPU (central processing unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is used to execute related programs so as to realize the technical solutions provided by the embodiments of the present application;
  • a memory 902, which may be implemented in the form of a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
  • The memory 902 can store operating systems and other application programs.
  • The relevant program code is stored in the memory 902 and called by the processor 901 to execute the embodiments of this application.
  • an input/output interface 903, used to implement information input and output;
  • a communication interface 904, used to implement communication and interaction between this device and other devices, where communication can be realized in a wired manner (such as USB or a network cable) or wirelessly (such as a mobile network, WiFi, or Bluetooth);
  • a bus 905 that transfers information between the various components of the device (such as the processor 901, the memory 902, the input/output interface 903, and the communication interface 904);
  • The processor 901, the memory 902, the input/output interface 903, and the communication interface 904 are connected to each other within the device through the bus 905.
  • An embodiment of the present application also provides a storage medium, which is a computer-readable storage medium for computer-readable storage. The storage medium stores one or more programs executable by one or more processors to implement a method for detecting target words, the method comprising: obtaining original speech data to be detected; performing entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features; performing knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples; performing feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and performing feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector; performing feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector; performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector; and performing target word detection on the target speech representation vector through the target word detection model to obtain target word data.
  • The memory can be used to store non-transitory software programs and non-transitory computer-executable programs.
  • The memory may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device.
  • The memory optionally includes memory located remotely from the processor, and such remote memories may be connected to the processor via a network.
  • Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • The target word detection method, target word detection apparatus, electronic device, and storage medium provided in the embodiments of the present application acquire the original speech data to be detected and perform entity feature extraction on it through a pre-trained feature extraction model; obtaining the text entity features in this way makes them better meet the detection requirements. Knowledge extraction is then performed on the preset knowledge graph according to the text entity features to obtain entity triples, and feature extraction is performed on the original speech data, the text entity features, and the entity triples through the pre-trained target word detection model to obtain the target text feature vector, target entity feature vector, and target attribute feature vector, improving feature extraction efficiency.
  • When the target text feature vector, target attribute feature vector, and target entity feature vector are weighted through the attention mechanism layers of the target word detection model and the preset weight ratios, more important attribute features receive more attention, which improves the accuracy of the obtained target speech representation vector.
  • Finally, target word detection is performed on the target speech representation vector to obtain the target word data, which improves the accuracy of detecting target words.
  • The apparatus embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • In addition, each functional unit in the embodiments of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • The above integrated units can be implemented in the form of hardware or in the form of software functional units.
  • If an integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes multiple instructions to make a computer device (which may be a personal computer, a server, or a network device) execute all or part of the steps of the methods in the embodiments of the present application.
  • The aforementioned storage media include media that can store program code, such as USB flash drives, mobile hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, or optical discs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Databases & Information Systems (AREA)
  • Machine Translation (AREA)

Abstract

This application provides a method and apparatus for detecting target words, an electronic device, and a storage medium, belonging to the technical field of artificial intelligence. The method includes: acquiring original speech data to be detected; performing entity feature extraction on the original speech data through a preset feature extraction model to obtain text entity features; performing knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples; performing feature extraction on the original speech data, the text entity features, and the entity triples through a preset target word detection model to obtain a target text feature vector, a target entity feature vector, and a target attribute feature vector; performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector; and performing target word detection on the target speech representation vector to obtain target word data. This application can improve the accuracy of detecting target words.

Description

Method and apparatus for detecting target words, electronic device, and storage medium
This application claims priority to the Chinese patent application with application number 202210160972.1, filed with the Chinese Patent Office on February 22, 2022 and entitled "Method and apparatus for detecting target words, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of artificial intelligence, and in particular to a method and apparatus for detecting target words, an electronic device, and a storage medium.
Background
At present, most methods for detecting target words detect the target words in speech based on manually constructed features.
Technical Problem
The following are technical problems in the prior art recognized by the inventors:
In the related art, manually constructing features often requires technicians to have strong business and domain knowledge, which gives manually constructed features certain limitations and affects the accuracy of target word detection. How to improve the accuracy of detecting target words has therefore become an urgent technical problem to be solved.
Technical Solution
In a first aspect, an embodiment of the present application provides a method for detecting target words, the method comprising:
acquiring original speech data to be detected;
performing entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features;
performing knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples;
performing feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and performing feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector;
performing feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector;
performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector;
performing target word detection on the target speech representation vector through the target word detection model to obtain target word data.
In a second aspect, an embodiment of the present application provides an apparatus for detecting target words, the apparatus comprising:
a data acquisition module, configured to acquire original speech data to be detected;
an entity feature extraction module, configured to perform entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features;
a knowledge extraction module, configured to perform knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples;
a first feature extraction module, configured to perform feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and to perform feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector;
a second feature extraction module, configured to perform feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector;
a weighted calculation module, configured to perform weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector;
a target word detection module, configured to perform target word detection on the target speech representation vector through the target word detection model to obtain target word data.
In a third aspect, an embodiment of the present application provides an electronic device, the electronic device comprising a memory, a processor, a program stored in the memory and executable on the processor, and a data bus for implementing connection and communication between the processor and the memory, wherein when the program is executed by the processor, a method for detecting target words is implemented, the method comprising:
acquiring original speech data to be detected;
performing entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features;
performing knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples;
performing feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and performing feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector;
performing feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector;
performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector;
performing target word detection on the target speech representation vector through the target word detection model to obtain target word data.
In a fourth aspect, an embodiment of the present application provides a storage medium, which is a computer-readable storage medium for computer-readable storage, the storage medium storing one or more programs executable by one or more processors to implement a method for detecting target words, the method comprising:
acquiring original speech data to be detected;
performing entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features;
performing knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples;
performing feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and performing feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector;
performing feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector;
performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector;
performing target word detection on the target speech representation vector through the target word detection model to obtain target word data.
Beneficial Effects
With the method and apparatus for detecting target words, the electronic device, and the storage medium proposed by this application, original speech data to be detected is acquired, and entity feature extraction is performed on the original speech data through a pre-trained feature extraction model to obtain text entity features, so that the obtained text entity features better meet the detection requirements. Knowledge extraction is then performed on a preset knowledge graph according to the text entity features to obtain entity triples, and feature extraction is performed on the original speech data, the text entity features, and the entity triples through a pre-trained target word detection model to obtain a target text feature vector, a target entity feature vector, and a target attribute feature vector, which improves feature extraction efficiency. Further, the target text feature vector, the target attribute feature vector, and the target entity feature vector are weighted by the target word detection model to obtain a target speech representation vector; in this way, these vectors can be weighted fairly accurately, improving the accuracy of the target speech representation vector. Finally, target word detection is performed on the target speech representation vector through the target word detection model to obtain target word data, which improves the accuracy of detecting target words.
Brief Description of the Drawings
Fig. 1 is a flowchart of a method for detecting target words provided by an embodiment of the present application;
Fig. 2 is a flowchart of step S102 in Fig. 1;
Fig. 3 is a flowchart of step S103 in Fig. 1;
Fig. 4 is a flowchart of step S104 in Fig. 1;
Fig. 5 is a flowchart of step S105 in Fig. 1;
Fig. 6 is a flowchart of step S106 in Fig. 1;
Fig. 7 is a flowchart of step S107 in Fig. 1;
Fig. 8 is a schematic structural diagram of an apparatus for detecting target words provided by an embodiment of the present application;
Fig. 9 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
Embodiments of the Invention
In order to make the purpose, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
It should be noted that although functional modules are divided in the schematic diagram of the apparatus and a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed with a module division different from that in the apparatus, or in an order different from that in the flowcharts. The terms "first", "second", and the like in the specification, the claims, and the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific sequence or order.
Unless otherwise defined, all technical and scientific terms used in this application have the same meanings as commonly understood by those skilled in the technical field to which this application belongs. The terms used in this application are only for the purpose of describing the embodiments of this application and are not intended to limit this application.
First, several terms involved in this application are explained:
Artificial intelligence (AI): a new technical science that studies and develops theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence. Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can respond in a manner similar to human intelligence; research in this field includes robotics, language recognition, image recognition, natural language processing, and expert systems. Artificial intelligence can simulate the information processes of human consciousness and thinking. It is also a theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
Natural language processing (NLP): NLP uses computers to process, understand, and use human languages (such as Chinese and English). It is a branch of artificial intelligence and an interdisciplinary field between computer science and linguistics, often called computational linguistics. Natural language processing includes syntactic analysis, semantic analysis, text understanding, and so on. It is often used in technical fields such as machine translation, handwritten and printed character recognition, speech recognition and text-to-speech conversion, information intent recognition, information extraction and filtering, text classification and clustering, and public opinion analysis and opinion mining, and it involves data mining, machine learning, knowledge acquisition, knowledge engineering, and artificial intelligence research related to language processing, as well as linguistic research related to language computing.
Information extraction (IE): a text processing technology that extracts specified types of factual information, such as entities, relationships, and events, from natural language text and outputs it as structured data. Information extraction is a technique for extracting specific information from text data. Text data is composed of specific units such as sentences, paragraphs, and chapters, and text information is composed of smaller specific units, such as characters, words, phrases, sentences, and paragraphs, or combinations of these units. Extracting noun phrases, person names, and place names from text data are all forms of text information extraction; of course, the information extracted by text information extraction technology can be of various types.
Knowledge graph: a modern theory that combines the theories and methods of applied mathematics, graphics, information visualization, information science, and other disciplines with methods such as citation analysis and co-occurrence analysis from metrology, and uses visual graphs to vividly display the core structure, development history, frontier fields, and overall knowledge architecture of a discipline, achieving multidisciplinary integration. The main goal of a knowledge graph is to describe the various entities and concepts that exist in the real world and the strong relationships between them; relationships are used to describe the association between two entities. From the Web perspective, knowledge graphs support semantic search by establishing semantic links between data, much like hyperlinks between plain texts. From the natural language processing perspective, a knowledge graph extracts semantic and structured data from text. From the artificial intelligence perspective, a knowledge graph is a tool that uses a knowledge base to assist in understanding human language. From the database perspective, a knowledge graph is a method of storing knowledge in the form of a graph. A knowledge graph is a fairly general formal description framework for semantic knowledge, in which nodes represent semantic symbols and edges represent the relationships between them. A knowledge graph aims to describe the various entities or concepts that exist in the real world and their relationships, constituting a huge semantic network graph in which nodes represent entities or concepts and edges consist of attributes or relationships. Today, the term knowledge graph is used to refer to various large-scale knowledge bases. A knowledge graph is also called a semantic network; from its early days, the semantic network has promoted graph-based knowledge representation, for example in driving the RDF standard. In such a graph-based knowledge representation system, entities serve as graph nodes and the connections between nodes serve as relationships. In the process of constructing a knowledge graph, text often needs to be vectorized, which gave rise to Word2Vec for text data: it represents each word by a vector through a shallow neural network language model and, by constructing an input layer, a mapping layer, and an output layer, uses neural network learning to predict the word most likely to appear in the context of a given word. By training on a text corpus, text is converted into vectors in an n-dimensional vector space, and the cosine similarity between vectors in that space represents the semantic closeness of words.
Entity: something distinguishable that exists independently, such as a particular person, city, plant, or commodity. Everything in the world consists of concrete things, and these are entities. Entities are the most basic elements in a knowledge graph, and different entities have different relationships.
Relationship: a mutual relation that exists between entities, between different concepts, or between concepts and entities. A relationship is formalized as a function that maps k points to a Boolean value. In a knowledge graph, a relationship is a function that maps k graph nodes (entities, semantic classes, attribute values) to a Boolean value.
Attribute (value): the value of a specified attribute of an entity, pointed to from that entity; different attribute types correspond to edges of different attribute types. An attribute value mainly refers to the value of a specified attribute of an object. For example, "area", "population", and "capital" are several different attributes, and an attribute value is the value of such an attribute, for example 9.6 million square kilometers.
Embedding: an embedding is a vector representation, that is, a low-dimensional vector that represents an object, where the object may be a word, a commodity, a movie, and so on. The property of the embedding vector is that objects corresponding to vectors close to each other have similar meanings; for example, the distance between embedding(The Avengers) and embedding(Iron Man) is very small, while the distance between embedding(The Avengers) and embedding(Gone with the Wind) is larger. An embedding is essentially a mapping from semantic space to vector space that preserves, as far as possible, the relationships of the original samples in the semantic space; for example, two semantically close words are also relatively close in the vector space. An embedding can encode an object with a low-dimensional vector while retaining its meaning. It is commonly applied in machine learning: during model construction, an object is encoded as a low-dimensional dense vector and then passed to a DNN to improve efficiency.
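By way of illustration only (this sketch is not part of the original application), the cosine-similarity property described above can be shown as follows; the toy 4-dimensional vectors are invented for the example:

    # Semantic closeness via cosine similarity between embedding vectors;
    # the toy vectors below are illustrative assumptions, not trained values.
    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    emb = {
        "avengers": np.array([0.9, 0.8, 0.1, 0.0]),
        "iron_man": np.array([0.8, 0.9, 0.2, 0.1]),
        "gone_with_the_wind": np.array([0.1, 0.0, 0.9, 0.8]),
    }

    print(cosine(emb["avengers"], emb["iron_man"]))            # close to 1
    print(cosine(emb["avengers"], emb["gone_with_the_wind"]))  # much smaller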
Attention mechanism (Attention Mechanism): the attention mechanism originates from research on human vision. In cognitive science, because of bottlenecks in information processing, humans selectively attend to part of the available information while ignoring the rest. This mechanism is usually called the attention mechanism. Attention is generally divided into two kinds. One is top-down conscious attention, called focused attention: attention that has a predetermined purpose, depends on the task, and actively and consciously focuses on a certain object. The other is bottom-up unconscious attention, called saliency-based attention: attention driven by external stimuli, which requires no active intervention and is task-independent. If the stimulus information of an object differs from its surrounding information, a gating mechanism can turn attention to that object. Variants of the attention mechanism include multi-head attention and hard attention. Multi-head attention uses multiple queries to compute, in parallel, the selection of multiple pieces of information from the input, with each head attending to a different part of the input information. Hard attention has two implementations: one selects the input information with the highest probability, while the other is implemented by randomly sampling from the attention distribution.
Bi-directional long short-term memory (Bi-directional Long Short-Term Memory, Bi-LSTM): composed of a forward LSTM and a backward LSTM, and commonly used to model context information in natural language processing tasks. Building on the LSTM, the Bi-LSTM combines the information of the input sequence in both the forward and the backward direction. For the output at time t, the forward LSTM layer holds information about time t and earlier times in the input sequence, while the backward LSTM layer holds information about time t and later times. Denoting the output of the forward LSTM layer at time t as h_t(forward) and the output of the backward LSTM layer at time t as h_t(backward), the vectors output by the two LSTM layers can be combined by summation, averaging, or concatenation.
Conditional random field algorithm (conditional random field algorithm, CRF): a mathematical algorithm that combines the characteristics of the maximum entropy model and the hidden Markov model; it is an undirected graphical model that in recent years has achieved good results in sequence labeling tasks such as word segmentation, part-of-speech tagging, and named entity recognition. A conditional random field is a typical discriminative model whose joint probability can be written as a product of several potential functions; the most commonly used form is the linear-chain conditional random field. Let x = (x1, x2, ..., xn) denote the observed input data sequence and y = (y1, y2, ..., yn) denote a state sequence. Given an input sequence, the linear-chain CRF model defines the joint conditional probability of the state sequence as p(y|x) = (1/Z(x)) exp{Σ_i Σ_j λ_j f_j(y_{i-1}, y_i, x, i)} (2-14), with Z(x) = Σ_y exp{Σ_i Σ_j λ_j f_j(y_{i-1}, y_i, x, i)} (2-15), where Z(x) is the probability normalization factor conditioned on the observation sequence x, f_j(y_{i-1}, y_i, x, i) is an arbitrary feature function, and λ_j is its weight.
Gated recurrent unit (GRU, gated recurrent unit): the GRU was proposed to address problems such as long-term memory and gradients in backpropagation. As a variant of the LSTM, the GRU merges the forget gate and the input gate into a single update gate, likewise mixes the cell state and the hidden state, and makes some other changes. The resulting model is simpler than the standard LSTM model and is a very popular variant. The GRU model has only two gates: an update gate and a reset gate.
At present, most methods for detecting target words rely on manually constructed features to detect target words in speech, and constructing features manually usually requires technicians to have strong business and domain knowledge. This gives manually constructed features certain limitations and affects the accuracy of target word detection. How to improve the accuracy of target word detection has therefore become an urgent technical problem to be solved.
On this basis, the embodiments of the present application provide a target word detection method and apparatus, an electronic device, and a storage medium, aiming to improve the accuracy of target word detection.
The target word detection method and apparatus, electronic device, and storage medium provided in the embodiments of the present application are specifically described through the following embodiments; the target word detection method of the embodiments of the present application is described first.
The embodiments of the present application may acquire and process related data based on artificial intelligence technology. Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include several major directions such as computer vision, robotics, biometrics, speech processing, natural language processing, and machine learning/deep learning.
The target word detection method provided in the embodiments of the present application relates to the technical field of artificial intelligence. The method may be applied in a terminal, on a server side, or as software running in a terminal or on a server side. In some embodiments, the terminal may be a smartphone, a tablet computer, a laptop computer, a desktop computer, or the like; the server side may be configured as an independent physical server, as a server cluster or distributed system composed of multiple physical servers, or as a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms; the software may be an application implementing the target word detection method, among others, but is not limited to the above forms.
The present application may be used in numerous general-purpose or special-purpose computer system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices. The present application may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform specific tasks or implement specific abstract data types. The present application may also be practiced in distributed computing environments, where tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media, including storage devices.
FIG. 1 is an optional flowchart of the target word detection method provided by an embodiment of the present application; the method in FIG. 1 may include, but is not limited to, steps S101 to S107.
Step S101: acquiring original speech data to be detected;
Step S102: performing entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features;
Step S103: performing knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples;
Step S104: performing feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and performing feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector;
Step S105: performing feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector;
Step S106: performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector;
Step S107: performing target word detection on the target speech representation vector through the target word detection model to obtain target word data.
Through steps S101 to S107 of the embodiments of the present application, entity feature extraction is performed on the original speech data through a pre-trained feature extraction model, so that the obtained text entity features better match the detection requirements. Knowledge extraction is performed on a preset knowledge graph according to the text entity features to obtain entity triples, and feature extraction is performed on the original speech data, the text entity features, and the entity triples through a pre-trained target word detection model to obtain a target text feature vector, a target entity feature vector, and a target attribute feature vector, which can improve feature extraction efficiency. Weighted calculation is performed on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model, so that the weighting can be computed relatively accurately and the accuracy of the target speech representation vector is improved. Finally, target word detection is performed on the target speech representation vector through the target word detection model to obtain target word data, which can improve the accuracy of target word detection.
In step S101 of some embodiments, a web crawler may be written and, after a data source is configured, data may be crawled in a targeted manner to obtain the original speech data to be detected. It should be noted that the original speech data may be obtained from different types of social media, for example Sina Weibo, knowledge forums, or Baidu Tieba, without limitation. The original speech data may include information published by users, such as social news and announcements; for example, the original speech data may be "the winter vacation of a certain primary school has been postponed", and so on.
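A minimal sketch of such targeted crawling follows (not the application's implementation; the URL and CSS selector are assumptions, and a real crawler would also handle pagination, rate limits, and the site's terms of use):

    # Targeted-crawling sketch; the data-source URL and the CSS selector
    # below are illustrative assumptions, not a real endpoint.
    import requests
    from bs4 import BeautifulSoup

    SOURCE_URL = "https://example.com/posts"  # assumed data source

    def crawl_speech_data():
        resp = requests.get(SOURCE_URL, timeout=10)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        # Assumed page structure: each post's text sits in a <p class="post">.
        return [p.get_text(strip=True) for p in soup.select("p.post")]

    if __name__ == "__main__":
        for post in crawl_speech_data():
            print(post)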
In some embodiments, before step S102, the method further includes pre-training the feature extraction model. The feature extraction model may be trained according to a named entity recognition algorithm (Named Entity Recognition, NER); for example, an initial model is trained with a BERT model plus a conditional random field algorithm (CRF), a Bi-LSTM algorithm plus a CRF algorithm, or the like, to obtain the feature extraction model, where the feature extraction model includes a first embedding layer, a Bi-LSTM layer, and a CRF layer.
Referring to FIG. 2, in some embodiments, step S102 may include, but is not limited to, steps S201 to S203:
Step S201: performing word embedding on the original speech data through the first embedding layer to obtain text word vectors;
Step S202: performing label probability calculation through a preset function of the Bi-LSTM layer, preset feature category labels, and the text word vectors to obtain a predicted probability value for each preset feature category label;
Step S203: performing feature extraction according to preset constraint factors of the CRF layer and the predicted probability values to obtain the text entity features.
In step S201 of some embodiments, the embedding of the first embedding layer performs word embedding on the original speech data with low-dimensional vectors, obtaining the text word vectors.
In step S202 of some embodiments, the preset function may be a softmax function. The Bi-LSTM algorithm of the Bi-LSTM layer generates a single output layer at the position where the left-to-right and right-to-left long short-term memories are connected. Through this output layer, the input text word vectors can be passed directly to the softmax function, which creates a probability distribution over the preset feature category labels; the text word vectors are then labeled and classified according to this probability distribution, obtaining annotated text word vectors and a predicted probability value for each preset feature category label.
In step S203 of some embodiments, the feature category labels are filtered according to the preset constraint factors of the CRF layer and the predicted probability values, and the feature category labels that meet the requirements are retained, so that the corresponding text entity features are obtained from them. For example, a constraint factor may be that the first word in a sentence always starts with the label "B-" or "O" rather than "I-", or that in a label sequence "B-label1 I-label2 I-label3 I-..." label1, label2, and label3 should belong to the same entity class. For example, "B-Person I-Person" is a legal sequence, while "B-Person I-Organization" is an illegal label sequence. The preset constraint factors of the CRF layer can improve the accuracy of feature extraction and yield text entity features that meet the detection requirements.
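The pipeline of steps S201 to S203 may be sketched as follows. This is a minimal illustration rather than the application's exact implementation: the tag set, layer sizes, and the greedy constrained decoder are assumptions, and a production model would instead use a full CRF layer with learned transition scores and Viterbi decoding:

    # Embedding + Bi-LSTM producing per-token label probabilities, followed
    # by decoding under CRF-style transition constraints.
    import torch
    import torch.nn as nn

    TAGS = ["O", "B-Person", "I-Person", "B-Org", "I-Org"]  # assumed label set

    class BiLSTMTagger(nn.Module):
        def __init__(self, vocab_size, emb_dim=128, hidden=128):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)  # first embedding layer
            self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
            self.proj = nn.Linear(2 * hidden, len(TAGS))  # per-token tag scores

        def forward(self, token_ids):
            h, _ = self.lstm(self.emb(token_ids))
            return self.proj(h).log_softmax(-1)           # label probabilities

    def constrained_decode(log_probs):
        """Greedy decoding with the constraint that 'I-X' may only follow
        'B-X' or 'I-X'; a real CRF layer learns such transitions."""
        path, prev = [], "O"
        for scores in log_probs:
            for idx in torch.argsort(scores, descending=True).tolist():
                tag = TAGS[idx]
                if tag.startswith("I-") and prev[2:] != tag[2:]:
                    continue                               # illegal transition
                path.append(tag)
                prev = tag
                break
        return path

    model = BiLSTMTagger(vocab_size=1000)
    ids = torch.randint(0, 1000, (1, 6))
    print(constrained_decode(model(ids)[0]))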
Referring to FIG. 3, in some embodiments, step S103 may include, but is not limited to, steps S301 to S303:
Step S301: traversing each knowledge node of the knowledge graph according to the text entity features to obtain candidate attribute features corresponding to the text entity features;
Step S302: filtering the candidate attribute features according to feature connection paths of the knowledge graph to obtain target attribute features;
Step S303: concatenating the target attribute features and the text entity features to obtain the entity triples.
In step S301 of some embodiments, by traversing each knowledge node of the knowledge graph, the relevant attributes involved in each piece of speech, that is, all attribute features corresponding to the text entity features, can be obtained and taken as candidate attribute features. It should be noted that the knowledge graph may be constructed as follows: a schema graph of an initial knowledge graph is constructed according to a known knowledge graph, where the known knowledge graph is built from the speech data of selected social media; the structured and unstructured data in the known knowledge graph are converted into entity-attribute-value triples, and the triples are integrated into the knowledge graph through knowledge fusion, obtaining the data graph of the initial knowledge graph and an adjusted schema graph; the initial knowledge graph is then logically checked using the reasoning function of the knowledge graph to obtain the final knowledge graph.
In step S302 of some embodiments, the candidate attribute features directly connected to the text entity features are selected as the target attribute features according to the feature connection paths of the knowledge graph. Further, to enlarge the amount of data, candidate attribute features indirectly connected to the text entity features may also be selected. In this way, the candidate attribute features are filtered to obtain the target attribute features.
In step S303 of some embodiments, the target attribute features are concatenated and entity-aligned with the text entity features and the attribute values corresponding to the target attribute features, obtaining entity relation triples, that is, the entity triples, which may be expressed as entity-attribute-value.
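Steps S301 to S303 can be illustrated with the following minimal sketch (not the application's implementation); the in-memory toy graph, its node names, and the "directly connected" filter are assumptions:

    # Toy knowledge graph: entity -> {attribute: value}; only attributes
    # directly connected to the entity are kept, per step S302.
    KG = {
        "Primary School X": {"winter vacation": "postponed", "location": "Shenzhen"},
    }

    def extract_triples(entity_features):
        triples = []
        for entity in entity_features:
            # Step S301: traverse knowledge nodes for candidate attributes.
            candidates = KG.get(entity, {})
            for attr, value in candidates.items():
                # Step S303: concatenate into an entity-attribute-value triple.
                triples.append((entity, attr, value))
        return triples

    print(extract_triples(["Primary School X"]))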
In some embodiments, before step S104, the method further includes pre-training the target word detection model, which may be built based on an attention mechanism algorithm. Specifically, the first part of the target word detection model includes a second embedding layer, a third embedding layer, a first GRU layer, a second GRU layer, and a graph convolutional network layer, which are used to encode the input feature data and extract features, obtaining feature vectors that better reflect the speech category. The second part of the target word detection model includes a first attention mechanism layer, a second attention mechanism layer, and a third attention mechanism layer, which use the attention mechanism algorithm to assign weights of different magnitudes to feature vectors of different importance levels, obtaining the target speech representation vector. The third part of the target word detection model includes a fully connected layer and a prediction layer, which perform target word prediction on the target speech representation vector to obtain the target word data.
Referring to FIG. 4, in some embodiments, the target word detection model includes a second embedding layer, a third embedding layer, a first GRU layer, and a second GRU layer, and step S104 may include, but is not limited to, steps S401 to S404:
Step S401: encoding the original speech data through the second embedding layer to obtain an initial text feature vector;
Step S402: encoding the text entity features through the third embedding layer to obtain an initial entity feature vector;
Step S403: performing feature extraction on the initial text feature vector through the first GRU layer to obtain the target text feature vector;
Step S404: performing feature extraction on the initial entity feature vector through the second GRU layer to obtain the target entity feature vector.
In steps S401 and S402 of some embodiments, the second embedding layer performs content (Content) encoding on the original speech data to obtain the initial text feature vector, and the third embedding layer performs entity (Entity) encoding on the text entity features to obtain the initial entity feature vector.
In step S403 of some embodiments, the initial text feature vector is fed into the first GRU layer, which captures the temporal information of the initial text feature vector and extracts high-level features of the content encoding, obtaining the target text feature vector.
In step S404 of some embodiments, the initial entity feature vector is fed into the second GRU layer, which captures the temporal information of the initial entity feature vector and extracts high-level features of the entity encoding, obtaining the target entity feature vector.
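A minimal sketch of steps S401 to S404 follows (illustrative layer sizes, not the application's exact architecture): two embedding layers encode the speech tokens and the entity tokens, and two GRU layers extract higher-level sequence features.

    import torch
    import torch.nn as nn

    class DualEncoder(nn.Module):
        def __init__(self, vocab_size=1000, emb_dim=64, hidden=64):
            super().__init__()
            self.text_emb = nn.Embedding(vocab_size, emb_dim)    # second embedding layer
            self.entity_emb = nn.Embedding(vocab_size, emb_dim)  # third embedding layer
            self.text_gru = nn.GRU(emb_dim, hidden, batch_first=True)    # first GRU layer
            self.entity_gru = nn.GRU(emb_dim, hidden, batch_first=True)  # second GRU layer

        def forward(self, text_ids, entity_ids):
            text_feat, _ = self.text_gru(self.text_emb(text_ids))          # target text features
            entity_feat, _ = self.entity_gru(self.entity_emb(entity_ids))  # target entity features
            return text_feat, entity_feat

    enc = DualEncoder()
    text_feat, entity_feat = enc(torch.randint(0, 1000, (1, 12)),
                                 torch.randint(0, 1000, (1, 4)))
    print(text_feat.shape, entity_feat.shape)  # (1, 12, 64) and (1, 4, 64)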
Referring to FIG. 5, in some embodiments, the target word detection model includes a fourth embedding layer and a graph convolutional network layer, and step S105 may include, but is not limited to, steps S501 and S502:
Step S501: encoding the entity triples through the fourth embedding layer to obtain an initial attribute feature vector;
Step S502: performing graph convolution on the initial attribute feature vector through the graph convolutional network layer to obtain the target attribute feature vector.
In step S501 of some embodiments, the fourth embedding layer performs attribute encoding on the entity triples to obtain the initial attribute feature vector.
In step S502 of some embodiments, the softmax function in the graph convolutional network layer performs entity classification on each node of the initial attribute feature vector to obtain labeled entities; the edges of the graph are then reconstructed according to the autoencoder in the graph convolutional network layer and the labeled entities, obtaining an entity feature graph; finally, the graph convolutional network layer performs convolution on the entity feature graph to obtain the target attribute feature vector.
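A minimal one-layer graph-convolution sketch follows; it covers only the convolution of step S502, while the node classification and autoencoder-based edge reconstruction described above are omitted for brevity (the node count and feature sizes are illustrative):

    # One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 X W).
    import torch
    import torch.nn as nn

    class GraphConv(nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.weight = nn.Linear(in_dim, out_dim, bias=False)

        def forward(self, x, adj):
            # Symmetrically normalized adjacency with self-loops.
            a = adj + torch.eye(adj.size(0))
            d_inv_sqrt = a.sum(dim=1).pow(-0.5)
            a_norm = d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)
            return torch.relu(a_norm @ self.weight(x))

    nodes = torch.randn(5, 16)                 # 5 triple nodes, 16-dim features
    adj = (torch.rand(5, 5) > 0.5).float()
    adj = ((adj + adj.t()) > 0).float()        # make the graph undirected
    print(GraphConv(16, 8)(nodes, adj).shape)  # torch.Size([5, 8])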
Referring to FIG. 6, in some embodiments, the target word detection model includes a first attention mechanism layer, a second attention mechanism layer, and a third attention mechanism layer, and step S106 may include, but is not limited to, steps S601 to S603:
Step S601: performing weighted calculation on the target text feature vector and the target entity feature vector through the first attention mechanism layer and a preset first weight ratio to obtain a first representation vector;
Step S602: performing weighted calculation on the target entity feature vector and the target attribute feature vector through the second attention mechanism layer and a preset second weight ratio to obtain a second representation vector;
Step S603: performing weighted calculation on the first representation vector and the second representation vector through the third attention mechanism layer and a preset third weight ratio to obtain the target speech representation vector.
In step S601 of some embodiments, the target text feature vector (Content Feature) and the target entity feature vector (Entity Feature) are input into the first attention mechanism layer (the C&E attention layer), and weighted calculation is performed on them through the attention mechanism algorithm of the first attention mechanism layer and the first weight ratio, assigning higher weights to more important features and obtaining the first representation vector F(C&E).
In step S602 of some embodiments, the target attribute feature vector (Entity Attribute Feature) and the target entity feature vector (Entity Feature) are input into the second attention mechanism layer (the E&EA attention layer), and weighted calculation is performed on them through the attention mechanism algorithm of the second attention mechanism layer and the second weight ratio, obtaining the second representation vector F(E&EA).
In step S603 of some embodiments, the first representation vector F(C&E) output by the first attention mechanism layer is taken as the Query (Q), and the second representation vector F(E&EA) output by the second attention mechanism layer is taken as the Key (K) and Value (V); they are fed into the third attention mechanism layer (the C&E&EA attention layer), and weighted calculation is performed on the first representation vector and the second representation vector through the attention mechanism algorithm of the third attention mechanism layer and the third weight ratio, obtaining the high-level feature expression S of the speech, that is, the target speech representation vector.
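Steps S601 to S603 can be sketched as follows; standard multi-head attention layers stand in for the three attention mechanism layers, and the application's preset weight ratios are not modeled (dimensions and head counts are illustrative):

    import torch
    import torch.nn as nn

    dim = 64
    c_e_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)   # C&E layer
    e_ea_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)  # E&EA layer
    fuse_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)  # C&E&EA layer

    content = torch.randn(1, 12, dim)    # target text feature vector
    entity = torch.randn(1, 4, dim)      # target entity feature vector
    attribute = torch.randn(1, 5, dim)   # target attribute feature vector

    f_ce, _ = c_e_attn(content, entity, entity)         # first representation F(C&E)
    f_eea, _ = e_ea_attn(entity, attribute, attribute)  # second representation F(E&EA)
    # F(C&E) as Query, F(E&EA) as Key and Value, per step S603.
    s, _ = fuse_attn(f_ce, f_eea, f_eea)
    print(s.shape)  # torch.Size([1, 12, 64]) - the speech representation S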
Referring to FIG. 7, in some embodiments, the target word detection model includes a fully connected layer and a prediction layer, and step S107 may include, but is not limited to, steps S701 to S703:
Step S701: mapping the target speech representation vector to a preset vector space through the fully connected layer to obtain a standard speech representation vector;
Step S702: performing label probability calculation through a prediction function of the prediction layer, speech category labels, and the standard speech representation vector to obtain a predicted probability value for each speech category label;
Step S703: obtaining the target word data according to the magnitude relationship between the predicted probability values and a preset prediction probability threshold.
In step S701 of some embodiments, the MLP network in the fully connected layer and the feature dimension of the speech category labels are used to map the target speech representation vector to the preset vector space, so that the obtained standard speech representation vector and the speech category labels lie in the same feature dimension.
In step S702 of some embodiments, the prediction function may be a softmax function. For example, the softmax function creates a probability distribution over each speech category label, obtaining the predicted probability value that the standard speech representation vector belongs to each speech category.
In step S703 of some embodiments, among the speech category labels, the standard speech representation vectors whose predicted probability values are greater than or equal to the prediction probability threshold are extracted, and the speech data corresponding to those standard speech representation vectors is taken as the target word data.
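A minimal sketch of steps S701 to S703 follows (the label set, threshold, and sizes are illustrative assumptions): a fully connected head maps the representation into the label space, softmax yields per-label probabilities, and a threshold selects the target word data.

    import torch
    import torch.nn as nn

    LABELS = ["normal", "target"]  # assumed speech category labels
    THRESHOLD = 0.5                # assumed prediction probability threshold

    head = nn.Sequential(
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, len(LABELS)),  # map to the label feature dimension
    )

    s = torch.randn(3, 64)           # three speech representation vectors
    probs = head(s).softmax(dim=-1)  # per-label predicted probabilities
    hits = probs[:, LABELS.index("target")] >= THRESHOLD
    print(hits)  # True marks speech data kept as target word data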
In the embodiments of the present application, original speech data to be detected is acquired, and entity feature extraction is performed on the original speech data through a pre-trained feature extraction model to obtain text entity features, so that the obtained text entity features better match the detection requirements. Further, knowledge extraction is performed on a preset knowledge graph according to the text entity features to obtain entity triples, and feature extraction is performed on the original speech data, the text entity features, and the entity triples through a pre-trained target word detection model to obtain a target text feature vector, a target entity feature vector, and a target attribute feature vector, which improves feature extraction efficiency. Furthermore, weighted calculation is performed on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector; in this way, the weighting can be computed relatively accurately, improving the accuracy of the target speech representation vector. Finally, target word detection is performed on the target speech representation vector through the target word detection model to obtain target word data, improving the accuracy of target word detection.
Referring to FIG. 8, an embodiment of the present application further provides a target word detection apparatus capable of implementing the above target word detection method, the apparatus including:
a data acquisition module 801, configured to acquire original speech data to be detected;
an entity feature extraction module 802, configured to perform entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features;
a knowledge extraction module 803, configured to perform knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples;
a first feature extraction module 804, configured to perform feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and to perform feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector;
a second feature extraction module 805, configured to perform feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector;
a weighted calculation module 806, configured to perform weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector;
a target word detection module 807, configured to perform target word detection on the target speech representation vector through the target word detection model to obtain target word data.
The specific implementation of this target word detection apparatus is substantially the same as the specific embodiments of the above target word detection method and is not repeated here.
An embodiment of the present application further provides an electronic device, the electronic device including a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for implementing connection and communication between the processor and the memory; when the program is executed by the processor, a target word detection method is implemented, the target word detection method including: acquiring original speech data to be detected; performing entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features; performing knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples; performing feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and performing feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector; performing feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector; performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector; and performing target word detection on the target speech representation vector through the target word detection model to obtain target word data. The electronic device may be any intelligent terminal, including a tablet computer, a vehicle-mounted computer, and the like.
Referring to FIG. 9, FIG. 9 illustrates the hardware structure of an electronic device of another embodiment, the electronic device including:
a processor 901, which may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided by the embodiments of the present application;
a memory 902, which may be implemented in the form of a read-only memory (Read Only Memory, ROM), a static storage device, a dynamic storage device, or a random access memory (Random Access Memory, RAM). The memory 902 may store an operating system and other application programs; when the technical solutions provided by the embodiments of this specification are implemented through software or firmware, the related program code is stored in the memory 902 and invoked by the processor 901 to execute the target word detection method of the embodiments of the present application;
an input/output interface 903, configured to implement information input and output;
a communication interface 904, configured to implement communication and interaction between this device and other devices, where communication may be implemented in a wired manner (for example, USB or network cable) or in a wireless manner (for example, mobile network, WiFi, or Bluetooth);
a bus 905, which transmits information between the components of the device (for example, the processor 901, the memory 902, the input/output interface 903, and the communication interface 904);
where the processor 901, the memory 902, the input/output interface 903, and the communication interface 904 are communicatively connected to one another inside the device through the bus 905.
An embodiment of the present application further provides a storage medium, the storage medium being a computer-readable storage medium for computer-readable storage, where the storage medium stores one or more programs executable by one or more processors to implement a target word detection method, the target word detection method including: acquiring original speech data to be detected; performing entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features; performing knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples; performing feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and performing feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector; performing feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector; performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector; and performing target word detection on the target speech representation vector through the target word detection model to obtain target word data. In addition, the computer-readable storage medium may be non-volatile or volatile.
As a non-transitory computer-readable storage medium, the memory may be used to store non-transitory software programs and non-transitory computer-executable programs. In addition, the memory may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device.
In some implementations, the memory optionally includes memory located remotely from the processor, and such remote memory may be connected to the processor through a network. Examples of the above network include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In the target word detection method, target word detection apparatus, electronic device, and storage medium provided by the embodiments of the present application, original speech data to be detected is acquired, and entity feature extraction is performed on the original speech data through a pre-trained feature extraction model to obtain text entity features, so that the obtained text entity features better match the detection requirements. Further, knowledge extraction is performed on a preset knowledge graph according to the text entity features to obtain entity triples, and feature extraction is performed on the original speech data, the text entity features, and the entity triples through a pre-trained target word detection model to obtain a target text feature vector, a target entity feature vector, and a target attribute feature vector, which improves feature extraction efficiency. Furthermore, weighted calculation is performed on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the attention mechanism layers of the target word detection model and preset weight ratios, so that more important attribute features receive more attention, improving the accuracy of the obtained target speech representation vector. Finally, target word detection is performed on the target speech representation vector through the prediction function and speech category labels of the target word detection model to obtain target word data, which improves the accuracy of target word detection.
The embodiments described herein are intended to illustrate the technical solutions of the embodiments of the present application more clearly and do not constitute a limitation on them; those skilled in the art will appreciate that, as technology evolves and new application scenarios emerge, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
Those skilled in the art can understand that the technical solutions shown in FIGS. 1-7 do not constitute a limitation on the embodiments of the present application; more or fewer steps than shown may be included, some steps may be combined, or different steps may be used.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separated, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art can understand that all or some of the steps of the methods disclosed above and the functional modules/units in the systems and devices may be implemented as software, firmware, hardware, and appropriate combinations thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented either in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes multiple instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing programs, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, which does not thereby limit the scope of rights of the embodiments of the present application. Any modifications, equivalent substitutions, and improvements made by those skilled in the art without departing from the scope and essence of the embodiments of the present application shall fall within the scope of rights of the embodiments of the present application.

Claims (20)

  1. A target word detection method, wherein the method comprises:
    acquiring original speech data to be detected;
    performing entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features;
    performing knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples;
    performing feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and performing feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector;
    performing feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector;
    performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector; and
    performing target word detection on the target speech representation vector through the target word detection model to obtain target word data.
  2. The target word detection method according to claim 1, wherein the feature extraction model comprises a first embedding layer, a Bi-LSTM layer, and a CRF layer, and the step of performing entity feature extraction on the original speech data through the pre-trained feature extraction model to obtain text entity features comprises:
    performing word embedding on the original speech data through the first embedding layer to obtain text word vectors;
    performing label probability calculation through a preset function of the Bi-LSTM layer, preset feature category labels, and the text word vectors to obtain a predicted probability value for each preset feature category label; and
    performing feature extraction according to preset constraint factors of the CRF layer and the predicted probability values to obtain the text entity features.
  3. The target word detection method according to claim 1, wherein the step of performing knowledge extraction on the preset knowledge graph according to the text entity features to obtain entity triples comprises:
    traversing each knowledge node of the knowledge graph according to the text entity features to obtain candidate attribute features corresponding to the text entity features;
    filtering the candidate attribute features according to feature connection paths of the knowledge graph to obtain target attribute features; and
    concatenating the target attribute features and the text entity features to obtain the entity triples.
  4. The target word detection method according to claim 1, wherein the target word detection model comprises a second embedding layer, a third embedding layer, a first GRU layer, and a second GRU layer, and the step of performing feature extraction on the original speech data through the pre-trained target word detection model to obtain a target text feature vector and performing feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector comprises:
    encoding the original speech data through the second embedding layer to obtain an initial text feature vector;
    encoding the text entity features through the third embedding layer to obtain an initial entity feature vector;
    performing feature extraction on the initial text feature vector through the first GRU layer to obtain the target text feature vector; and
    performing feature extraction on the initial entity feature vector through the second GRU layer to obtain the target entity feature vector.
  5. The target word detection method according to claim 1, wherein the target word detection model comprises a fourth embedding layer and a graph convolutional network layer, and the step of performing feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector comprises:
    encoding the entity triples through the fourth embedding layer to obtain an initial attribute feature vector; and
    performing graph convolution on the initial attribute feature vector through the graph convolutional network layer to obtain the target attribute feature vector.
  6. The target word detection method according to claim 5, wherein the target word detection model comprises a first attention mechanism layer, a second attention mechanism layer, and a third attention mechanism layer, and the step of performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector comprises:
    performing weighted calculation on the target text feature vector and the target entity feature vector through the first attention mechanism layer and a preset first weight ratio to obtain a first representation vector;
    performing weighted calculation on the target entity feature vector and the target attribute feature vector through the second attention mechanism layer and a preset second weight ratio to obtain a second representation vector; and
    performing weighted calculation on the first representation vector and the second representation vector through the third attention mechanism layer and a preset third weight ratio to obtain the target speech representation vector.
  7. The target word detection method according to any one of claims 1 to 6, wherein the target word detection model comprises a fully connected layer and a prediction layer, and the step of performing target word detection on the target speech representation vector through the target word detection model to obtain target word data comprises:
    mapping the target speech representation vector to a preset vector space through the fully connected layer to obtain a standard speech representation vector;
    performing label probability calculation through a prediction function of the prediction layer, speech category labels, and the standard speech representation vector to obtain a predicted probability value for each of the speech category labels; and
    obtaining the target word data according to the magnitude relationship between the predicted probability values and a preset prediction probability threshold.
  8. A target word detection apparatus, wherein the apparatus comprises:
    a data acquisition module, configured to acquire original speech data to be detected;
    an entity feature extraction module, configured to perform entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features;
    a knowledge extraction module, configured to perform knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples;
    a first feature extraction module, configured to perform feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and to perform feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector;
    a second feature extraction module, configured to perform feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector;
    a weighted calculation module, configured to perform weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector; and
    a target word detection module, configured to perform target word detection on the target speech representation vector through the target word detection model to obtain target word data.
  9. An electronic device, wherein the electronic device comprises a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for implementing connection and communication between the processor and the memory, and when the program is executed by the processor, a target word detection method is implemented, the target word detection method comprising:
    acquiring original speech data to be detected;
    performing entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features;
    performing knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples;
    performing feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and performing feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector;
    performing feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector;
    performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector; and
    performing target word detection on the target speech representation vector through the target word detection model to obtain target word data.
  10. The electronic device according to claim 9, wherein the feature extraction model comprises a first embedding layer, a Bi-LSTM layer, and a CRF layer, and the step of performing entity feature extraction on the original speech data through the pre-trained feature extraction model to obtain text entity features comprises:
    performing word embedding on the original speech data through the first embedding layer to obtain text word vectors;
    performing label probability calculation through a preset function of the Bi-LSTM layer, preset feature category labels, and the text word vectors to obtain a predicted probability value for each preset feature category label; and
    performing feature extraction according to preset constraint factors of the CRF layer and the predicted probability values to obtain the text entity features.
  11. The electronic device according to claim 9, wherein the step of performing knowledge extraction on the preset knowledge graph according to the text entity features to obtain entity triples comprises:
    traversing each knowledge node of the knowledge graph according to the text entity features to obtain candidate attribute features corresponding to the text entity features;
    filtering the candidate attribute features according to feature connection paths of the knowledge graph to obtain target attribute features; and
    concatenating the target attribute features and the text entity features to obtain the entity triples.
  12. The electronic device according to claim 9, wherein the target word detection model comprises a second embedding layer, a third embedding layer, a first GRU layer, and a second GRU layer, and the step of performing feature extraction on the original speech data through the pre-trained target word detection model to obtain a target text feature vector and performing feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector comprises:
    encoding the original speech data through the second embedding layer to obtain an initial text feature vector;
    encoding the text entity features through the third embedding layer to obtain an initial entity feature vector;
    performing feature extraction on the initial text feature vector through the first GRU layer to obtain the target text feature vector; and
    performing feature extraction on the initial entity feature vector through the second GRU layer to obtain the target entity feature vector.
  13. The electronic device according to claim 9, wherein the target word detection model comprises a fourth embedding layer and a graph convolutional network layer, and the step of performing feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector comprises:
    encoding the entity triples through the fourth embedding layer to obtain an initial attribute feature vector; and
    performing graph convolution on the initial attribute feature vector through the graph convolutional network layer to obtain the target attribute feature vector.
  14. The electronic device according to claim 13, wherein the target word detection model comprises a first attention mechanism layer, a second attention mechanism layer, and a third attention mechanism layer, and the step of performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector comprises:
    performing weighted calculation on the target text feature vector and the target entity feature vector through the first attention mechanism layer and a preset first weight ratio to obtain a first representation vector;
    performing weighted calculation on the target entity feature vector and the target attribute feature vector through the second attention mechanism layer and a preset second weight ratio to obtain a second representation vector; and
    performing weighted calculation on the first representation vector and the second representation vector through the third attention mechanism layer and a preset third weight ratio to obtain the target speech representation vector.
  15. A storage medium, the storage medium being a computer-readable storage medium for computer-readable storage, wherein the storage medium stores one or more programs executable by one or more processors to implement a target word detection method, the target word detection method comprising:
    acquiring original speech data to be detected;
    performing entity feature extraction on the original speech data through a pre-trained feature extraction model to obtain text entity features;
    performing knowledge extraction on a preset knowledge graph according to the text entity features to obtain entity triples;
    performing feature extraction on the original speech data through a pre-trained target word detection model to obtain a target text feature vector, and performing feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector;
    performing feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector;
    performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector; and
    performing target word detection on the target speech representation vector through the target word detection model to obtain target word data.
  16. The storage medium according to claim 15, wherein the feature extraction model comprises a first embedding layer, a Bi-LSTM layer, and a CRF layer, and the step of performing entity feature extraction on the original speech data through the pre-trained feature extraction model to obtain text entity features comprises:
    performing word embedding on the original speech data through the first embedding layer to obtain text word vectors;
    performing label probability calculation through a preset function of the Bi-LSTM layer, preset feature category labels, and the text word vectors to obtain a predicted probability value for each preset feature category label; and
    performing feature extraction according to preset constraint factors of the CRF layer and the predicted probability values to obtain the text entity features.
  17. The storage medium according to claim 15, wherein the step of performing knowledge extraction on the preset knowledge graph according to the text entity features to obtain entity triples comprises:
    traversing each knowledge node of the knowledge graph according to the text entity features to obtain candidate attribute features corresponding to the text entity features;
    filtering the candidate attribute features according to feature connection paths of the knowledge graph to obtain target attribute features; and
    concatenating the target attribute features and the text entity features to obtain the entity triples.
  18. The storage medium according to claim 15, wherein the target word detection model comprises a second embedding layer, a third embedding layer, a first GRU layer, and a second GRU layer, and the step of performing feature extraction on the original speech data through the pre-trained target word detection model to obtain a target text feature vector and performing feature extraction on the text entity features through the target word detection model to obtain a target entity feature vector comprises:
    encoding the original speech data through the second embedding layer to obtain an initial text feature vector;
    encoding the text entity features through the third embedding layer to obtain an initial entity feature vector;
    performing feature extraction on the initial text feature vector through the first GRU layer to obtain the target text feature vector; and
    performing feature extraction on the initial entity feature vector through the second GRU layer to obtain the target entity feature vector.
  19. The storage medium according to claim 15, wherein the target word detection model comprises a fourth embedding layer and a graph convolutional network layer, and the step of performing feature extraction on the entity triples through the target word detection model to obtain a target attribute feature vector comprises:
    encoding the entity triples through the fourth embedding layer to obtain an initial attribute feature vector; and
    performing graph convolution on the initial attribute feature vector through the graph convolutional network layer to obtain the target attribute feature vector.
  20. The storage medium according to claim 19, wherein the target word detection model comprises a first attention mechanism layer, a second attention mechanism layer, and a third attention mechanism layer, and the step of performing weighted calculation on the target text feature vector, the target attribute feature vector, and the target entity feature vector through the target word detection model to obtain a target speech representation vector comprises:
    performing weighted calculation on the target text feature vector and the target entity feature vector through the first attention mechanism layer and a preset first weight ratio to obtain a first representation vector;
    performing weighted calculation on the target entity feature vector and the target attribute feature vector through the second attention mechanism layer and a preset second weight ratio to obtain a second representation vector; and
    performing weighted calculation on the first representation vector and the second representation vector through the third attention mechanism layer and a preset third weight ratio to obtain the target speech representation vector.
PCT/CN2022/090743 2022-02-22 2022-04-29 Target word detection method and apparatus, electronic device, and storage medium WO2023159767A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210160972.1 2022-02-22
CN202210160972.1A CN114519356B (zh) 2022-02-22 2022-02-22 Target word detection method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2023159767A1 true WO2023159767A1 (zh) 2023-08-31

Family

ID=81599882

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/090743 WO2023159767A1 (zh) 2022-02-22 2022-04-29 Target word detection method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN114519356B (zh)
WO (1) WO2023159767A1 (zh)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115496039B (zh) * 2022-11-17 2023-05-12 Honor Device Co., Ltd. Word extraction method and computer device

Citations (5)

Publication number Priority date Publication date Assignee Title
US20180232443A1 (en) * 2017-02-16 2018-08-16 Globality, Inc. Intelligent matching system with ontology-aided relation extraction
CN110263324A (zh) * 2019-05-16 2019-09-20 Huawei Technologies Co., Ltd. Text processing method, model training method, and apparatus
CN111444709A (zh) * 2020-03-09 2020-07-24 Tencent Technology (Shenzhen) Co., Ltd. Text classification method, apparatus, storage medium, and device
US20200265116A1 (en) * 2019-02-14 2020-08-20 Wipro Limited Method and system for identifying user intent from user statements
CN113792818A (zh) * 2021-10-18 2021-12-14 Ping An Technology (Shenzhen) Co., Ltd. Intent classification method and apparatus, electronic device, and computer-readable storage medium

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN110826303A (zh) * 2019-11-12 2020-02-21 China University of Petroleum (East China) Joint information extraction method based on weakly supervised learning
US11663406B2 (en) * 2020-07-31 2023-05-30 Netapp, Inc. Methods and systems for automated detection of personal information using neural networks
CN112215004B (zh) * 2020-09-04 2023-05-02 The 28th Research Institute of China Electronics Technology Group Corporation Method for applying transfer learning to entity extraction from military equipment texts
CN113761893B (zh) * 2021-11-11 2022-02-11 深圳航天科创实业有限公司 Relation extraction method based on schema pre-training


Cited By (4)

Publication number Priority date Publication date Assignee Title
CN117195913A (zh) * 2023-11-08 2023-12-08 Tencent Technology (Shenzhen) Co., Ltd. Text processing method and apparatus, electronic device, storage medium, and program product
CN117195913B (zh) 2023-11-08 2024-02-27 Tencent Technology (Shenzhen) Co., Ltd. Text processing method and apparatus, electronic device, storage medium, and program product
CN117521639A (zh) * 2024-01-05 2024-02-06 Hunan University of Technology and Business Text detection method incorporating academic text structure
CN117521639B (zh) 2024-01-05 2024-04-02 Hunan University of Technology and Business Text detection method incorporating academic text structure

Also Published As

Publication number Publication date
CN114519356B (zh) 2023-07-18
CN114519356A (zh) 2022-05-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22928038

Country of ref document: EP

Kind code of ref document: A1