CN115510188A - Text keyword association method, device, equipment and storage medium


Info

Publication number: CN115510188A
Application number: CN202211149555.3A
Authority: CN (China)
Prior art keywords: vector, text, word, layer, vector conversion
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 邹倩霞, 徐亮
Current and original assignee: OneConnect Financial Technology Co Ltd Shanghai (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by OneConnect Financial Technology Co Ltd Shanghai, with priority to CN202211149555.3A


Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; database structures and file system structures therefor
    • G06F16/30 Information retrieval of unstructured textual data
    • G06F16/33 Querying; G06F16/3331 Query processing
    • G06F16/3332 Query translation; G06F16/3338 Query expansion
    • G06F16/334 Query execution; G06F16/3344 Query execution using natural language analysis; G06F16/3347 Query execution using a vector-based model
    • G06F40/00 Handling natural language data; G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities; G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/30 Semantic analysis

Abstract

The invention relates to artificial intelligence technology and discloses a text keyword association method comprising the following steps: adding multiple semantic feature layers to a pre-constructed basic vector conversion network to obtain an original vector conversion model; training the original vector conversion model on a service text data set to obtain a standard vector conversion model; extracting a candidate associated word set based on the word frequency and point mutual information values of the words in the text to be associated; performing vector conversion on the candidate associated word set with the standard vector conversion model to obtain an associated word vector set; and associating related words with target keywords in the candidate associated word set based on the similarity of each vector, to obtain an associated vocabulary graph. The invention also relates to blockchain technology: the associated vocabulary may be stored in nodes of a blockchain. The invention further provides a text keyword association apparatus, an electronic device and a readable storage medium. The method and apparatus can improve the accuracy of text keyword association.

Description

Text keyword association method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a text keyword association method and device, electronic equipment and a readable storage medium.
Background
With the development of artificial intelligence, text analysis has become increasingly important. When performing large-scale text analysis, it is often necessary to know what the texts are mainly about; keywords are usually extracted from the texts and then subjected to associated-vocabulary analysis, so that the general meaning of the texts can be understood intuitively.
In the prior art, a common method for associating words with keywords is mainly to determine associated words by screening on parameters such as word frequency (hot-word analysis, word clouds, and the like), part of speech after word segmentation, and PMI (point mutual information). However, word association based on such rule parameters is not ideal: it produces a great deal of noise, and the larger the amount of text, the more spurious associated words are generated, so word association for keywords cannot be performed accurately.
Disclosure of Invention
The invention provides a text keyword association method, a text keyword association device, electronic equipment and a readable storage medium, and mainly aims to improve the accuracy of text keyword association.
In order to achieve the above object, the present invention provides a method for associating text keywords, comprising:
adding a plurality of semantic feature layers in a pre-constructed basic vector conversion network to obtain an original vector conversion model;
acquiring a service text data set, and performing model training on the original vector conversion model by using the service text data set to obtain a standard vector conversion model;
acquiring a text to be associated, and extracting a candidate associated word set from the text to be associated based on the word frequency and the point mutual information value of the words in the text to be associated;
and performing vector conversion on the candidate associated word set by using the standard vector conversion model to obtain an associated word vector set, and performing associated word association on target keywords in the candidate associated word set based on the similarity of each vector in the associated word vector set to obtain an associated vocabulary diagram.
Optionally, adding a plurality of semantic feature layers to the pre-constructed basic vector conversion network to obtain an original vector conversion model, including:
adding an entity identification layer and a filter layer behind an input layer of the basic vector conversion network;
adding a dependency syntax analysis layer and a part-of-speech tagging layer between a mapping layer and an output layer of the basic vector conversion network, wherein the dependency syntax analysis layer is connected with the part-of-speech tagging layer in series;
and taking the model added with the entity identification layer, the filtering layer, the dependency syntactic analysis layer and the part of speech tagging layer as an original vector conversion model.
Optionally, the performing model training on the original vector conversion model by using the service text data set to obtain a standard vector conversion model includes:
performing sliding selection on texts in the service text data set by using a sliding window with a preset size to obtain a first training text set, and inputting the first training text set into an input layer of an original vector conversion model;
utilizing an entity recognition layer of the original vector conversion model to perform entity recognition and marking on the texts in the first training text set, and deleting entities with preset marks to obtain a second training text set;
deleting preset types of texts in the second training text set by using the filter layer of the original vector conversion model to obtain a third training text set;
carrying out one-hot coding on the texts in the third training text set to obtain a coding vector set;
carrying out weighted average on vectors in the coding vector set by utilizing a mapping layer of the original vector conversion model to obtain weighted vectors;
performing semantic splicing on the weighted vector by utilizing a dependency syntax analysis layer and a part-of-speech tagging layer of the original vector conversion model to obtain a spliced vector;
and outputting the prediction probability of the spliced vector by using an output layer of the original vector conversion model, adjusting the model parameters of the original vector conversion model when the prediction probability is smaller than a pre-constructed prediction threshold, returning to the step of performing weighted average on the vectors in the coding vector set by using a mapping layer of the original vector conversion model until the original vector conversion model is converged, and stopping training to obtain the standard vector conversion model.
Optionally, the performing semantic stitching on the weighted vector by using the dependency syntax analysis layer and the part-of-speech tagging layer of the original vector conversion model to obtain a stitched vector includes:
marking the grammatical relation among the words in the weighted vector by utilizing the dependency syntax analysis layer to obtain a grammatical relation vector;
marking the part-of-speech relation among the words in the weighted vector by using the part-of-speech tagging layer to obtain a part-of-speech tagging vector;
and splicing the grammatical relation vector and the labeling vector to obtain a spliced vector.
Optionally, the extracting a candidate associated word set from the text to be associated based on the word frequency and the point mutual information value of the vocabulary in the text to be associated includes:
performing word segmentation processing on the text to be associated, counting word frequency of each word segmentation, and taking the word segmentation with the word frequency larger than or equal to a preset word frequency threshold value as a high-frequency word;
and calculating point mutual information among the high-frequency words, taking the high-frequency words of which the point mutual information is greater than or equal to a preset information threshold value as candidate associated words, and summarizing all the candidate associated words to obtain a candidate associated word set.
Optionally, the performing vector conversion on the candidate related word set by using the standard vector conversion model to obtain a related word vector set, and performing related word association on a target keyword in the candidate related word set based on similarity of each vector in the related word vector set to obtain a related vocabulary diagram, including:
selecting a target keyword from the candidate associated word set based on a user instruction;
performing vector conversion on the target keywords and the non-target keywords in the candidate associated word set by using the standard vector conversion model to obtain an associated word vector set comprising target vectors and non-target vectors;
calculating the similarity between the non-target vector and the target vector, and taking the non-target vector with the similarity larger than or equal to a preset similarity threshold as a first associated vector;
calculating the similarity between the non-target vector and the first association vector, and taking the non-target vector with the similarity larger than or equal to a preset similarity threshold value as a second association vector;
taking the target keyword corresponding to the target vector as a root node, taking the candidate associated word corresponding to the first associated vector as a first associated node, and taking the candidate associated word corresponding to the second associated vector as a second associated node;
and connecting the root node with the first associated node, and connecting the first associated node with the second associated node to obtain the associated vocabulary diagram.
Optionally, the weighted vector is calculated by the following formula:

$$V(t) = \frac{1}{n}\sum_{k=1}^{n} W1_k \cdot E_k$$

where $V(t)$ denotes the weighted vector, $E_k$ denotes the $k$-th text vector, $W1_k$ denotes the first weight matrix corresponding to the $k$-th text vector, and $n$ denotes the number of texts in the third training text set.
In order to solve the above problem, the present invention further provides a text keyword association apparatus, including:
the model construction module is used for adding a plurality of semantic feature layers in a pre-constructed basic vector conversion network to obtain an original vector conversion model;
the model training module is used for acquiring a service text data set, and performing model training on the original vector conversion model by using the service text data set to obtain a standard vector conversion model;
the candidate associated word extraction module is used for acquiring a text to be associated and extracting a candidate associated word set from the text to be associated based on the word frequency and the point mutual information value of the words in the text to be associated;
and the text keyword association module is used for performing vector conversion on the candidate associated word set by using the standard vector conversion model to obtain an associated word vector set, and performing associated word association on the target keywords in the candidate associated word set based on the similarity of each vector in the associated word vector set to obtain an associated vocabulary diagram.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one computer program; and
and the processor executes the computer program stored in the memory to realize the text keyword association method.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is executed by a processor in an electronic device to implement the text keyword association method described above.
According to the invention, a plurality of semantic feature layers are added in the pre-constructed basic vector conversion network, and model training is carried out on the original vector conversion model by utilizing the service text data set of real service, so that noise data can be reduced, the incidence relation between semantics can be learned, and the accuracy of vocabulary association is improved. Meanwhile, a candidate associated word set is extracted through word frequency and point mutual information values, vector conversion is carried out on the candidate associated words through a standard vector conversion model, the compactness of each candidate associated word is further determined through the similarity between vectors, an associated word graph is obtained, and the association relation between the words can be accurately and visually reflected. Therefore, the text keyword association method, the text keyword association device, the electronic equipment and the computer readable storage medium can improve the accuracy of text keyword association.
Drawings
Fig. 1 is a schematic flowchart of a text keyword association method according to an embodiment of the present invention;
FIG. 2 is a functional block diagram of an apparatus for associating text keywords according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device for implementing the text keyword association method according to an embodiment of the present invention.
The implementation, functional features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides a text keyword association method. The execution subject of the text keyword association method includes, but is not limited to, at least one of electronic devices such as a server and a terminal that can be configured to execute the method provided by the embodiment of the present invention. In other words, the text keyword association method may be performed by software or hardware installed in the terminal device or the server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
Fig. 1 is a schematic flow chart of a text keyword association method according to an embodiment of the present invention. In this embodiment, the text keyword association method includes the following steps S1 to S4:
s1, adding a plurality of semantic feature layers in a pre-constructed basic vector conversion network to obtain an original vector conversion model.
In the embodiment of the present invention, the pre-constructed basic vector conversion network may be a CBOW (Continuous Bag-of-Words) model, whose central idea is to predict the middle word from its context information, thereby training the word vector of each word.
In an optional embodiment of the present invention, the pre-constructed basis vector transformation network includes an input layer, a mapping layer, and an output layer.
The basic vector conversion network mainly comprises an INPUT layer, a PROJECTION (mapping) layer and an OUTPUT layer: the INPUT layer obtains a word vector for each word, the PROJECTION layer superimposes these vectors, and the OUTPUT layer obtains the predicted keyword through a regression analysis operation on the word vectors.
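As a rough illustration of this architecture, the following NumPy sketch runs a plain CBOW forward pass; all names, shapes and the random weights are illustrative, not from the patent. Context word vectors are looked up at the input layer, superimposed by averaging at the projection layer, and turned into middle-word probabilities at the output layer:

```python
import numpy as np

def cbow_forward(context_ids, W1, W2):
    """Plain CBOW forward pass: average the context word embeddings
    (input/projection layers), then score every vocabulary word
    (output layer) with a softmax."""
    # INPUT layer: look up one embedding row per context word
    # (equivalent to multiplying one-hot vectors by W1).
    context_vecs = W1[context_ids]          # shape: (n_context, dim)
    # PROJECTION layer: superimpose (average) the context vectors.
    h = context_vecs.mean(axis=0)           # shape: (dim,)
    # OUTPUT layer: a score per vocabulary word, softmax-normalized.
    scores = W2.T @ h                       # shape: (vocab,)
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()                  # probability of each middle word

rng = np.random.default_rng(0)
vocab, dim = 10, 4
W1 = rng.normal(size=(vocab, dim))          # input-side embeddings
W2 = rng.normal(size=(dim, vocab))          # output-side weights
probs = cbow_forward([1, 2, 4, 5], W1, W2)  # predict the middle word
```

Training would adjust W1 and W2 so that the true middle word gets high probability; W1's rows then serve as the word vectors.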
In the embodiment of the present invention, adding multiple semantic feature layers to a pre-constructed basic vector transformation network to obtain an original vector transformation model includes:
adding an entity identification layer and a filter layer behind an input layer of the basic vector conversion network;
adding a dependency syntax analysis layer and a part-of-speech tagging layer between a mapping layer and an output layer of the basic vector conversion network, wherein the dependency syntax analysis layer is connected with the part-of-speech tagging layer in series;
and taking the model added with the entity identification layer, the filtering layer, the dependency syntactic analysis layer and the part of speech tagging layer as an original vector conversion model.
In an optional embodiment of the present invention, the semantic feature layers include an entity recognition layer, a filtering layer, a dependency syntax analysis layer, and a part-of-speech tagging layer. The Named Entity Recognition (NER) layer marks or filters certain entities in the text, such as times, numbers and amounts; the FILTERING layer filters out words without practical meaning, such as punctuation marks, stop words and function words; the dependency syntax analysis (DEP) layer analyzes the grammatical relations between the words in a sentence and expresses them as a tree structure, yielding the dependency relations of the context; the part-of-speech (POS) tagging layer tags the part of speech (also called word class or grammatical category) of each word in a sentence. Taking the HanLP natural language processing toolkit as an example, methods such as HMM part-of-speech tagging, perceptron part-of-speech tagging and CRF part-of-speech tagging are mainly used.
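The filtering effect of the NER and FILTERING layers can be imitated with simple token filters. The sketch below is a stand-in under stated assumptions: the stop-word list and the regular expressions are invented for illustration, and the patent's actual layers operate inside the model rather than as standalone functions:

```python
import re

# Illustrative stand-ins for the NER and FILTERING layers: drop
# numeric/time/amount-like tokens, then punctuation and stop words.
STOP_WORDS = {"的", "是", "了", "and", "the", "of"}          # assumed list
NUMERIC_ENTITY = re.compile(r"^[0-9][0-9.,:%/-]*$")          # numbers, amounts, dates
PUNCT = re.compile(r"^\W+$")

def ner_filter(tokens):
    """Mimic the NER layer: delete entities carrying a numeric mark."""
    return [t for t in tokens if not NUMERIC_ENTITY.match(t)]

def filtering(tokens):
    """Mimic the FILTERING layer: delete punctuation and stop words."""
    return [t for t in tokens if not PUNCT.match(t) and t not in STOP_WORDS]

tokens = ["基金", "收益", "是", "3.5%", ",", "2022", "稳健"]
clean = filtering(ner_filter(tokens))
```

Here `"3.5%"` and `"2022"` are removed as numeric entities, while the stop word and punctuation fall to the filtering step, leaving only content words for the encoder.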
In the embodiment of the invention, by adding multiple semantic feature layers, the model can output word vectors that pay more attention to the contextual information of the text, thereby improving the accuracy of keyword association.
S2, acquiring a service text data set, and performing model training on the original vector conversion model by using the service text data set to obtain a standard vector conversion model.
In the embodiment of the present invention, the service text data set may consist of service texts from different fields. In the financial field, for example, the service text data set may include product texts, transaction texts and user texts for products such as insurance products and funds.
In detail, the performing model training on the original vector conversion model by using the service text data set to obtain a standard vector conversion model includes:
performing sliding selection on texts in the service text data set by using a sliding window with a preset size to obtain a first training text set and inputting the first training text set into an input layer of an original vector conversion model;
utilizing an entity recognition layer of the original vector conversion model to perform entity recognition and marking on texts in the first training text set, and deleting entities with preset marks to obtain a second training text set;
deleting the preset type of texts in the second training text set by using the filter layer of the original vector conversion model to obtain a third training text set;
carrying out one-hot coding on the texts in the third training text set to obtain a coding vector set;
carrying out weighted average on vectors in the coding vector set by utilizing a mapping layer of the original vector conversion model to obtain weighted vectors;
performing semantic splicing on the weighted vector by utilizing a dependency syntax analysis layer and a part-of-speech tagging layer of the original vector conversion model to obtain a spliced vector;
and outputting the prediction probability of the spliced vector by using an output layer of the original vector conversion model, adjusting the model parameters of the original vector conversion model when the prediction probability is smaller than a pre-constructed prediction threshold, returning to the step of performing weighted average on the vectors in the coding vector set by using a mapping layer of the original vector conversion model until the original vector conversion model is converged, and stopping training to obtain the standard vector conversion model.
In an optional embodiment of the present invention, if the size of the sliding window is n, the n words before and after the central word are selected as the training text. For example, when the service text is "XX是货币基金" ("XX is a money-market fund") and n = 2, then when the middle character is "币", the text received by the input layer includes "货", "基" and "金". The preset mark may be a numeric mark, so that times, numbers, amounts and the like can be filtered out by deleting entities carrying the numeric mark; the preset type of text may be preset stop words, function words and the like.
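The sliding-window selection above can be sketched as follows; tokenizing the Chinese example into single characters is an assumption made for illustration:

```python
def sliding_windows(tokens, n):
    """For each position, select the n tokens before and after the
    center token, yielding (context, center) training pairs."""
    pairs = []
    for i, center in enumerate(tokens):
        context = tokens[max(0, i - n):i] + tokens[i + 1:i + 1 + n]
        pairs.append((context, center))
    return pairs

# With window size n = 2, the center token "币" in "是 货 币 基 金"
# gets the context ["是", "货", "基", "金"].
pairs = sliding_windows(["是", "货", "币", "基", "金"], 2)
```

At the edges of the text the window is simply truncated, so the first and last positions receive fewer than 2n context tokens.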
In an optional embodiment of the present invention, the weighted vector is calculated by the following formula:

$$V(t) = \frac{1}{n}\sum_{k=1}^{n} W1_k \cdot E_k$$

where $V(t)$ denotes the weighted vector, $E_k$ denotes the $k$-th text vector, $W1_k$ denotes the first weight matrix corresponding to the $k$-th text vector, and $n$ denotes the number of texts in the third training text set.
In detail, the semantic stitching is performed on the weighted vector by using the dependency syntax analysis layer and the part-of-speech tagging layer of the original vector conversion model to obtain a stitched vector, and the method includes:
marking the grammatical relation among the words in the weighted vector by utilizing the dependency syntax analysis layer to obtain a grammatical relation vector;
marking the part-of-speech relation among the words in the weighted vector by using the part-of-speech tagging layer to obtain a part-of-speech tagging vector;
and splicing the grammar relation vector and the label vector to obtain a spliced vector.
In an optional embodiment of the present invention, the dependency syntax analysis layer and the part-of-speech tagging layer may be constructed with the HanLP natural language processing toolkit.
In an optional embodiment of the invention, the forward pass proceeds as follows. After the service text at the input layer is filtered through the NER layer and the FILTERING layer, the text is one-hot encoded into vectors. In the SUM (mapping) layer, each encoded vector is multiplied by the first weight matrix W1 and the results are averaged to obtain the weighted vector. The weighted vector is input into the DEP layer and the POS layer, and their outputs are spliced to obtain the spliced vector. The spliced vector is multiplied by a second weight matrix W2 and output to the next OUTPUT layer, where softmax predicts the probability of the middle word; this is iterated repeatedly to obtain the standard vector conversion model. Because the traditional CBOW model is improved to make full use of multiple kinds of linguistic knowledge (NER, DEP, POS and the like), the model can learn the association information between keywords more accurately, and training on large-scale financial texts yields word vectors better suited to financial-domain knowledge.
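A minimal NumPy sketch of this modified forward pass follows. The DEP and POS feature vectors are random stand-ins (a real system would derive them from a dependency parser and a POS tagger), and all shapes and weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab, dim, feat = 8, 4, 3
W1 = rng.normal(size=(vocab, dim))             # first weight matrix
W2 = rng.normal(size=(dim + 2 * feat, vocab))  # second weight matrix

def forward(context_ids, dep_feats, pos_feats):
    """Modified CBOW pass: W1-weighted average of the context vectors,
    splice on DEP and POS features, then W2 and softmax."""
    # Mapping (SUM) layer: weighted average of the encoded vectors.
    weighted = W1[context_ids].mean(axis=0)                # (dim,)
    # DEP and POS layers: splice their feature vectors onto the
    # weighted vector (random stand-ins here).
    spliced = np.concatenate([weighted, dep_feats, pos_feats])
    # OUTPUT layer: softmax over the vocabulary.
    scores = W2.T @ spliced
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()                                 # middle-word probabilities

probs = forward([0, 3, 5], rng.normal(size=feat), rng.normal(size=feat))
```

Training compares these probabilities with the true middle word and adjusts W1 and W2 (and, in the patent's scheme, repeats from the mapping layer) until convergence.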
And S3, acquiring a text to be associated, and extracting a candidate associated word set from the text to be associated based on the word frequency and the point mutual information value of the words in the text to be associated.
In the embodiment of the invention, the text to be associated may be a financial-domain text whose vocabulary is to be associated. The point mutual information (PMI) value is used to measure the correlation between two words.
In detail, the extracting a candidate associated word set from the text to be associated based on the word frequency and the point mutual information value of the vocabulary in the text to be associated includes:
performing word segmentation processing on the text to be associated, counting word frequency of each word segmentation, and taking the word segmentation with the word frequency larger than or equal to a preset word frequency threshold value as a high-frequency word;
and calculating point mutual information among the high-frequency words, taking the high-frequency words of which the point mutual information is greater than or equal to a preset information threshold value as candidate associated words, and summarizing all the candidate associated words to obtain a candidate associated word set.
In an optional embodiment of the present invention, the point mutual information between high-frequency words is calculated by the following formula:

$$\mathrm{PMI}(x, y) = \log \frac{p(x, y)}{p(x)\,p(y)}$$

where $\mathrm{PMI}(x, y)$ denotes the point mutual information between high-frequency word $x$ and high-frequency word $y$, $p(x, y)$ denotes the probability of $x$ and $y$ appearing together, $p(x)$ denotes the probability of $x$ appearing, and $p(y)$ denotes the probability of $y$ appearing.
And S4, performing vector conversion on the candidate associated word set by using the standard vector conversion model to obtain an associated word vector set, and performing associated word association on target keywords in the candidate associated word set based on the similarity of each vector in the associated word vector set to obtain an associated vocabulary diagram.
In detail, the performing vector conversion on the candidate related word set by using the standard vector conversion model to obtain a related word vector set, and performing related word association on the target keyword in the candidate related word set based on the similarity of each vector in the related word vector set to obtain a related vocabulary diagram includes:
selecting a target keyword from the candidate associated word set based on a user instruction;
performing vector conversion on the target keywords and the non-target keywords in the candidate associated word set by using the standard vector conversion model to obtain an associated word vector set comprising target vectors and non-target vectors;
calculating the similarity between the non-target vector and the target vector, and taking the non-target vector with the similarity larger than or equal to a preset similarity threshold value as a first association vector;
calculating the similarity between the non-target vector and the first association vector, and taking the non-target vector with the similarity larger than or equal to a preset similarity threshold value as a second association vector;
taking the target keyword corresponding to the target vector as a root node, taking the candidate associated word corresponding to the first associated vector as a first associated node, and taking the candidate associated word corresponding to the second associated vector as a second associated node;
and connecting the root node with the first associated node, and connecting the first associated node with the second associated node to obtain the associated vocabulary diagram.
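The two-level association described above can be sketched as follows; the word vectors and the similarity threshold are invented for illustration:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_graph(words, vecs, target, threshold=0.9):
    """Root node = target keyword; first-level nodes = words similar to
    the target; second-level nodes = remaining words similar to a
    first-level word. Returns the edges of the associated vocabulary graph."""
    edges, first = [], []
    for w in words:
        if w != target and cosine(vecs[w], vecs[target]) >= threshold:
            first.append(w)
            edges.append((target, w))          # root -> first-level node
    for f in first:
        for w in words:
            if w not in (target, *first) and cosine(vecs[w], vecs[f]) >= threshold:
                edges.append((f, w))           # first-level -> second-level node
    return edges

vecs = {
    "fund":    np.array([1.0, 0.1, 0.0]),
    "yield":   np.array([0.9, 0.2, 0.1]),   # similar to "fund"
    "risk":    np.array([0.8, 0.5, 0.1]),   # similar to "yield" but not "fund"
    "weather": np.array([0.0, 0.1, 1.0]),   # unrelated
}
edges = build_graph(list(vecs), vecs, "fund")
```

"yield" attaches directly to the root "fund", "risk" attaches to "yield" at the second level, and "weather" never crosses the threshold, so it stays out of the graph.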
In the embodiment of the invention, the vector conversion is carried out through the standard vector conversion model, semantic features can be added into the vector, and the accuracy of semantic association is improved.
In an alternative embodiment of the present invention, the similarity between vectors may be calculated by a cosine similarity algorithm.
According to the invention, a plurality of semantic feature layers are added in the pre-constructed basic vector conversion network, and model training is carried out on the original vector conversion model by utilizing the service text data set of real service, so that noise data can be reduced, the incidence relation between semantics can be learned, and the accuracy of vocabulary association is improved. Meanwhile, a candidate associated word set is extracted through the word frequency and point mutual information values, then a standard vector conversion model is used for carrying out vector conversion on the candidate associated words, the compactness of each candidate associated word is further determined through the similarity between vectors, an associated word graph is obtained, and the association relation between the words can be accurately and visually reflected. Therefore, the text keyword association method provided by the invention can improve the accuracy of text keyword association.
Fig. 2 is a functional block diagram of a text keyword association apparatus according to an embodiment of the present invention.
The text keyword association apparatus 100 according to the present invention may be installed in an electronic device. According to the realized functions, the text keyword association apparatus 100 may include a model construction module 101, a model training module 102, a candidate associated word extraction module 103, and a text keyword association module 104. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the model building module 101 is configured to add multiple semantic feature layers to a pre-built basic vector conversion network to obtain an original vector conversion model;
the model training module 102 is configured to obtain a service text data set, perform model training on the original vector conversion model by using the service text data set, and obtain a standard vector conversion model;
the candidate associated word extraction module 103 is configured to acquire a text to be associated, and extract a candidate associated word set from the text to be associated based on the word frequency and the pointwise mutual information value of the words in the text to be associated;
the text keyword association module 104 is configured to perform vector conversion on the candidate associated word set by using the standard vector conversion model to obtain an associated word vector set, and perform associated word association on a target keyword in the candidate associated word set based on similarity of each vector in the associated word vector set to obtain an associated vocabulary diagram.
In detail, when executed, the modules of the text keyword association apparatus 100 implement the following steps:
step one, adding a plurality of semantic feature layers in a pre-constructed basic vector conversion network to obtain an original vector conversion model.
In the embodiment of the present invention, the pre-constructed basic vector conversion network may be a CBOW (Continuous Bag-of-Words) model, whose central idea is to predict the center word from its context information so as to train a word vector for each word.
In an optional embodiment of the present invention, the pre-constructed basic vector conversion network comprises an input layer, a mapping layer and an output layer.
The basic vector conversion network mainly comprises an INPUT layer, a PROJECTION layer and an OUTPUT layer: the INPUT layer obtains a word vector for each word, the PROJECTION layer superposes these vectors, and the OUTPUT layer obtains the predicted keyword through a regression analysis operation on the word vectors.
In the embodiment of the present invention, adding multiple semantic feature layers to a pre-constructed basic vector transformation network to obtain an original vector transformation model includes:
adding an entity identification layer and a filter layer behind an input layer of the basic vector conversion network;
adding a dependency syntax analysis layer and a part-of-speech tagging layer between a mapping layer and an output layer of the basic vector conversion network, wherein the dependency syntax analysis layer is connected with the part-of-speech tagging layer in series;
and taking the model added with the entity identification layer, the filter layer, the dependency syntax analysis layer and the part of speech tagging layer as an original vector conversion model.
In an optional embodiment of the present invention, the semantic feature layers include an entity identification layer, a filtering layer, a dependency syntax analysis layer, and a part-of-speech tagging layer. The entity identification (Named Entity Recognition, NER) layer marks or filters certain entities in the text, such as times, numbers, and amounts, which are to be filtered out; the filtering (FILTERING) layer removes words without practical meaning, such as punctuation marks, stop words, and function words; the dependency syntax analysis (DEP) layer analyzes the grammatical relations between the words in a sentence and expresses them as a tree structure, yielding the dependency relations of the context; the part-of-speech tagging (POS) layer tags the part of speech (also called word class or grammatical category) of each word in a sentence; taking the HanLP natural language processing toolkit as an example, methods such as HMM part-of-speech tagging, perceptron part-of-speech tagging, and CRF part-of-speech tagging are mainly used.
In the embodiment of the invention, by adding a plurality of semantic feature layers, the model can output word vectors which concern the context information of the text more, thereby improving the accuracy of key word association.
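As a rough illustration only, the entity identification and filtering layers described above can be approximated with a regular expression for numeric entities (times, numbers, amounts) and a stop-word list. The pattern, the stop-word set, and the function names are assumptions for this sketch, not the patent's actual layers.

```python
import re

# Illustrative stop/function-word list for the filtering layer.
STOP_WORDS = {"the", "is", "a", "of", "in", ",", "."}

def entity_filter(tokens):
    # Entity identification layer (sketch): drop tokens that look like
    # numbers, amounts, or dates, which the text marks for filtering.
    return [t for t in tokens if not re.fullmatch(r"[\d.,%$]+", t)]

def stop_filter(tokens):
    # Filtering layer (sketch): drop punctuation and stop words that
    # carry no practical meaning.
    return [t for t in tokens if t.lower() not in STOP_WORDS]

tokens = ["The", "fund", "returned", "7.5%", "in", "2021", "."]
cleaned = stop_filter(entity_filter(tokens))
```

A production system would use a trained NER model rather than a regular expression, but the two-stage structure is the same.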
And step two, acquiring a service text data set, and performing model training on the original vector conversion model by using the service text data set to obtain a standard vector conversion model.
In the embodiment of the present invention, the service text data set may consist of business texts from different fields; for example, in the financial field, the service text data set may include product texts, transaction texts, and user texts for products such as insurance and funds.
In detail, the performing model training on the original vector conversion model by using the service text data set to obtain a standard vector conversion model includes:
performing sliding selection on texts in the service text data set by using a sliding window with a preset size to obtain a first training text set, and inputting the first training text set into an input layer of an original vector conversion model;
utilizing an entity recognition layer of the original vector conversion model to perform entity recognition and marking on the texts in the first training text set, and deleting entities with preset marks to obtain a second training text set;
deleting the preset type of texts in the second training text set by using the filter layer of the original vector conversion model to obtain a third training text set;
performing one-hot coding on the texts in the third training text set to obtain a coding vector set;
carrying out weighted average on vectors in the coding vector set by utilizing a mapping layer of the original vector conversion model to obtain weighted vectors;
performing semantic splicing on the weighted vector by using a dependency syntax analysis layer and a part-of-speech tagging layer of the original vector conversion model to obtain a spliced vector;
and outputting the prediction probability of the spliced vector by using an output layer of the original vector conversion model, adjusting the model parameters of the original vector conversion model when the prediction probability is smaller than a pre-constructed prediction threshold, returning to the step of performing weighted average on the vectors in the coding vector set by using a mapping layer of the original vector conversion model until the original vector conversion model is converged, and stopping training to obtain the standard vector conversion model.
In an optional embodiment of the present invention, if the size of the sliding window is n, the n words before and after the central word are selected as the training text. For example, when the business text is "XX is a money fund" and n = 2, and the center word is "money", the input layer receives "is", "a", and "fund". The preset mark may be a numeric mark, so that times, numbers, amounts, and the like can be filtered out by deleting entities carrying the numeric mark; the preset type of text may be preset stop words, function words, and the like.
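The sliding-window selection above can be sketched as follows; the tokenization and the handling of windows at sentence boundaries are simplifying assumptions.

```python
def context_windows(tokens, n):
    # For each center word, take up to n words before and after it as
    # the CBOW-style training context.
    samples = []
    for i, center in enumerate(tokens):
        ctx = tokens[max(0, i - n):i] + tokens[i + 1:i + 1 + n]
        samples.append((ctx, center))
    return samples

samples = context_windows(["XX", "is", "a", "money", "fund"], 2)
# For the center word "money", the context is ["is", "a", "fund"].
```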
In an optional embodiment of the present invention, the weighted vector is calculated by the following formula:
V(t) = (1/n) · Σ_{k=1}^{n} W1_k · E_k
wherein V(t) represents the weighted vector, E_k denotes the kth text vector, W1_k represents the first weight matrix corresponding to the kth text vector, and n represents the number of texts in the third training text set.
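The mapping layer's weighted average, V(t) = (1/n) Σ_k W1_k · E_k, can be sketched numerically. To keep the sketch short it uses one scalar weight per text vector in place of a full weight matrix; the names and values are illustrative assumptions.

```python
def weighted_average(text_vectors, weights):
    # V(t) = (1/n) * sum_k W1_k * E_k, with scalar weights W1_k here
    # standing in for the first weight matrix of the patent.
    n = len(text_vectors)
    dim = len(text_vectors[0])
    v = [0.0] * dim
    for w, e in zip(weights, text_vectors):
        for j in range(dim):
            v[j] += w * e[j]
    return [x / n for x in v]

# Two 2-dimensional encoded vectors with illustrative weights.
v = weighted_average([[1.0, 0.0], [0.0, 1.0]], [2.0, 4.0])
```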
In detail, the semantic stitching is performed on the weighted vector by using the dependency syntax analysis layer and the part-of-speech tagging layer of the original vector conversion model to obtain a stitched vector, and the method includes:
marking the grammatical relation among the words in the weighted vector by utilizing the dependency syntax analysis layer to obtain a grammatical relation vector;
marking the part-of-speech relation among the words in the weighted vector by using the part-of-speech tagging layer to obtain a part-of-speech tagging vector;
and splicing the grammar relation vector and the label vector to obtain a spliced vector.
In an optional embodiment of the present invention, the dependency parsing layer and the part-of-speech tagging layer may be constructed by means of the HanLP natural language processing toolkit.
In an optional embodiment of the invention, after the business text at the input layer is filtered through the NER layer and the FILTERING layer, the text is one-hot encoded into vectors. At the SUM (mapping) layer, each encoded vector is multiplied by the first weight matrix W1 and the results are averaged to obtain the weighted vector. The weighted vector is then input into the DEP layer and the POS layer respectively, and the outputs are spliced to obtain the spliced vector. The spliced vector is multiplied by the second weight matrix W2 and passed to the OUTPUT layer, where softmax predicts the probability of the center word; this process is iterated repeatedly to obtain the standard vector conversion model. Because the traditional CBOW model is improved and multiple kinds of linguistic knowledge (NER, DEP, POS, and the like) are fully utilized, the model can learn the association information among keywords more accurately, and training on large-scale financial texts yields word vectors better suited to knowledge in the financial field.
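The forward pass just described (one-hot encode the context, project through W1 and average, splice on DEP/POS features, project through W2, apply softmax) can be sketched numerically. The random weight matrices and the two-dimensional semantic feature vector are placeholders, not trained values, and the vocabulary is illustrative.

```python
import math
import random

random.seed(0)

def softmax(z):
    m = max(z)
    exps = [math.exp(x - m) for x in z]
    s = sum(exps)
    return [e / s for e in exps]

def matvec(m, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

vocab = ["XX", "is", "a", "money", "fund"]
V, H, F = len(vocab), 3, 2  # vocab size, hidden size, DEP+POS feature size
W1 = [[random.random() for _ in range(V)] for _ in range(H)]
W2 = [[random.random() for _ in range(H + F)] for _ in range(V)]

def forward(context_words, sem_features):
    # One-hot encode each context word, project through W1, then average.
    hidden = [0.0] * H
    for w in context_words:
        onehot = [1.0 if t == w else 0.0 for t in vocab]
        h = matvec(W1, onehot)
        for j in range(H):
            hidden[j] += h[j] / len(context_words)
    # Splice on the DEP/POS semantic features, project through W2, softmax.
    spliced = hidden + list(sem_features)
    return softmax(matvec(W2, spliced))

probs = forward(["is", "money"], [0.1, 0.9])
```

Training would then compare `probs` against the one-hot target of the true center word and back-propagate into W1 and W2; that loop is omitted here.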
And thirdly, acquiring a text to be associated, and extracting a candidate associated word set from the text to be associated based on the word frequency and the pointwise mutual information values of the words in the text to be associated.
In the embodiment of the invention, the text to be associated may be a text in the financial field whose vocabulary is to be associated. The Pointwise Mutual Information (PMI) value is used to measure the correlation between two words.
In detail, the extracting a candidate associated word set from the text to be associated based on the word frequency and the pointwise mutual information value of the vocabulary in the text to be associated includes:
performing word segmentation processing on the text to be associated, counting word frequency of each word segmentation, and taking the word segmentation with the word frequency larger than or equal to a preset word frequency threshold value as a high-frequency word;
and calculating the pointwise mutual information between the high-frequency words, taking the high-frequency words whose pointwise mutual information is greater than or equal to a preset information threshold as candidate associated words, and summarizing all the candidate associated words to obtain a candidate associated word set.
In an optional embodiment of the present invention, the pointwise mutual information between high-frequency words is calculated by the following formula:
PMI(x, y) = log( p(x, y) / ( p(x) · p(y) ) )
wherein PMI(x, y) represents the pointwise mutual information between the high-frequency word x and the high-frequency word y, p(x, y) represents the probability that x and y appear simultaneously, p(x) represents the probability that x appears, and p(y) represents the probability that y appears.
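A small sketch of the extraction step, combining the word-frequency threshold with PMI(x, y) = log(p(x, y) / (p(x) p(y))). Estimating the probabilities from document-level co-occurrence counts, and the threshold values, are simplifying assumptions for illustration.

```python
import math
from collections import Counter

def candidate_associates(docs, freq_min, pmi_min):
    # Keep words whose document frequency clears freq_min, then keep
    # pairs whose PMI(x, y) = log(p(x, y) / (p(x) * p(y))) clears pmi_min.
    n = len(docs)
    freq = Counter(w for d in docs for w in set(d))
    high = sorted(w for w, c in freq.items() if c >= freq_min)
    cands = set()
    for i, x in enumerate(high):
        for y in high[i + 1:]:
            p_xy = sum(1 for d in docs if x in d and y in d) / n
            if p_xy == 0:
                continue
            if math.log(p_xy / ((freq[x] / n) * (freq[y] / n))) >= pmi_min:
                cands.update({x, y})
    return cands

docs = [["fund", "bond"], ["fund", "bond"], ["stock"], ["stock"]]
cands = candidate_associates(docs, freq_min=2, pmi_min=0.5)
```

Here "fund" and "bond" always co-occur, so their PMI is log 2 ≈ 0.69 and both survive, while "stock" never co-occurs with them and is dropped.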
And fourthly, performing vector conversion on the candidate associated word set by using the standard vector conversion model to obtain an associated word vector set, and performing associated word association on the target keywords in the candidate associated word set based on the similarity of each vector in the associated word vector set to obtain an associated vocabulary diagram.
In detail, the performing vector conversion on the candidate related word set by using the standard vector conversion model to obtain a related word vector set, and performing related word association on the target keyword in the candidate related word set based on the similarity of each vector in the related word vector set to obtain a related vocabulary diagram includes:
selecting a target keyword from the candidate associated word set based on a user instruction;
performing vector conversion on target keywords and non-target keywords in the candidate associated word set by using the standard vector conversion model to obtain an associated word vector set comprising target vectors and non-target vectors;
calculating the similarity between the non-target vector and the target vector, and taking the non-target vector with the similarity larger than or equal to a preset similarity threshold as a first associated vector;
calculating the similarity between the non-target vector and the first association vector, and taking the non-target vector with the similarity larger than or equal to a preset similarity threshold value as a second association vector;
taking the target keyword corresponding to the target vector as a root node, taking the candidate associated word corresponding to the first associated vector as a first associated node, and taking the candidate associated word corresponding to the second associated vector as a second associated node;
And connecting the root node with the first associated node, and connecting the first associated node with the second associated node to obtain the associated vocabulary diagram.
In the embodiment of the invention, vector conversion is carried out through the standard vector conversion model, so that semantic features can be incorporated into the vectors and the accuracy of semantic association is improved.
In an alternative embodiment of the invention, the similarity between vectors may be calculated by a cosine similarity algorithm.
According to the invention, a plurality of semantic feature layers are added to the pre-constructed basic vector conversion network, and model training is carried out on the original vector conversion model by using a business text data set drawn from real business, so that noise data can be reduced, the association relations between semantics can be learned, and the accuracy of vocabulary association is improved. Meanwhile, a candidate associated word set is extracted through word frequency and pointwise mutual information values, the standard vector conversion model is then used to carry out vector conversion on the candidate associated words, and the closeness of each candidate associated word is further determined through the similarity between vectors to obtain an associated vocabulary diagram, which accurately and intuitively reflects the association relations between words. Therefore, the text keyword association apparatus provided by the invention can improve the accuracy of text keyword association.
Fig. 3 is a schematic structural diagram of an electronic device for implementing the text keyword association method according to an embodiment of the present invention.
The electronic device may comprise a processor 10, a memory 11, a communication interface 12 and a bus 13, and may further comprise a computer program, such as a text keyword association program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as codes of text keyword association programs, etc., but also to temporarily store data that has been output or will be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips. The processor 10 is the control unit (Control Unit) of the electronic device; it connects the various components of the electronic device by using various interfaces and lines, and executes various functions of the electronic device and processes data by running or executing programs or modules (e.g., the text keyword association program) stored in the memory 11 and calling data stored in the memory 11.
The communication interface 12 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which are typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
The bus 13 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 13 may be divided into an address bus, a data bus, a control bus, etc. The bus 13 is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 3 shows only an electronic device having components, and those skilled in the art will appreciate that the structure shown in fig. 3 does not constitute a limitation of the electronic device, and may include fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The text keyword association program stored in the memory 11 of the electronic device is a combination of instructions, which when executed in the processor 10, can implement:
adding a plurality of semantic feature layers in a pre-constructed basic vector conversion network to obtain an original vector conversion model;
acquiring a service text data set, and performing model training on the original vector conversion model by using the service text data set to obtain a standard vector conversion model;
acquiring a text to be associated, and extracting a candidate associated word set from the text to be associated based on the word frequency and the pointwise mutual information value of the words in the text to be associated;
and performing vector conversion on the candidate associated word set by using the standard vector conversion model to obtain an associated word vector set, and performing associated word association on target keywords in the candidate associated word set based on the similarity of each vector in the associated word vector set to obtain an associated vocabulary diagram.
Specifically, the specific implementation method of the instruction by the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to the drawings, which is not described herein again.
Further, if the integrated module/unit of the electronic device is implemented in the form of a software functional unit and sold or used as a separate product, it may be stored in a computer-readable storage medium. The computer-readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a read-only memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
adding a plurality of semantic feature layers in a pre-constructed basic vector conversion network to obtain an original vector conversion model;
acquiring a service text data set, and performing model training on the original vector conversion model by using the service text data set to obtain a standard vector conversion model;
acquiring a text to be associated, and extracting a candidate associated word set from the text to be associated based on the word frequency and the pointwise mutual information value of the words in the text to be associated;
and performing vector conversion on the candidate associated word set by using the standard vector conversion model to obtain an associated word vector set, and performing associated word association on target keywords in the candidate associated word set based on the similarity of each vector in the associated word vector set to obtain an associated vocabulary diagram.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The embodiment of the invention can acquire and process related data based on artificial intelligence technology. Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain the best results.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
Blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain, which is essentially a decentralized database, is a series of data blocks associated by a cryptographic method, each data block containing the information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. Terms such as first and second are used to denote names, and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A method for associating text keywords, the method comprising:
adding a plurality of semantic feature layers in a pre-constructed basic vector conversion network to obtain an original vector conversion model;
acquiring a service text data set, and performing model training on the original vector conversion model by using the service text data set to obtain a standard vector conversion model;
acquiring a text to be associated, and extracting a candidate associated word set from the text to be associated based on the word frequency and the pointwise mutual information value of the vocabulary in the text to be associated;
and performing vector conversion on the candidate associated word set by using the standard vector conversion model to obtain an associated word vector set, and performing associated word association on target keywords in the candidate associated word set based on the similarity of each vector in the associated word vector set to obtain an associated vocabulary diagram.
2. The method for associating text keywords according to claim 1, wherein the step of adding a plurality of semantic feature layers in the pre-constructed basic vector transformation network to obtain an original vector transformation model comprises:
adding an entity identification layer and a filter layer behind an input layer of the basic vector conversion network;
adding a dependency syntactic analysis layer and a part-of-speech tagging layer between a mapping layer and an output layer of the basic vector conversion network, wherein the dependency syntactic analysis layer is connected with the part-of-speech tagging layer in series;
and taking the model added with the entity identification layer, the filter layer, the dependency syntax analysis layer and the part of speech tagging layer as an original vector conversion model.
3. The method of associating text keywords according to claim 2, wherein the model training of the original vector conversion model using the service text data set to obtain a standard vector conversion model comprises:
performing sliding selection on texts in the service text data set by using a sliding window with a preset size to obtain a first training text set, and inputting the first training text set into an input layer of an original vector conversion model;
utilizing an entity recognition layer of the original vector conversion model to perform entity recognition and marking on texts in the first training text set, and deleting entities with preset marks to obtain a second training text set;
deleting preset types of texts in the second training text set by using the filter layer of the original vector conversion model to obtain a third training text set;
performing one-hot coding on the texts in the third training text set to obtain a coding vector set;
carrying out weighted average on vectors in the coding vector set by utilizing a mapping layer of the original vector conversion model to obtain weighted vectors;
performing semantic splicing on the weighted vector by using a dependency syntax analysis layer and a part-of-speech tagging layer of the original vector conversion model to obtain a spliced vector;
and outputting the prediction probability of the spliced vector by using an output layer of the original vector conversion model, adjusting the model parameters of the original vector conversion model when the prediction probability is smaller than a pre-constructed prediction threshold, returning to the step of performing weighted average on the vectors in the coding vector set by using a mapping layer of the original vector conversion model until the original vector conversion model is converged, and stopping training to obtain the standard vector conversion model.
4. The method for associating text keywords according to claim 3, wherein the performing semantic splicing on the weighted vector by using the dependency syntax analysis layer and the part-of-speech tagging layer of the original vector conversion model to obtain a spliced vector comprises:
marking the grammatical relation among the words in the weighted vector by utilizing the dependency syntax analysis layer to obtain a grammatical relation vector;
marking the part-of-speech relation among the words in the weighted vector by using the part-of-speech tagging layer to obtain a part-of-speech tagging vector;
and splicing the grammatical relation vector and the part-of-speech tagging vector to obtain the spliced vector.
5. The method for associating text keywords according to claim 1, wherein the extracting a candidate associated word set from the text to be associated based on the word frequency and the pointwise mutual information values of the words in the text to be associated comprises:
performing word segmentation on the text to be associated, counting the word frequency of each segmented word, and taking the segmented words whose word frequency is greater than or equal to a preset word frequency threshold as high-frequency words;
and calculating the pointwise mutual information between the high-frequency words, taking the high-frequency words whose pointwise mutual information is greater than or equal to a preset information threshold as candidate associated words, and summarizing all the candidate associated words to obtain the candidate associated word set.
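As a hedged sketch of the word-frequency filtering and pointwise mutual information computation in claim 5, using the standard definition PMI(x, y) = log(p(x, y) / (p(x) p(y))); the toy corpus and the choice of adjacent-pair co-occurrence counting are assumptions (the claim does not fix a co-occurrence window):

```python
import math
from collections import Counter

def high_frequency_words(segments, min_freq):
    """Keep segmented words whose frequency meets the preset threshold (claim 5)."""
    counts = Counter(segments)
    return {w for w, c in counts.items() if c >= min_freq}, counts

def pmi(pair_count, count_x, count_y, total):
    """Pointwise mutual information: log( p(x,y) / (p(x) * p(y)) )."""
    p_xy = pair_count / total
    return math.log(p_xy / ((count_x / total) * (count_y / total)))

# Toy corpus as a flat list of segmented words.
segments = ["risk", "control", "risk", "control", "report", "risk", "control"]
high, counts = high_frequency_words(segments, min_freq=2)

# Co-occurrence counted over adjacent word pairs (an illustrative choice).
pairs = Counter(zip(segments, segments[1:]))
total = len(segments)
score = pmi(pairs[("risk", "control")], counts["risk"], counts["control"], total)
print(sorted(high), round(score, 3))
```

Pairs whose PMI clears the preset information threshold would then be collected as candidate associated words.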
6. The method for associating text keywords according to claim 1, wherein the vector conversion is performed on the candidate associated word set by using the standard vector conversion model to obtain an associated word vector set, and associated word association is performed on target keywords in the candidate associated word set based on similarity of vectors in the associated word vector set to obtain an associated vocabulary diagram, including:
selecting a target keyword from the candidate associated word set based on a user instruction;
performing vector conversion on target keywords and non-target keywords in the candidate associated word set by using the standard vector conversion model to obtain an associated word vector set comprising target vectors and non-target vectors;
calculating the similarity between the non-target vector and the target vector, and taking the non-target vector with the similarity larger than or equal to a preset similarity threshold value as a first association vector;
calculating the similarity between the non-target vector and the first association vector, and taking the non-target vector with the similarity larger than or equal to a preset similarity threshold value as a second association vector;
taking the target keyword corresponding to the target vector as a root node, taking the candidate associated word corresponding to the first associated vector as a first associated node, and taking the candidate associated word corresponding to the second associated vector as a second associated node;
and connecting the root node with the first associated node, and connecting the first associated node with the second associated node to obtain the associated vocabulary diagram.
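The two-level association of claim 6 can be sketched as follows; cosine similarity, the toy embeddings, and the 0.8 threshold are illustrative assumptions (the claim only requires a similarity measure and a preset threshold):

```python
import math

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def build_association_graph(vectors, target, threshold):
    """Edges of the associated vocabulary diagram (claim 6): the target
    keyword is the root node, words similar to it are first-level nodes,
    and remaining words similar to a first-level node are second-level."""
    others = {w: v for w, v in vectors.items() if w != target}
    first = [w for w, v in others.items() if cosine(v, vectors[target]) >= threshold]
    edges = [(target, w) for w in first]
    for f in first:
        for w, v in others.items():
            if w not in first and cosine(v, vectors[f]) >= threshold:
                edges.append((f, w))
    return edges

vectors = {                     # illustrative toy embeddings, not real model output
    "loan":   [1.0, 0.0, 0.0],
    "credit": [4.0, 3.0, 0.0],  # cos with "loan" = 0.8   -> first-level node
    "score":  [3.0, 4.0, 0.0],  # cos with "credit" = 0.96 -> second-level node
    "report": [0.0, 0.0, 5.0],  # unrelated
}
edges = build_association_graph(vectors, "loan", threshold=0.8)
print(edges)
```

Connecting the root to first-level nodes and first-level to second-level nodes yields the edge list of the associated vocabulary diagram.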
7. The method of text keyword association according to claim 4, wherein the weighted vector is calculated by the following formula:

V(t) = (1/n) · Σ_{k=1}^{n} W1_k E_k

wherein V(t) represents the weighted vector, E_k denotes the kth text vector, W1_k represents the first weight matrix corresponding to the kth text vector, and n represents the number of texts in the third training text set.
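As a hedged illustration of the weighted-average computation in claim 7, with the weight matrices W1_k reduced to scalar weights for readability (an assumption — the claim recites matrices):

```python
def weighted_average(text_vectors, weights):
    """V(t) = (1/n) * sum_k W1_k * E_k: each encoded text vector E_k is
    scaled by its weight (a scalar here, standing in for the first weight
    matrix W1_k) and the results are averaged over the n texts."""
    n = len(text_vectors)
    dim = len(text_vectors[0])
    v_t = [0.0] * dim
    for w, vec in zip(weights, text_vectors):
        for i, x in enumerate(vec):
            v_t[i] += w * x / n
    return v_t

E = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three encoded text vectors (n = 3)
W = [1.0, 2.0, 3.0]                        # illustrative scalar weights
v_t = weighted_average(E, W)
print(v_t)
```

With matrix weights, each `w * x` term would become a matrix-vector product, but the averaging over n is unchanged.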
8. An apparatus for associating text keywords, the apparatus comprising:
the model construction module is used for adding a plurality of semantic feature layers in a pre-constructed basic vector conversion network to obtain an original vector conversion model;
the model training module is used for acquiring a service text data set, and performing model training on the original vector conversion model by using the service text data set to obtain a standard vector conversion model;
the candidate associated word extracting module is used for acquiring a text to be associated and extracting a candidate associated word set from the text to be associated based on the word frequency and the pointwise mutual information values of the words in the text to be associated;
and the text keyword association module is used for performing vector conversion on the candidate associated word set by using the standard vector conversion model to obtain an associated word vector set, and performing associated word association on the target keywords in the candidate associated word set based on the similarity of each vector in the associated word vector set to obtain an associated vocabulary diagram.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the text keyword association method of any one of claims 1 to 7.
10. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements the text keyword association method according to any one of claims 1 to 7.
CN202211149555.3A 2022-09-21 2022-09-21 Text keyword association method, device, equipment and storage medium Pending CN115510188A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211149555.3A CN115510188A (en) 2022-09-21 2022-09-21 Text keyword association method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211149555.3A CN115510188A (en) 2022-09-21 2022-09-21 Text keyword association method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115510188A true CN115510188A (en) 2022-12-23

Family

ID=84503383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211149555.3A Pending CN115510188A (en) 2022-09-21 2022-09-21 Text keyword association method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115510188A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116070001A (en) * 2023-02-03 2023-05-05 深圳市艾莉诗科技有限公司 Information directional grabbing method and device based on Internet
CN116070001B (en) * 2023-02-03 2023-12-19 深圳市艾莉诗科技有限公司 Information directional grabbing method and device based on Internet

Similar Documents

Publication Publication Date Title
CN112988963A (en) User intention prediction method, device, equipment and medium based on multi-process node
CN113704429A (en) Semi-supervised learning-based intention identification method, device, equipment and medium
CN114021582B (en) Spoken language understanding method, device, equipment and storage medium combined with voice information
CN114781402A (en) Method and device for identifying inquiry intention, electronic equipment and readable storage medium
CN115392237B (en) Emotion analysis model training method, device, equipment and storage medium
CN116245097A (en) Method for training entity recognition model, entity recognition method and corresponding device
CN113807973A (en) Text error correction method and device, electronic equipment and computer readable storage medium
CN114840684A (en) Map construction method, device and equipment based on medical entity and storage medium
CN113360654B (en) Text classification method, apparatus, electronic device and readable storage medium
CN113821622A (en) Answer retrieval method and device based on artificial intelligence, electronic equipment and medium
CN115510188A (en) Text keyword association method, device, equipment and storage medium
CN116628162A (en) Semantic question-answering method, device, equipment and storage medium
CN115525750A (en) Robot phonetics detection visualization method and device, electronic equipment and storage medium
CN114708073B (en) Intelligent detection method and device for surrounding mark and serial mark, electronic equipment and storage medium
CN115346095A (en) Visual question answering method, device, equipment and storage medium
CN115221323A (en) Cold start processing method, device, equipment and medium based on intention recognition model
CN115146064A (en) Intention recognition model optimization method, device, equipment and storage medium
CN114780688A (en) Text quality inspection method, device and equipment based on rule matching and storage medium
CN114548114A (en) Text emotion recognition method, device, equipment and storage medium
CN113806540A (en) Text labeling method and device, electronic equipment and storage medium
CN113808616A (en) Voice compliance detection method, device, equipment and storage medium
CN111859985A (en) AI customer service model testing method, device, electronic equipment and storage medium
CN115204120B (en) Insurance field triplet extraction method and device, electronic equipment and storage medium
CN111680513B (en) Feature information identification method and device and computer readable storage medium
CN116579349A (en) Text semantic segmentation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination