WO2021164200A1 - Intelligent semantic matching method and device based on deep hierarchical coding - Google Patents

Intelligent semantic matching method and device based on deep hierarchical coding

Info

Publication number
WO2021164200A1
Authority
WO
WIPO (PCT)
Prior art keywords
sentence
matching
layer
training
vector
Prior art date
Application number
PCT/CN2020/104724
Other languages
English (en)
French (fr)
Inventor
鹿文鹏
于瑞
张旭
乔新晓
成金勇
王灿
Original Assignee
齐鲁工业大学 (Qilu University of Technology)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 齐鲁工业大学 (Qilu University of Technology)
Publication of WO2021164200A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods

Definitions

  • The invention relates to the technical fields of artificial intelligence and natural language processing, and in particular to an intelligent semantic matching method and device based on deep hierarchical coding.
  • Although convolutional neural networks are good at capturing and representing local features with different kernel functions, they ignore the sequence information in text and are not suitable for sequence-processing tasks; although recurrent neural networks can process sequence information, most of them only generate a final vector representation without considering the hierarchical structure of the sentence, which may lose important intermediate coding information. For the sentence semantic matching task, both the order of the words in a sentence and the hierarchical information of the sentence are crucial. Therefore, methods based purely on convolutional or recurrent neural network models can hardly obtain satisfactory results.
  • The technical task of the present invention is to provide an intelligent semantic matching method and device based on deep hierarchical coding that captures more semantic context information and inter-sentence interaction information and implements a new hierarchical feature interactive matching mechanism, finally achieving intelligent semantic matching of sentences.
  • The technical task of the present invention is realized in the following way: an intelligent semantic matching method based on deep hierarchical coding, in which a sentence matching model composed of an embedding layer, a deep hierarchical coding representation layer, a hierarchical feature interactive matching layer and a prediction layer is constructed and trained; the model realizes a deep hierarchical coding representation of sentences, obtains more semantic context information and inter-sentence interaction information, and implements a new hierarchical feature interactive matching mechanism to achieve the goal of intelligent semantic matching of sentences; specifically as follows:
  • the embedding layer embeds the input sentences and passes the result to the deep hierarchical coding representation layer;
  • the deep hierarchical coding representation layer encodes the result of the embedding operation to obtain two different feature coding representations: the intermediate coding representation feature of the sentence and the final coding representation feature of the sentence;
  • the hierarchical feature interactive matching layer performs matching processing on the intermediate coding representation feature and the final coding representation feature of the sentences to obtain a matching representation vector;
  • in the prediction layer, a fully connected layer maps the matching representation vector once, and a sigmoid layer then maps the result to a value in a specified interval as the matching degree value; whether the semantics of the input sentence pair match is determined by comparing the matching degree value with a set threshold.
  • The embedding layer is used to construct a character mapping conversion table, construct an input layer, and construct a word vector mapping layer.
  • Construct the character mapping conversion table: the mapping rule is to start with the number 1 and then number each character in ascending order according to the order in which it is entered into the character table, thereby forming the required character mapping conversion table; the character table is built from the sentence matching knowledge base; thereafter, the present invention uses Word2Vec to train a word vector model and obtain the word vector matrix embedding_matrix for all characters.
  • Construct the input layer: the input layer has two inputs; the input sentences sentence1 and sentence2 are formalized as (sentence1, sentence2); each word in an input sentence is converted into the corresponding numeric representation according to the character mapping table.
  • Construct the word vector mapping layer: the word vector matrix weights trained in the character mapping conversion table step are loaded to initialize the weight parameters of this layer; for the input sentences sentence1 and sentence2, the corresponding sentence vectors sentence1_emd and sentence2_emd are obtained; every sentence in the sentence matching knowledge base can be transformed into vector form through word vector mapping.
  • The construction process of the deep hierarchical coding representation layer is specifically as follows: the intermediate coding representation feature of the sentence is obtained by encoding the sentence twice with a bidirectional long short-term memory network (BiLSTM) and concatenating the semantic features from the two encodings, where i denotes the relative position of the corresponding word vector in the sentence and p_i is the vector representation of each word in the sentence; the result of the vector concatenation is the intermediate coding representation feature of the sentence;
  • the final coding representation feature of the sentence: a convolutional neural network (CNN) further encodes the output intermediate coding representation feature, and its output serves as the final coding representation feature of the sentence (the formulas are given in the detailed description below).
  • The hierarchical feature interactive matching layer is used to construct the hierarchical feature interactive matching mechanism; the mechanism takes the vector representations of the intermediate coding features of sentence1 and sentence2 and the vector representations of their final coding features, obtained from the deep hierarchical coding representation layer, and matches these two types of vectors from different angles to generate the matching representation vector; the details are as follows:
  • the average vector representations of the corresponding sentence vectors are computed; one quantity is the absolute values of the element-wise differences between the intermediate coding feature vectors after each is reduced by its average value; another is the absolute values of the element-wise differences between the final coding feature vectors after each is reduced by its average value; these quantities are then integrated element by element (the formulas are given in the detailed description below).
  • The construction process of the prediction layer is as follows:
  • the matching representation vector obtained while constructing the hierarchical feature interactive matching mechanism is fed into the prediction layer to determine whether the semantics of the sentence pair match; in the prediction layer, the matching representation vector is processed by a fully connected layer and then by a sigmoid layer; to prevent over-fitting, dropout is set to 0.5 in the fully connected layer, and the sigmoid function computes the matching degree from the dropout-processed output of the fully connected layer;
  • a matching degree y_pred in [0,1] is obtained and finally compared with the established threshold (0.5) to determine whether the semantics of the sentence pair match: when y_pred > 0.5 the pair is judged to match semantically, and when y_pred < 0.5 it is judged not to match.
  • The construction of the sentence matching knowledge base is specifically as follows:
  • Use a crawler to obtain raw data: crawl question sets on public online question-answering platforms to obtain the original similar-sentence knowledge base, or use a sentence matching data set published on the Internet as the original similar-sentence knowledge base;
  • Preprocess the raw data: preprocess the similar sentence pairs in the original similar-sentence knowledge base, performing a hyphenation (character-splitting) or word segmentation operation on each sentence to obtain the sentence matching knowledge base;
  • the sentence matching model is obtained by training with a training data set, which is constructed as follows:
  • Construct training positive examples: combine a sentence with its corresponding standard sentence to construct a positive example, formalized as (sentence1, sentence2, 1), where sentence1 denotes sentence 1, sentence2 denotes sentence 2, and 1 indicates that the semantics of sentence 1 and sentence 2 match, i.e., a positive example;
  • Construct training negative examples: select a sentence s1, then randomly select from the knowledge base a sentence s2 that does not match s1, and combine s1 and s2 to construct a negative example, formalized as (sentence1, sentence2, 0), where sentence1 denotes sentence s1, sentence2 denotes sentence s2, and 0 indicates that the semantics of s1 and s2 do not match, i.e., a negative example;
  • Construct the training data set: combine all positive and negative sample sentence pairs obtained from the above two operations and shuffle their order to construct the final training data set; both positive and negative data contain three dimensions, namely sentence1, sentence2, and 0 or 1.
  • After construction, the sentence matching model is trained and optimized on the training data set, specifically as follows:
  • y_pred is the matching degree computed after processing by the hierarchical feature interactive matching mechanism;
  • y_true is the true label of whether the semantics of the two sentences match, its value limited to 0 or 1.
  • An intelligent semantic matching device based on deep hierarchical coding, comprising:
  • a sentence matching knowledge base construction unit, used to crawl question sets on public online question-answering platforms with a crawler to obtain an original similar-sentence knowledge base, and then to preprocess it by hyphenation or word segmentation, thereby constructing the sentence matching knowledge base for model training;
  • a training data set generation unit, used to construct training positive example data and training negative example data from the sentences in the sentence matching knowledge base, and to build the final training data set from the positive and negative example data;
  • a sentence matching model construction unit, used to construct the character mapping conversion table, the input layer and the word vector mapping layer through the embedding layer, to construct the deep hierarchical coding representation layer, to build the sentence hierarchical feature interactive matching mechanism through the hierarchical feature interactive matching layer, and to construct the prediction layer;
  • the sentence matching model construction unit includes:
  • a character mapping conversion table construction subunit, used to segment each sentence in the sentence matching knowledge base by character and store each character in turn in a list, thereby obtaining a character table, and then, starting with the number 1, to number each character in ascending order according to the order in which it is entered into the character table, thereby forming the required character mapping conversion table; by constructing the character mapping conversion table, each character in the training data set is mapped to a unique numeric identifier; thereafter, the present invention uses Word2Vec to train a word vector model and obtain the word vector matrix embedding_matrix for all characters;
  • an input layer construction subunit, used to formalize the input sentences sentence1 and sentence2 as (sentence1, sentence2);
  • a word vector mapping layer construction subunit, used to load the word vector matrix weights trained by the character mapping conversion table construction subunit to initialize the weight parameters of this layer; for the input sentences sentence1 and sentence2, the corresponding sentence vectors sentence1_emd and sentence2_emd are obtained;
  • every sentence in the sentence matching knowledge base can transform its information into vector form through word vector mapping;
  • a deep hierarchical coding representation layer subunit, used to encode the input data and extract its semantics; the bidirectional long short-term memory network encodes the sentence twice, and the semantic features obtained by the two encodings are concatenated to obtain the intermediate coding feature vector of the sentence; the convolutional neural network performs a further encoding operation on the intermediate coding feature vector, and its output serves as the final coding feature vector of the sentence;
  • a hierarchical feature interactive matching mechanism construction subunit, used to interactively match the coding representation features obtained at different layers for each sentence of the sentence pair, generating the final matching representation vector;
  • a prediction layer subunit, used to process the matching representation vector to obtain a matching degree value, which is compared with the established threshold to determine whether the semantics of the sentence pair match;
  • a sentence matching model training unit, used to construct the loss function needed in the model training process and to complete the optimization training of the model.
  • The sentence matching knowledge base construction unit includes:
  • a data crawling subunit, used to crawl question sets on public online question-answering platforms to build the original similar-sentence knowledge base;
  • a crawled-data processing subunit, used to hyphenate or segment the sentences in the original similar-sentence knowledge base, thereby constructing the sentence matching knowledge base for model training;
  • the training data set generation unit includes:
  • a training positive example data construction subunit, used to combine the semantically matched sentences in the sentence matching knowledge base and add the matching label 1 to them, constructing the training positive example data;
  • a training negative example data construction subunit, used to select a sentence s1 from the sentence matching knowledge base, then randomly select from the knowledge base a sentence s2 that does not match s1 semantically, combine s1 with s2 and add the matching label 0, constructing the training negative example data;
  • a training data set construction subunit, used to combine all the training positive example data and training negative example data and shuffle their order to construct the final training data set;
  • the sentence matching model training unit includes:
  • a loss function construction subunit, used to compute the error of whether the semantics of sentence 1 and sentence 2 match;
  • an optimization training subunit, used to train and adjust the parameters during model training, thereby reducing the error between the degree of semantic matching between sentence 1 and sentence 2 predicted during training and the true matching label.
  • A storage medium storing a plurality of instructions, the instructions being loaded by a processor to execute the steps of the above intelligent semantic matching method based on deep hierarchical coding.
  • An electronic device, comprising: the above storage medium; and
  • a processor configured to execute the instructions in the storage medium.
  • The present invention realizes a deep hierarchical coding representation of sentences, which can capture more semantic context information and inter-sentence interaction information; at the same time, it realizes a new hierarchical feature interactive matching mechanism, which further strengthens the interaction between sentences and effectively improves the accuracy with which the model predicts the intrinsic semantic matching between sentences;
  • the present invention can capture and use semantic features at different levels of a sentence as well as inter-sentence interaction information, and make more reasonable judgments on the matching of sentences;
  • the present invention can use the deep hierarchical coding representation layer to generate the intermediate coding representation feature and the final coding representation feature of a sentence, which helps to capture deep semantic features in the sentence, thereby effectively improving the comprehensiveness and accuracy of the sentence's semantic representation;
  • the hierarchical feature interactive matching mechanism proposed by the present invention can compute the matching degree of sentence semantic features at different levels separately, thereby improving the accuracy of sentence semantic matching;
  • the present invention can extract the semantic information contained in a sentence from multiple angles: the intermediate coding representation feature and the final coding representation feature generated by the deep hierarchical coding representation layer are processed by the hierarchical feature interactive matching mechanism, i.e., a representation vector of one view is computed from the intermediate coding features of the sentence pair and a representation vector of another view from their final coding features, the two resulting vectors are multiplied element by element, and the complete matching representation vector of the sentence pair is finally obtained; this effectively improves the accuracy of sentence semantic matching and the accuracy with which the model predicts it;
  • the present invention can express a sentence as a compact latent representation that contains rich semantic information.
  • Figure 1 is a flow chart of the intelligent semantic matching method based on deep hierarchical coding;
  • Figure 2 is a block diagram of the process of constructing the sentence matching knowledge base;
  • Figure 3 is a block diagram of the process of constructing the training data set;
  • Figure 4 is a block diagram of the process of constructing the sentence matching model;
  • Figure 5 is a block diagram of the process of training the sentence matching model;
  • Figure 6 is a structural block diagram of the intelligent semantic matching device based on the deep hierarchical coding representation layer;
  • Figure 7 is a schematic comparison of the influence of different word vector dimensions on model performance;
  • Figure 8 is a block diagram of the process of constructing the deep hierarchical coding representation layer;
  • Figure 9 is a schematic diagram of the framework of the intelligent semantic matching model based on deep hierarchical coding.
  • The intelligent semantic matching method based on deep hierarchical coding of the present invention constructs and trains a sentence matching model composed of an embedding layer, a deep hierarchical coding representation layer, a hierarchical feature interactive matching layer and a prediction layer;
  • the sentence matching model realizes a deep hierarchical coding representation of sentences, obtains more semantic context information and inter-sentence interaction information, and at the same time realizes a new hierarchical feature interactive matching mechanism to achieve intelligent semantic matching of sentences; specifically as follows:
  • the embedding layer embeds the input sentences and passes the result to the deep hierarchical coding representation layer;
  • the deep hierarchical coding representation layer encodes the result of the embedding operation, obtaining two different feature coding representations: the intermediate coding representation feature of the sentence and the final coding representation feature of the sentence;
  • the hierarchical feature interactive matching layer performs matching processing on the intermediate coding representation feature and the final coding representation feature of the sentences to obtain a matching representation vector;
  • the specific steps of the intelligent semantic matching method based on deep hierarchical coding of the present invention are as follows:
  • Example: an example of a similar sentence pair from a bank's question-answering platform is shown in the table below;
  • alternatively, a sentence matching data set publicly available on the Internet is used as the original knowledge base,
  • e.g. the BQ data set [J. Chen, Q. Chen, X. Liu, H. Yang, D. Lu, B. Tang, The BQ Corpus: A Large-scale Domain-specific Chinese Corpus for Sentence Semantic Equivalence Identification, EMNLP 2018];
  • this data set contains 120,000 question pairs from online banking service logs and is a Chinese data set specially built for sentence semantic matching tasks;
  • the BQ data set is currently the largest manually annotated Chinese data set in the banking field; it is very useful for research on the semantic matching of Chinese questions, and it is publicly available.
  • Preprocess the raw data: preprocess the similar sentence pairs in the original similar-sentence knowledge base, performing hyphenation or word segmentation on each sentence to obtain the sentence matching knowledge base.
  • the similar sentence pairs obtained in step S101 are preprocessed to obtain the sentence matching knowledge base;
  • taking the hyphenation operation as an example, each Chinese character is used as the basic unit and each piece of data is split character by character: every Chinese character is separated by a space, and all content of each piece of data, including numbers, punctuation and special characters, is retained;
  • all stop words in the sentences are retained;
  • S202, construct training negative examples: select a sentence s1, then randomly select from the knowledge base a sentence s2 that does not match s1, and combine s1 and s2 to construct a negative example, formalized as (sentence1, sentence2, 0), where sentence1 denotes sentence s1, sentence2 denotes sentence s2, and 0 indicates that the semantics of s1 and s2 do not match, i.e., a negative example;
  • the constructed negative example is:
  • step S203, construct the training data set: combine all the positive and negative sample sentence pairs obtained from steps S201 and S202 and shuffle their order, thereby constructing the final training data set; both positive and negative data contain three dimensions, namely sentence1, sentence2, and 0 or 1.
  • Construct the sentence matching model: the main operations are constructing the character mapping conversion table, the input layer, the word vector mapping layer, the deep hierarchical coding representation layer of the sentence, the hierarchical feature interactive matching mechanism, and the prediction layer.
  • the three sub-steps of constructing the character mapping conversion table, the input layer and the word vector mapping layer correspond to the embedding layer in Fig. 9; the sub-step of constructing the deep hierarchical coding representation layer of the sentence corresponds to the deep hierarchical coding representation layer in Fig. 9;
  • the sub-step of constructing the hierarchical feature interactive matching mechanism corresponds to the hierarchical feature interactive matching layer in Fig. 9, and the sub-step of constructing the prediction layer corresponds to the prediction layer in Fig. 9; as shown in Fig. 4, the specific steps are as follows:
  • step S301, construct the character mapping conversion table: the character table is built from the sentence matching knowledge base obtained in step S102; after the character table is built, each character in it is mapped to a unique numeric identifier;
  • the mapping rule is to start with the number 1 and number each character in ascending order according to the order in which it is entered into the character table, thus forming the required character mapping conversion table;
  • the present invention then uses Word2Vec to train a word vector model and obtain the word vector matrix embedding_matrix for all characters, e.g. embedding_matrix = numpy.zeros([len(tokenizer.word_index) + 1, embedding_dim]);
  • w2v_corpus is the training corpus, i.e., all data in the sentence matching knowledge base; embedding_dim is the word vector dimension, and different values of embedding_dim yield noticeably different results; as shown in Figure 7, with the other parameters fixed, different embedding_dim values bring different performance; when embedding_dim is set to 400, Recall, F1-score and Accuracy all reach their relatively best results and Precision also stays at a relatively high level, so the model finally sets embedding_dim to 400; word_set is the vocabulary.
  • Construct the word vector mapping layer: the weight parameters of this layer are initialized by loading the word vector matrix weights trained in step S301; for the input sentences sentence1 and sentence2, the corresponding sentence vectors sentence1_emd and sentence2_emd are obtained;
  • every sentence in the sentence matching knowledge base can transform its information into vector form through word vector mapping;
  • embedding_matrix is the word vector matrix weights trained in step S301;
  • embedding_matrix.shape[0] is the size of the vocabulary (dictionary) of the word vector matrix;
  • embedding_dim is the dimension of the output word vectors;
  • input_length is the length of the input sequence;
  • the sentences sentence1 and sentence2 are encoded by the Embedding layer to obtain the corresponding sentence vectors sentence1_emd and sentence2_emd;
  • this network layer is a general-purpose layer of the sentence-pair semantic matching model; it realizes the vector representation of every character in the knowledge base;
  • this layer processes sentence1 and sentence2 in exactly the same way, so the two are not explained separately.
  • the sentence representation model encodes and semantically extracts the sentences processed in step S303, so as to obtain the intermediate coding representation feature and the final coding representation feature of each sentence;
  • the best result is obtained when the coding dimension of this layer is set to 300; the specific steps are as follows:
  • i denotes the relative position of the corresponding word vector in the sentence;
  • p_i is the vector representation of each character in the sentence;
  • the result of the vector concatenation is the intermediate coding representation feature of the sentence;
  • the final coding representation feature of the sentence: a convolutional neural network (CNN) further encodes the output intermediate coding representation feature, and its output serves as the final coding representation feature of the sentence (formulas as given in the detailed description);
  • step S305, construct the hierarchical feature interactive matching mechanism: after the processing of step S304, the vector representations of the intermediate coding features and of the final coding features of sentence1 and sentence2 are obtained; based on these two types of vectors, matching is performed from different angles to generate the matching representation vector;
  • the average vector representations of the corresponding sentence vectors, the absolute values of the element-wise differences of the intermediate coding feature vectors, the absolute values of the element-wise differences of the final coding feature vectors, and their element-wise products are computed (see the formulas in the detailed description);
  • the present invention adopts the hierarchical feature interactive matching mechanism to fully capture multi-angle interactive matching features between sentence pairs;
  • step S306, construct the prediction layer: the matching representation vector obtained in step S305 is fed into the prediction layer to determine whether the semantics of the sentence pair match; in the prediction layer, the matching representation vector is processed by a fully connected layer and then by a sigmoid layer; to prevent over-fitting, dropout is set to 0.5 in the fully connected layer, and the sigmoid layer computes the matching degree from the dropout-processed output of the fully connected layer;
  • a matching degree y_pred in [0,1] is obtained,
  • and the semantic matching of the sentence pair is judged by comparison with the established threshold (0.5): when y_pred > 0.5 the pair is judged to match semantically, and when y_pred < 0.5 it is judged not to match.
  • on the BQ data set, the present invention achieves results superior to current advanced models; the comparison of experimental results is shown in Table 1;
  • as mentioned in step S102, the present invention can process sentences in two ways, namely hyphenation or word segmentation; accordingly, the HEM_char model in the table is the model obtained after hyphenating the sentences, and the HEM_word model is the model obtained after segmenting the sentences into words;
  • the model of the present invention is compared with existing models, and the experimental results show that the method of the present invention achieves a large improvement;
  • the first three rows are the experimental results of prior-art models [the first three rows of data come from: J. Chen, Q. Chen, X. Liu, H. Yang, D. Lu, B. Tang, The BQ Corpus: A Large-scale Domain-specific Chinese Corpus for Sentence Semantic Equivalence Identification, EMNLP 2018], and the last two rows are the experimental results of the present invention, which show that the present invention improves considerably over existing models.
  • the intelligent semantic matching device based on deep hierarchical coding according to Embodiment 2 includes:
  • a sentence matching knowledge base construction unit, used to crawl question sets on public online question-answering platforms with a crawler to obtain an original similar-sentence knowledge base and then preprocess it by hyphenation or word segmentation, thereby constructing the sentence matching knowledge base for model training; the sentence matching knowledge base construction unit includes:
  • a data crawling subunit, used to crawl question sets on public online question-answering platforms to build the original similar-sentence knowledge base;
  • a crawled-data processing subunit, used to hyphenate or segment the sentences in the original similar-sentence knowledge base, thereby constructing the sentence matching knowledge base for model training;
  • a training data set generation unit, used to construct training positive example data and training negative example data from the sentences in the sentence matching knowledge base and to build the final training data set from them; the training data set generation unit includes:
  • a training positive example data construction subunit, used to combine the semantically matched sentences in the sentence matching knowledge base and add the matching label 1 to them, constructing the training positive example data;
  • a training negative example data construction subunit, used to select a sentence s1 from the sentence matching knowledge base, then randomly select from the knowledge base a sentence s2 that does not match s1 semantically, combine s1 with s2 and add the matching label 0, constructing the training negative example data;
  • a training data set construction subunit, used to combine all the training positive example data and training negative example data and shuffle their order to construct the final training data set;
  • a sentence matching model construction unit, used to construct the character mapping conversion table, the input layer and the word vector mapping layer through the embedding layer, to construct the deep hierarchical coding representation layer, to build the sentence hierarchical feature interactive matching mechanism through the hierarchical feature interactive matching layer, and to construct the prediction layer;
  • the sentence matching model construction unit includes:
  • a character mapping conversion table construction subunit, used to segment each sentence in the sentence matching knowledge base by character and store each character in turn in a list, thereby obtaining a character table, and then, starting with the number 1, to number each character in ascending order according to the order in which it is entered into the character table, thereby forming the required character mapping conversion table; by constructing the character mapping conversion table, each character in the training data set is mapped to a unique numeric identifier; thereafter, the present invention uses Word2Vec to train a word vector model and obtain the word vector matrix embedding_matrix for all characters;
  • an input layer construction subunit, used to formalize the input sentences sentence1 and sentence2 as (sentence1, sentence2);
  • a word vector mapping layer construction subunit, used to load the word vector matrix weights trained by the character mapping conversion table construction subunit to initialize the weight parameters of this layer; for the input sentences sentence1 and sentence2, the corresponding sentence vectors sentence1_emd and sentence2_emd are obtained;
  • every sentence in the sentence matching knowledge base can transform its information into vector form through word vector mapping;
  • a deep hierarchical coding representation layer subunit, used to encode the input data and extract its semantics; the bidirectional long short-term memory network encodes the sentence twice, and the semantic features obtained by the two encodings are concatenated to obtain the intermediate coding feature vector of the sentence; the convolutional neural network performs a further encoding operation on the intermediate coding feature vector, and its output serves as the final coding feature vector of the sentence;
  • a hierarchical feature interactive matching mechanism construction subunit, used to interactively match the coding representation features obtained at different layers for each sentence of the sentence pair, generating the final matching representation vector;
  • a prediction layer subunit, used to process the matching representation vector to obtain a matching degree value, which is compared with the established threshold to determine whether the semantics of the sentence pair match;
  • a sentence matching model training unit, used to construct the loss function needed in the model training process and to complete the optimization training of the model; the sentence matching model training unit includes:
  • a loss function construction subunit, used to compute the error of whether the semantics of sentence 1 and sentence 2 match;
  • a storage medium in which a plurality of instructions are stored, the instructions being loaded by the processor to execute the steps of the intelligent semantic matching method based on deep hierarchical coding of the second embodiment;
  • the electronic device includes: the storage medium of Embodiment 4; and
  • a processor configured to execute the instructions in the storage medium of Embodiment 4.

Abstract

The invention discloses an intelligent semantic matching method and device based on deep hierarchical coding, belonging to the technical fields of artificial intelligence and natural language processing. The technical problem to be solved by the invention is how to capture more semantic context information and inter-sentence interaction information so as to achieve intelligent semantic matching of sentences. The technical solution adopted is as follows: the method constructs and trains a sentence matching model composed of an embedding layer, a deep hierarchical coding representation layer, a hierarchical feature interactive matching layer and a prediction layer, thereby realizing a deep hierarchical coding representation of sentences, obtaining more semantic context information and inter-sentence interaction information, and at the same time implementing a hierarchical feature interactive matching mechanism, so as to achieve the goal of intelligent semantic matching of sentences. The device comprises a sentence matching knowledge base construction unit, a training data set generation unit, a sentence matching model construction unit and a sentence matching model training unit.

Description

Intelligent semantic matching method and device based on deep hierarchical coding

Technical Field

The invention relates to the technical fields of artificial intelligence and natural language processing, and in particular to an intelligent semantic matching method and device based on deep hierarchical coding.

Background Art

In recent years, sentence semantic matching methods have received increasing attention in the field of natural language processing. The reason is that many natural language processing tasks are built on sentence semantic matching and can, to a certain extent, be regarded as extensions of the sentence semantic matching task. For example, the "automatic question answering" task can be handled by computing the degree of matching between a "question" and "candidate answers", and the "information retrieval" task can be viewed as computing the degree of matching between a "query sentence" and "matching documents". For this reason, sentence semantic matching plays a vital role in natural language processing. Measuring the intrinsic degree of semantic matching between sentences is a very challenging task, and so far the prior art has not substantially solved this problem.

Through analysis and research, it is easy to find that most of the prior art is based on convolutional neural network models or recurrent neural network models, whose own characteristics and limitations prevent them from thoroughly solving this problem. Although convolutional neural networks are good at capturing and representing local features with different kernel functions, they ignore the sequence information in text and are not suitable for sequence-processing tasks; although recurrent neural networks can process sequence information, most of them only generate a final vector representation without considering the hierarchical structure of the sentence, which may lose important intermediate coding information. For the sentence semantic matching task, both the order of the words in a sentence and the hierarchical information of the sentence are crucial; therefore, methods based purely on convolutional or recurrent neural network models can hardly obtain satisfactory results.

Hence, how to capture more semantic context information and inter-sentence interaction information, and how to implement a more effective semantic matching approach so as to improve the accuracy of intelligent semantic matching of sentences, is a technical problem urgently awaiting a solution.
Summary of the Invention

The technical task of the present invention is to provide an intelligent semantic matching method and device based on deep hierarchical coding that captures more semantic context information and inter-sentence interaction information and, by implementing a new hierarchical feature interactive matching mechanism, finally achieves intelligent semantic matching of sentences.

The technical task of the present invention is realized in the following way. An intelligent semantic matching method based on deep hierarchical coding constructs and trains a sentence matching model composed of an embedding layer, a deep hierarchical coding representation layer, a hierarchical feature interactive matching layer and a prediction layer, thereby realizing a deep hierarchical coding representation of sentences, obtaining more semantic context information and inter-sentence interaction information, and at the same time implementing a new hierarchical feature interactive matching mechanism so as to achieve the goal of intelligent semantic matching of sentences; specifically as follows:

the embedding layer embeds the input sentences and passes the result to the deep hierarchical coding representation layer;

the deep hierarchical coding representation layer encodes the result obtained by the embedding operation to produce two different feature coding representations: the intermediate coding representation feature of the sentence and the final coding representation feature of the sentence;

the hierarchical feature interactive matching layer performs matching processing on the intermediate coding representation feature and the final coding representation feature of the sentences to obtain a matching representation vector;

in the prediction layer, a fully connected layer maps the matching representation vector once, and a sigmoid layer then maps the result to a value in a specified interval as the matching degree value; whether the semantics of the input sentence pair match is determined according to the relative size of the matching degree value and a set threshold.
Preferably, the embedding layer is used to construct a character mapping conversion table, an input layer and a word vector mapping layer;

wherein, constructing the character mapping conversion table: the mapping rule is to start with the number 1 and then number each character in ascending order according to the order in which it is entered into the character table, thereby forming the required character mapping conversion table; the character table is built from the sentence matching knowledge base; thereafter, the present invention uses Word2Vec to train a word vector model and obtain the word vector matrix embedding_matrix for all characters;

constructing the input layer: the input layer has two inputs; the input sentences sentence1 and sentence2 are formalized as (sentence1, sentence2); each word in an input sentence is converted into the corresponding numeric representation according to the character mapping table;

constructing the word vector mapping layer: the word vector matrix weights trained in the character-mapping step are loaded to initialize the weight parameters of this layer; for the input sentences sentence1 and sentence2, the corresponding sentence vectors sentence1_emd and sentence2_emd are obtained; every sentence in the sentence matching knowledge base can transform its information into vector form through word vector mapping.
More preferably, the construction process of the deep hierarchical coding representation layer is specifically as follows:

Intermediate coding representation feature of the sentence: a bidirectional long short-term memory network (BiLSTM) encodes the sentence processed by the word vector mapping layer twice, and the semantic features obtained by the two encodings are concatenated; the formulas are as follows:

$h_i^{1} = \mathrm{BiLSTM}(p_i), \quad i \in [1, n]$

$h_i^{2} = \mathrm{BiLSTM}(h_i^{1}), \quad i \in [1, n]$

$h^{mid} = [h^{1}; h^{2}]$

where $i$ denotes the relative position of the corresponding word vector in the sentence (of length $n$); $p_i$ is the vector representation of each word in the sentence; $h^{1}$ is the sentence vector after the first BiLSTM encoding; $h^{2}$ is the sentence vector after the second BiLSTM encoding; and $h^{mid}$, the result of concatenating the two vectors, is the intermediate coding representation feature of the sentence;

Final coding representation feature of the sentence: a convolutional neural network (CNN) further encodes the output intermediate coding representation feature, and its output serves as the final coding representation feature of the sentence; the formula is as follows:

$h^{fin} = \mathrm{CNN}(h^{mid})$

where $h^{fin}$ is the final coding representation feature of the sentence after CNN encoding.
More preferably, the hierarchical feature interactive matching layer is used to construct the hierarchical feature interactive matching mechanism; after processing by the deep hierarchical coding representation layer, the vector representations of the intermediate coding features of sentence1 and sentence2, denoted $h_1^{mid}$ and $h_2^{mid}$, and the vector representations of their final coding features, denoted $h_1^{fin}$ and $h_2^{fin}$, are obtained; based on these two types of vectors, matching is performed from different angles to generate the matching representation vector; the details are as follows:

Compute $\vec{u}$, with the following formulas:

$u^{mid} = \left| h_1^{mid} - h_2^{mid} \right|$

$u^{fin} = \left| h_1^{fin} - h_2^{fin} \right|$

$\vec{u} = u^{mid} \odot u^{fin}$

where $u^{mid}$ denotes the absolute values of the element-wise differences between the intermediate coding feature vectors $h_1^{mid}$ and $h_2^{mid}$; $u^{fin}$ denotes the absolute values of the element-wise differences between the final coding feature vectors $h_1^{fin}$ and $h_2^{fin}$; and $\vec{u}$ is the value obtained by the element-wise product of $u^{mid}$ and $u^{fin}$;

Compute $\vec{v}$, with the following formulas:

$v^{mid} = \left| (h_1^{mid} - \bar{h}_1^{mid}) - (h_2^{mid} - \bar{h}_2^{mid}) \right|$

$v^{fin} = \left| (h_1^{fin} - \bar{h}_1^{fin}) - (h_2^{fin} - \bar{h}_2^{fin}) \right|$

$\vec{v} = v^{mid} \odot v^{fin}$

where $\bar{h}_1^{mid}$, $\bar{h}_2^{mid}$, $\bar{h}_1^{fin}$ and $\bar{h}_2^{fin}$ are the average-vector representations of the corresponding sentence vectors; $v^{mid}$ denotes the absolute values of the element-wise differences between the intermediate coding feature vectors after each has been reduced by its average value; $v^{fin}$ denotes the absolute values of the element-wise differences between the final coding feature vectors after each has been reduced by its average value; and $\vec{v}$ is the value obtained by the element-wise product of $v^{mid}$ and $v^{fin}$;

The two computed results $\vec{u}$ and $\vec{v}$ are concatenated as a comprehensive representation of the degree of matching of the sentence pair, with the formula:

$\vec{m} = [\vec{u}; \vec{v}]$

where $\vec{m}$ denotes the finally generated matching representation vector.
More preferably, the construction process of the prediction layer is as follows:

The matching representation vector $\vec{m}$ obtained while constructing the hierarchical feature interactive matching mechanism is fed into the prediction layer to judge whether the semantics of the sentence pair match; in the prediction layer, the matching representation vector $\vec{m}$ is processed by a fully connected layer and then by a sigmoid layer; to prevent over-fitting, dropout is set to 0.5 in the fully connected layer, and the sigmoid function computes the matching degree from the dropout-processed output of the fully connected layer, yielding a matching degree $y_{pred}$ in [0,1]; finally, $y_{pred}$ is compared with the established threshold (0.5) to judge whether the semantics of the sentence pair match, i.e., when $y_{pred} > 0.5$ the pair is judged to match semantically, and when $y_{pred} < 0.5$ it is judged not to match.
More preferably, the sentence matching knowledge base is constructed specifically as follows:

Use a crawler to obtain raw data: crawl question sets on public online question-answering platforms to obtain an original similar-sentence knowledge base, or use a sentence matching data set published on the Internet as the original similar-sentence knowledge base;

Preprocess the raw data: preprocess the similar sentence pairs in the original similar-sentence knowledge base, performing hyphenation or word segmentation on each sentence to obtain the sentence matching knowledge base;

The sentence matching model is obtained by training with a training data set, which is constructed as follows:

Construct training positive examples: combine a sentence with its corresponding standard sentence to construct a positive example, formalized as (sentence1, sentence2, 1), where sentence1 denotes sentence 1, sentence2 denotes sentence 2, and 1 indicates that the semantics of sentence 1 and sentence 2 match, i.e., a positive example;

Construct training negative examples: select a sentence s1, then randomly select from the sentence matching knowledge base a sentence s2 that does not match s1, and combine s1 and s2 to construct a negative example, formalized as (sentence1, sentence2, 0), where sentence1 denotes sentence s1, sentence2 denotes sentence s2, and 0 indicates that the semantics of s1 and s2 do not match, i.e., a negative example;

Construct the training data set: combine all positive and negative sample sentence pairs obtained from the above operations and shuffle their order to construct the final training data set; both positive and negative data contain three dimensions, namely sentence1, sentence2, and 0 or 1;

After the sentence matching model is constructed, it is trained and optimized on the training data set, specifically as follows:

Construct the loss function: as is clear from the construction of the prediction layer, $y_{pred}$ is the matching degree computed after processing by the hierarchical feature interactive matching mechanism, and $y_{true}$ is the true label of whether the semantics of the two sentences match, its value limited to 0 or 1; the model uses the mean squared error as the loss function, with the formula:

$L = \frac{1}{N}\sum_{i=1}^{N}\left(y_{true}^{(i)} - y_{pred}^{(i)}\right)^{2}$

Optimize the training of the model: RMSprop is used as the optimization algorithm of this model; except for its learning rate, which is set to 0.001, the remaining hyperparameters of RMSprop use the default settings in Keras; the sentence matching model is optimized on the training data set.
An intelligent semantic matching device based on deep hierarchical coding, the device comprising:

a sentence matching knowledge base construction unit, used to crawl question sets on public online question-answering platforms with a crawler program to obtain an original similar-sentence knowledge base, and then to preprocess the original similar-sentence knowledge base by hyphenation or word segmentation, thereby constructing the sentence matching knowledge base for model training;

a training data set generation unit, used to construct training positive example data and training negative example data from the sentences in the sentence matching knowledge base, and to build the final training data set from the positive and negative example data;

a sentence matching model construction unit, used to construct the character mapping conversion table, the input layer and the word vector mapping layer through the embedding layer, to construct the deep hierarchical coding representation layer, to build the sentence hierarchical feature interactive matching mechanism through the hierarchical feature interactive matching layer, and to construct the prediction layer; the sentence matching model construction unit includes:

a character mapping conversion table construction subunit, used to segment each sentence in the sentence matching knowledge base by character and store each character in turn in a list, thereby obtaining a character table, and then, starting with the number 1, to number each character in ascending order according to the order in which it is entered into the character table, thereby forming the required character mapping conversion table; by constructing the character mapping conversion table, each character in the training data set is mapped to a unique numeric identifier; thereafter, the present invention uses Word2Vec to train a word vector model and obtain the word vector matrix embedding_matrix for all characters;

an input layer construction subunit, used to formalize the input sentences sentence1 and sentence2 as (sentence1, sentence2);

a word vector mapping layer construction subunit, used to load the word vector matrix weights trained by the character mapping conversion table construction subunit to initialize the weight parameters of this layer; for the input sentences sentence1 and sentence2, the corresponding sentence vectors sentence1_emd and sentence2_emd are obtained; every sentence in the sentence matching knowledge base can transform its information into vector form through word vector mapping;

a deep hierarchical coding representation layer subunit, used to encode the input data and extract its semantics; the bidirectional long short-term memory network performs two encoding operations on the sentence, and the semantic features obtained by the two encodings are concatenated to obtain the intermediate coding feature vector of the sentence; the convolutional neural network performs a further encoding operation on the intermediate coding feature vector, and its output serves as the final coding feature vector of the sentence;

a hierarchical feature interactive matching mechanism construction subunit, used to interactively match the coding representation features obtained at different layers for each sentence of the sentence pair, generating the final matching representation vector;

a prediction layer subunit, used to process the matching representation vector to obtain a matching degree value, which is compared with the established threshold to judge whether the semantics of the sentence pair match;

a sentence matching model training unit, used to construct the loss function needed in the model training process and to complete the optimization training of the model.

Preferably, the sentence matching knowledge base construction unit includes:

a data crawling subunit, used to crawl question sets on public online question-answering platforms to build the original similar-sentence knowledge base;

a crawled-data processing subunit, used to hyphenate or segment the sentences in the original similar-sentence knowledge base, thereby constructing the sentence matching knowledge base for model training;

the training data set generation unit includes:

a training positive example data construction subunit, used to combine the semantically matched sentences in the sentence matching knowledge base and add the matching label 1 to them, constructing the training positive example data;

a training negative example data construction subunit, used to select a sentence s1 from the sentence matching knowledge base, then randomly select from the knowledge base a sentence s2 that does not match s1 semantically, combine s1 with s2 and add the matching label 0, constructing the training negative example data;

a training data set construction subunit, used to combine all the training positive example data and training negative example data and shuffle their order, thereby constructing the final training data set;

the sentence matching model training unit includes:

a loss function construction subunit, used to compute the error of whether the semantics of sentence 1 and sentence 2 match;

an optimization training subunit, used to train and adjust the parameters during model training, thereby reducing the error between the degree of semantic matching between sentence 1 and sentence 2 predicted during sentence matching model training and the true matching label.

A storage medium in which a plurality of instructions are stored, the instructions being loaded by a processor to execute the steps of the above intelligent semantic matching method based on deep hierarchical coding.

An electronic device, the electronic device comprising:

the above storage medium; and

a processor configured to execute the instructions in the storage medium.
The intelligent semantic matching method and device based on deep hierarchical coding of the present invention have the following advantages:

(1) The present invention realizes a deep hierarchical coding representation of sentences, which can capture more semantic context information and inter-sentence interaction information; at the same time, it realizes a new hierarchical feature interactive matching mechanism, which further strengthens the interaction between sentences and effectively improves the accuracy with which the model predicts the intrinsic semantic matching between sentences;

(2) the present invention can capture and use semantic features at different levels of a sentence as well as inter-sentence interaction information, and make more reasonable judgments on the matching of sentences;

(3) the present invention can use the deep hierarchical coding representation layer to generate the intermediate coding representation feature and the final coding representation feature of a sentence, which helps to capture deep semantic features in the sentence, thereby effectively improving the comprehensiveness and accuracy of the sentence's semantic representation;

(4) the hierarchical feature interactive matching mechanism proposed by the present invention can compute the matching degree of sentence semantic features at different levels separately, thereby improving the accuracy of sentence semantic matching;

(5) the present invention can extract the semantic information contained in a sentence from multiple angles: the intermediate coding representation feature and the final coding representation feature generated by the deep hierarchical coding representation layer are processed by the hierarchical feature interactive matching mechanism, i.e., a representation vector of one view is computed from the intermediate coding features of the sentence pair and a representation vector of another view from their final coding features, the two resulting vectors are multiplied element by element, and the complete matching representation vector of the sentence pair is finally obtained; this effectively improves the accuracy of sentence semantic matching and the accuracy with which the model predicts it;

(6) the present invention can express a sentence as a compact latent representation that contains rich semantic information.
Brief Description of the Drawings

The present invention is further described below with reference to the accompanying drawings.

Figure 1 is a flow chart of the intelligent semantic matching method based on deep hierarchical coding;
Figure 2 is a block diagram of the process of constructing the sentence matching knowledge base;
Figure 3 is a block diagram of the process of constructing the training data set;
Figure 4 is a block diagram of the process of constructing the sentence matching model;
Figure 5 is a block diagram of the process of training the sentence matching model;
Figure 6 is a structural block diagram of the intelligent semantic matching device based on the deep hierarchical coding representation layer;
Figure 7 is a schematic comparison of the influence of different word vector dimensions on model performance;
Figure 8 is a block diagram of the process of constructing the deep hierarchical coding representation layer;
Figure 9 is a schematic diagram of the framework of the intelligent semantic matching model based on deep hierarchical coding.

Detailed Description

The intelligent semantic matching method and device based on deep hierarchical coding of the present invention are described in detail below with reference to the drawings and specific embodiments.
Embodiment 1:

As shown in Figure 9, the intelligent semantic matching method based on deep hierarchical coding of the present invention constructs and trains a sentence matching model composed of an embedding layer, a deep hierarchical coding representation layer, a hierarchical feature interactive matching layer and a prediction layer, realizing a deep hierarchical coding representation of sentences, obtaining more semantic context information and inter-sentence interaction information, and at the same time implementing a new hierarchical feature interactive matching mechanism so as to achieve the goal of intelligent semantic matching of sentences; specifically as follows:

(1) the embedding layer embeds the input sentences and passes the result to the deep hierarchical coding representation layer;

(2) the deep hierarchical coding representation layer encodes the result obtained by the embedding operation, producing two different feature coding representations: the intermediate coding representation feature of the sentence and the final coding representation feature of the sentence;

(3) the hierarchical feature interactive matching layer performs matching processing on the intermediate coding representation feature and the final coding representation feature of the sentences to obtain a matching representation vector;

(4) in the prediction layer, a fully connected layer maps the matching representation vector once, and a sigmoid layer then maps the result to a value in a specified interval as the matching degree value; whether the semantics of the input sentence pair match is determined according to the relative size of the matching degree value and a set threshold.
Embodiment 2:

As shown in Figure 1, the specific steps of the intelligent semantic matching method based on deep hierarchical coding of the present invention are as follows:

S1. Construct the sentence matching knowledge base, as shown in Figure 2; the specific steps are as follows:

S101. Obtain raw data with a crawler: write a crawler program and crawl question sets on public online question-answering platforms to obtain an original similar-sentence knowledge base; or use a sentence matching data set published on the Internet as the original similar-sentence knowledge base.

Public question-answering platforms on the Internet hold a large amount of question-answer data and recommendations of similar questions, all open to the public. Therefore, a crawler program can be designed according to the characteristics of a question-answering platform to obtain sets of semantically similar questions and thereby build the original similar-sentence knowledge base.

Example: a similar sentence pair from a bank's question-answering platform is shown below:

Sentence 1: 还款期限可以延后一天吗? (Can the repayment deadline be postponed by one day?)
Sentence 2: 是否可以申请延期一天还款? (Is it possible to apply to postpone the repayment by one day?)
Alternatively, a sentence matching data set published on the Internet is used as the original knowledge base, for example the BQ data set [J. Chen, Q. Chen, X. Liu, H. Yang, D. Lu, B. Tang, The BQ Corpus: A Large-scale Domain-specific Chinese Corpus for Sentence Semantic Equivalence Identification, EMNLP 2018], which contains 120,000 question pairs from online banking service logs and is a Chinese data set specially built for sentence semantic matching tasks. The BQ data set is currently the largest manually annotated Chinese data set in the banking field; it is very useful for research on the semantic matching of Chinese questions, and it is publicly available.
S102. Preprocess the raw data: preprocess the similar sentence pairs in the original similar-sentence knowledge base, performing a hyphenation or word segmentation operation on each sentence to obtain the sentence matching knowledge base.

The similar sentence pairs obtained in step S101 are preprocessed to obtain the sentence matching knowledge base. Taking the hyphenation operation as an example, each Chinese character is used as the basic unit and each piece of data is split character by character: the characters are separated by spaces, and all content of each piece of data, including numbers, punctuation and special characters, is retained. In this step, to avoid losing semantic information, all stop words in the sentences are retained.

Example: taking sentence 1 shown in step S101, "还款期限可以延后一天吗?", hyphenation yields "还 款 期 限 可 以 延 后 一 天 吗 ?".

To process sentences by word segmentation instead, the jieba segmentation tool can be used, selecting its default (accurate) mode.

Example: again taking sentence 1 shown in step S101, "还款期限可以延后一天吗?", segmentation with the jieba tool yields "还款 期限 可以 延后 一天 吗 ?".

Since the results of hyphenation and of word segmentation are handled identically in all subsequent steps, the two are not described separately below.
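As a minimal illustration of the two preprocessing modes described above (the jieba call follows that tool's standard API; the example sentence is the one from step S101):

    # Minimal sketch of the two preprocessing modes.
    import jieba

    sentence = "还款期限可以延后一天吗?"

    # Hyphenation: every character (including digits, punctuation and
    # special characters) becomes a token, separated by spaces.
    char_tokens = " ".join(list(sentence))       # 还 款 期 限 可 以 延 后 一 天 吗 ?

    # Word segmentation with jieba in its default (accurate) mode.
    word_tokens = " ".join(jieba.cut(sentence))  # e.g. 还款 期限 可以 延后 一天 吗 ?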
S2. Construct the training data set of the sentence matching model: for every sentence there is a corresponding standard sentence in the sentence matching knowledge base, with which it can be combined to build a training positive example; other, non-matching sentences can be freely combined to build training negative examples; the user can set the number of negative examples according to the size of the sentence matching knowledge base, thereby constructing the training data set. As shown in Figure 3, the specific steps are as follows:

S201. Construct training positive examples: combine a sentence with its corresponding standard sentence to construct a positive example, which can be formalized as (sentence1, sentence2, 1), where sentence1 denotes sentence 1, sentence2 denotes sentence 2, and 1 indicates that the semantics of sentence 1 and sentence 2 match, i.e., a positive example.

Example: for sentence 1 and sentence 2 shown in step S101, after the hyphenation of step S102, the constructed positive example is:

("还 款 期 限 可 以 延 后 一 天 吗 ?", "是 否 可 以 申 请 延 期 一 天 还 款 ?", 1).

S202. Construct training negative examples: select a sentence s1, then randomly select from the sentence matching knowledge base a sentence s2 that does not match s1, and combine s1 and s2 to construct a negative example, formalized as (sentence1, sentence2, 0), where sentence1 denotes sentence s1, sentence2 denotes sentence s2, and 0 indicates that the semantics of s1 and s2 do not match, i.e., a negative example.

Example: following the example data shown in step S201, the original question is again used as s1, and a sentence s2 that does not match s1 semantically is randomly selected from the sentence matching knowledge base; the constructed negative example is:

("还 款 期 限 可 以 延 后 一 天 吗 ?", "为 什 么 银 行 客 户 端 登 陆 出 现 网 络 错 误 ?", 0).

S203. Construct the training data set: combine all the positive and negative sample sentence pairs obtained from steps S201 and S202 and shuffle their order, thereby constructing the final training data set. Both positive and negative data contain three dimensions, namely sentence1, sentence2, and 0 or 1.
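The construction in steps S201 to S203 can be sketched as follows; knowledge_base, a hypothetical list of (sentence, standard_sentence) pairs, and the single negative example per sentence are illustrative assumptions:

    import random

    def build_dataset(knowledge_base, seed=1234):
        """Build (sentence1, sentence2, label) triples and shuffle them."""
        random.seed(seed)
        standards = [std for _, std in knowledge_base]
        data = []
        for sent, std in knowledge_base:
            data.append((sent, std, 1))      # positive example (label 1)
            neg = random.choice(standards)
            while neg == std:                # s2 must not match s1
                neg = random.choice(standards)
            data.append((sent, neg, 0))      # negative example (label 0)
        random.shuffle(data)                 # disrupt the order
        return data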
S3. Construct the sentence matching model: the main operations are constructing the character mapping conversion table, the input layer, the word vector mapping layer, the deep hierarchical coding representation layer of the sentence, the hierarchical feature interactive matching mechanism, and the prediction layer. The three sub-steps of constructing the character mapping conversion table, the input layer and the word vector mapping layer correspond to the embedding layer in Figure 9; the sub-step of constructing the deep hierarchical coding representation layer of the sentence corresponds to the deep hierarchical coding representation layer in Figure 9; the sub-step of constructing the hierarchical feature interactive matching mechanism corresponds to the hierarchical feature interactive matching layer in Figure 9; and the sub-step of constructing the prediction layer corresponds to the prediction layer in Figure 9. As shown in Figure 4, the specific steps are as follows:

S301. Construct the character mapping conversion table: the character table is built from the sentence matching knowledge base obtained in step S102. After the character table is built, each character in it is mapped to a unique numeric identifier; the mapping rule is to start with the number 1 and then number each character in ascending order according to the order in which it is entered into the character table, thereby forming the required character mapping conversion table.

Example: from the hyphenated content of step S102, "还 款 期 限 可 以 延 后 一 天 吗 ?", the character table and character mapping conversion table are constructed as follows:

Character: 还 款 期 限 可 以 延 后 一
Mapping:   1  2  3  4  5  6  7  8  9

Character: 天 吗 ?
Mapping:   10 11 12
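A minimal sketch of this construction, assuming the sentences have already been hyphenated into space-separated characters as in step S102:

    def build_char_map(sentences):
        """Number characters from 1 in the order they enter the character table."""
        char_map = {}
        for sentence in sentences:
            for ch in sentence.split():
                if ch not in char_map:
                    char_map[ch] = len(char_map) + 1
        return char_map

    char_map = build_char_map(["还 款 期 限 可 以 延 后 一 天 吗 ?"])
    # {'还': 1, '款': 2, '期': 3, ..., '天': 10, '吗': 11, '?': 12}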
Thereafter, the present invention uses Word2Vec to train the word vector model and obtain the word vector matrix embedding_matrix for all characters.

Example: in Keras, the code implementing the above is as follows:

    import gensim
    import numpy
    import keras

    tokenizer = keras.preprocessing.text.Tokenizer(num_words=len(word_set))
    tokenizer.fit_on_texts(w2v_corpus)   # build word_index from the corpus
    w2v_model = gensim.models.Word2Vec(w2v_corpus, size=embedding_dim,
                                       window=5, min_count=1, sg=1,
                                       workers=4, seed=1234, iter=25)
    embedding_matrix = numpy.zeros([len(tokenizer.word_index) + 1,
                                    embedding_dim])
    for word, idx in tokenizer.word_index.items():
        embedding_matrix[idx, :] = w2v_model.wv[word]
Here, w2v_corpus is the training corpus, i.e., all data in the sentence matching knowledge base; embedding_dim is the word vector dimension, and different values of embedding_dim yield noticeably different results. As shown in Figure 7, with the other parameters fixed, different embedding_dim values give different performance; when embedding_dim is set to 400, Recall, F1-score and Accuracy all reach their relatively best values while Precision also remains at a relatively high level, so the model finally sets embedding_dim to 400; word_set is the vocabulary.
S302. Construct the input layer: the input layer has two inputs; the input sentences sentence1 and sentence2 are formalized as (sentence1, sentence2).

Each character of an input sentence is converted into the corresponding numeric identifier according to the character mapping conversion table built in step S301.

Example: using the sentence pair shown in step S201 as a sample, one piece of input data is composed as follows:

("还 款 期 限 可 以 延 后 一 天 吗 ?", "是 否 可 以 申 请 延 期 一 天 还 款 ?")

According to the mapping in the character table, the above input data are converted into numeric representations (assuming the mappings of the characters that appear in sentence 2 but not in sentence 1 are 是: 13, 否: 14, 申: 15, 请: 16), with the result:

("1,2,3,4,5,6,7,8,9,10,11,12", "13,14,5,6,15,16,7,3,9,10,1,2,12").
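A minimal sketch of this conversion; pad_sequences is the standard Keras utility for bringing the id sequences to a fixed length for batching, and maxlen is an illustrative assumption:

    from keras.preprocessing.sequence import pad_sequences

    def to_ids(tokenized_sentence, char_map):
        """Replace each space-separated character by its numeric identifier."""
        return [char_map[ch] for ch in tokenized_sentence.split()]

    s1 = to_ids("还 款 期 限 可 以 延 后 一 天 吗 ?", char_map)
    # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
    x1 = pad_sequences([s1], maxlen=40)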
S303. Construct the word vector mapping layer: the weight parameters of this layer are initialized by loading the word vector matrix weights trained in step S301; for the input sentences sentence1 and sentence2, the corresponding sentence vectors sentence1_emd and sentence2_emd are obtained. Every sentence in the sentence matching knowledge base can transform its information into vector form through word vector mapping.

Example: in Keras, this is implemented with an Embedding layer configured by the following parameters, as shown in the sketch below: embedding_matrix is the word vector matrix weights trained in step S301; embedding_matrix.shape[0] is the size of the vocabulary (dictionary) of the word vector matrix; embedding_dim is the dimension of the output word vectors; and input_length is the length of the input sequence.
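A minimal Keras sketch of the word vector mapping layer, assembled from the parameters described above; whether the loaded weights are frozen during training is not specified in the text, so trainable is left at its Keras default:

    import keras

    embedding_layer = keras.layers.Embedding(
        input_dim=embedding_matrix.shape[0],   # vocabulary (dictionary) size
        output_dim=embedding_dim,              # dimension of the output word vectors
        weights=[embedding_matrix],            # weights trained in step S301
        input_length=input_length)             # length of the input sequence

    sentence1_emd = embedding_layer(sentence1)
    sentence2_emd = embedding_layer(sentence2)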
The sentences sentence1 and sentence2 are encoded by the Embedding layer to obtain the corresponding sentence vectors sentence1_emd and sentence2_emd.

This network layer is a general-purpose layer of the sentence-pair semantic matching model; it realizes the vector representation of every character in the knowledge base. The layer processes sentence1 and sentence2 in exactly the same way, so the two are not explained separately.
S304. Construct the deep hierarchical coding representation layer of the sentence: a new sentence coding representation method is proposed, embodied mainly in the deep hierarchical coding representation layer. As shown in Figure 8, after this layer processes a sentence, two different semantic feature representations are obtained, namely the intermediate coding representation feature output by the middle layers of the sentence matching model and the final coding representation feature output by the output layer. Compared with existing methods that can only obtain the features of the final output layer, this model can effectively prevent a sentence from losing important information while passing through the coding representation layer, thereby capturing more semantic features and ultimately improving the accuracy of sentence semantic matching. This sentence representation model encodes and semantically extracts the sentences processed in step S303, obtaining the intermediate coding representation feature and the final coding representation feature of each sentence. In addition, practical experience shows that the best results are obtained when the coding dimension of this layer is set to 300. The specific steps are as follows:

S30401. Intermediate coding representation feature of the sentence: a bidirectional long short-term memory network (BiLSTM) encodes the sentence processed by the word vector mapping layer twice, and the semantic features obtained by the two encodings are concatenated; the formulas are as follows:

$h_i^{1} = \mathrm{BiLSTM}(p_i), \quad i \in [1, n]$

$h_i^{2} = \mathrm{BiLSTM}(h_i^{1}), \quad i \in [1, n]$

$h^{mid} = [h^{1}; h^{2}]$

where $i$ denotes the relative position of the corresponding word vector in the sentence (of length $n$); $p_i$ is the vector representation of each character in the sentence; $h^{1}$ is the sentence vector after the first BiLSTM encoding; $h^{2}$ is the sentence vector after the second BiLSTM encoding; and $h^{mid}$, the result of concatenating the two vectors, is the intermediate coding representation feature of the sentence.

S30402. Final coding representation feature of the sentence: a convolutional neural network (CNN) further encodes the output intermediate coding representation feature, and its output serves as the final coding representation feature of the sentence; the formula is as follows:

$h^{fin} = \mathrm{CNN}(h^{mid})$

where $h^{fin}$ is the final coding representation feature of the sentence (e.g. sentence1) after CNN encoding.
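A minimal Keras sketch of this deep hierarchical coding representation layer; the encoding dimension 300 follows the text, while the CNN kernel size and filter count (chosen here so that the intermediate and final features share one dimensionality, which the matching step in S305 requires) are illustrative assumptions:

    from keras import layers

    enc_dim = 300
    bilstm1 = layers.Bidirectional(layers.LSTM(enc_dim, return_sequences=True))
    bilstm2 = layers.Bidirectional(layers.LSTM(enc_dim, return_sequences=True))
    cnn = layers.Conv1D(filters=4 * enc_dim, kernel_size=3,
                        padding='same', activation='relu')

    h1 = bilstm1(sentence1_emd)            # first BiLSTM encoding
    h2 = bilstm2(h1)                       # second BiLSTM encoding
    h_mid = layers.concatenate([h1, h2])   # intermediate coding feature
    h_fin = cnn(h_mid)                     # final coding feature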
S305. Construct the hierarchical feature interactive matching mechanism: after the processing of step S304, the vector representations of the intermediate coding features of sentence1 and sentence2, denoted $h_1^{mid}$ and $h_2^{mid}$, and the vector representations of their final coding features, denoted $h_1^{fin}$ and $h_2^{fin}$, are obtained; based on these two types of vectors, matching is performed from different angles to generate the matching representation vector; the details are as follows:

Compute $\vec{u}$, with the following formulas:

$u^{mid} = \left| h_1^{mid} - h_2^{mid} \right|$

$u^{fin} = \left| h_1^{fin} - h_2^{fin} \right|$

$\vec{u} = u^{mid} \odot u^{fin}$

where $u^{mid}$ denotes the absolute values of the element-wise differences between the intermediate coding feature vectors $h_1^{mid}$ and $h_2^{mid}$; $u^{fin}$ denotes the absolute values of the element-wise differences between the final coding feature vectors $h_1^{fin}$ and $h_2^{fin}$; and $\vec{u}$ is the value obtained by the element-wise product of $u^{mid}$ and $u^{fin}$.

To capture multi-angle interaction information between the sentences, besides the values above, the same operations are performed in a second way: compute $\vec{v}$, with the following formulas:

$v^{mid} = \left| (h_1^{mid} - \bar{h}_1^{mid}) - (h_2^{mid} - \bar{h}_2^{mid}) \right|$

$v^{fin} = \left| (h_1^{fin} - \bar{h}_1^{fin}) - (h_2^{fin} - \bar{h}_2^{fin}) \right|$

$\vec{v} = v^{mid} \odot v^{fin}$

where $\bar{h}_1^{mid}$, $\bar{h}_2^{mid}$, $\bar{h}_1^{fin}$ and $\bar{h}_2^{fin}$ are the average-vector representations of the corresponding sentence vectors; $v^{mid}$ denotes the absolute values of the element-wise differences between the intermediate coding feature vectors after each has been reduced by its average value; $v^{fin}$ denotes the absolute values of the element-wise differences between the final coding feature vectors after each has been reduced by its average value; and $\vec{v}$ is the value obtained by the element-wise product of $v^{mid}$ and $v^{fin}$.

The two computed results $\vec{u}$ and $\vec{v}$ are concatenated as a comprehensive representation of the degree of matching of the sentence pair, with the formula:

$\vec{m} = [\vec{u}; \vec{v}]$

where $\vec{m}$ denotes the finally generated matching representation vector. The hierarchical feature interactive matching mechanism adopted by the present invention can comprehensively capture multi-angle interactive matching features between sentence pairs.
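A minimal sketch of this matching computation with Keras backend operations, assuming the sentence-level feature vectors m1, m2 (intermediate) and f1, f2 (final) have already been pooled to one vector per sentence and share the same dimensionality:

    import keras.backend as K

    def center(t):
        # subtract from each element the average value of its own vector
        return t - K.mean(t, axis=-1, keepdims=True)

    u = K.abs(m1 - m2) * K.abs(f1 - f2)            # first matching view
    v = (K.abs(center(m1) - center(m2))
         * K.abs(center(f1) - center(f2)))         # second matching view
    match_vector = K.concatenate([u, v], axis=-1)  # full matching representation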
S306. Construct the prediction layer: the matching representation vector $\vec{m}$ obtained in step S305 is fed into the prediction layer to judge whether the semantics of the sentence pair match. In the prediction layer, the matching representation vector $\vec{m}$ is processed by a fully connected layer and then by a sigmoid layer; to prevent over-fitting, dropout is set to 0.5 in the fully connected layer, and the sigmoid layer computes the matching degree from the dropout-processed output of the fully connected layer, yielding a matching degree $y_{pred}$ in [0,1]; finally, $y_{pred}$ is compared with the established threshold (0.5) to judge whether the semantics of the sentence pair match: when $y_{pred} > 0.5$ the pair is judged to match semantically, and when $y_{pred} < 0.5$ it is judged not to match.
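A minimal Keras sketch of this prediction layer; the width of the fully connected layer is an illustrative assumption:

    from keras import layers

    hidden = layers.Dense(300, activation='relu')(match_vector)
    hidden = layers.Dropout(0.5)(hidden)        # dropout 0.5 against over-fitting
    y_pred = layers.Dense(1, activation='sigmoid')(hidden)  # matching degree in [0,1]
    # At inference time, the pair is judged to match when y_pred > 0.5.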
S4. Train the sentence matching model: the sentence matching model built in step S3 is trained on the training data set obtained in step S2, as shown in Figure 5; the details are as follows:

S401. Construct the loss function: as is clear from the construction of the prediction layer, $y_{pred}$ is the matching degree computed after processing by the hierarchical feature interactive matching mechanism, and $y_{true}$ is the true label of whether the semantics of the two sentences match, its value limited to 0 or 1; the model uses the mean squared error as the loss function, with the formula:

$L = \frac{1}{N}\sum_{i=1}^{N}\left(y_{true}^{(i)} - y_{pred}^{(i)}\right)^{2}$

S402. Optimize the training of the model: RMSprop is used as the optimization algorithm; except for its learning rate, which is set to 0.001, the remaining hyperparameters of RMSprop use the default settings in Keras; the sentence matching model is optimized on the training data set.

Example: the optimization function and its settings described above are expressed in Keras as:

    optim = keras.optimizers.RMSprop(lr=0.001)
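A minimal sketch tying the training settings together; input1/input2 and the training arrays are assumed to come from the earlier steps, and the batch size and epoch count are illustrative assumptions:

    model = keras.models.Model(inputs=[input1, input2], outputs=y_pred)
    model.compile(optimizer=keras.optimizers.RMSprop(lr=0.001),
                  loss='mse',              # mean squared error, as in S401
                  metrics=['accuracy'])
    model.fit([x1_train, x2_train], y_train,
              batch_size=128, epochs=25, validation_split=0.1)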
On the BQ data set the present invention achieves results superior to current advanced models; the comparison of experimental results is given in Table 1.

[Table 1: comparison of experimental results on the BQ data set; the table is rendered as an image in the original document.]

As mentioned in step S102, the present invention can process sentences in two ways, namely hyphenation or word segmentation. Accordingly, the HEM_char model in the table is the model obtained after hyphenating the sentences, and the HEM_word model is the model obtained after segmenting the sentences into words.

The model of the present invention is compared with existing models, and the experimental results show that the method of the present invention achieves a large improvement. The first three rows are the experimental results of prior-art models [the first three rows of data come from: J. Chen, Q. Chen, X. Liu, H. Yang, D. Lu, B. Tang, The BQ Corpus: A Large-scale Domain-specific Chinese Corpus for Sentence Semantic Equivalence Identification, EMNLP 2018], and the last two rows are the experimental results of the present invention, which show that the present invention improves considerably over existing models.
Embodiment 3:

As shown in Figure 6, the intelligent semantic matching device based on deep hierarchical coding according to Embodiment 2 comprises:

a sentence matching knowledge base construction unit, used to crawl question sets on public online question-answering platforms with a crawler program to obtain an original similar-sentence knowledge base, and then to preprocess it by hyphenation or word segmentation, thereby constructing the sentence matching knowledge base for model training; the sentence matching knowledge base construction unit includes:

a data crawling subunit, used to crawl question sets on public online question-answering platforms to build the original similar-sentence knowledge base;

a crawled-data processing subunit, used to hyphenate or segment the sentences in the original similar-sentence knowledge base, thereby constructing the sentence matching knowledge base for model training;

a training data set generation unit, used to construct training positive example data and training negative example data from the sentences in the sentence matching knowledge base and to build the final training data set from them; the training data set generation unit includes:

a training positive example data construction subunit, used to combine the semantically matched sentences in the sentence matching knowledge base and add the matching label 1 to them, constructing the training positive example data;

a training negative example data construction subunit, used to select a sentence s1 from the sentence matching knowledge base, then randomly select from the knowledge base a sentence s2 that does not match s1 semantically, combine s1 with s2 and add the matching label 0, constructing the training negative example data;

a training data set construction subunit, used to combine all the training positive example data and training negative example data and shuffle their order, thereby constructing the final training data set;

a sentence matching model construction unit, used to construct the character mapping conversion table, the input layer and the word vector mapping layer through the embedding layer, to construct the deep hierarchical coding representation layer, to build the sentence hierarchical feature interactive matching mechanism through the hierarchical feature interactive matching layer, and to construct the prediction layer; the sentence matching model construction unit includes:

a character mapping conversion table construction subunit, used to segment each sentence in the sentence matching knowledge base by character and store each character in turn in a list, thereby obtaining a character table, and then, starting with the number 1, to number each character in ascending order according to the order in which it is entered into the character table, thereby forming the required character mapping conversion table; by constructing the character mapping conversion table, each character in the training data set is mapped to a unique numeric identifier; thereafter, the present invention uses Word2Vec to train a word vector model and obtain the word vector matrix embedding_matrix for all characters;

an input layer construction subunit, used to formalize the input sentences sentence1 and sentence2 as (sentence1, sentence2);

a word vector mapping layer construction subunit, used to load the word vector matrix weights trained by the character mapping conversion table construction subunit to initialize the weight parameters of this layer; for the input sentences sentence1 and sentence2, the corresponding sentence vectors sentence1_emd and sentence2_emd are obtained; every sentence in the sentence matching knowledge base can transform its information into vector form through word vector mapping;

a deep hierarchical coding representation layer subunit, used to encode the input data and extract its semantics; the bidirectional long short-term memory network performs two encoding operations on the sentence, and the semantic features obtained by the two encodings are concatenated to obtain the intermediate coding feature vector of the sentence; the convolutional neural network performs a further encoding operation on the intermediate coding feature vector, and its output serves as the final coding feature vector of the sentence;

a hierarchical feature interactive matching mechanism construction subunit, used to interactively match the coding representation features obtained at different layers for each sentence of the sentence pair, generating the final matching representation vector;

a prediction layer subunit, used to process the matching representation vector to obtain a matching degree value, which is compared with the established threshold to judge whether the semantics of the sentence pair match;

a sentence matching model training unit, used to construct the loss function needed in the model training process and to complete the optimization training of the model; the sentence matching model training unit includes:

a loss function construction subunit, used to compute the error of whether the semantics of sentence 1 and sentence 2 match;

an optimization training subunit, used to train and adjust the parameters during model training, thereby reducing the error between the degree of semantic matching between sentence 1 and sentence 2 predicted during sentence matching model training and the true matching label.

Embodiment 4:

A storage medium based on Embodiment 2, in which a plurality of instructions are stored, the instructions being loaded by a processor to execute the steps of the intelligent semantic matching method based on deep hierarchical coding of Embodiment 2.

Embodiment 5:

An electronic device based on Embodiment 4, the electronic device comprising: the storage medium of Embodiment 4; and

a processor configured to execute the instructions in the storage medium of Embodiment 4.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. An intelligent semantic matching method based on deep hierarchical coding, characterized in that the method constructs and trains a sentence matching model composed of an embedding layer, a deep hierarchical coding representation layer, a hierarchical feature interactive matching layer and a prediction layer, thereby realizing a deep hierarchical coding representation of sentences, obtaining more semantic context information and inter-sentence interaction information, and at the same time implementing a hierarchical feature interactive matching mechanism so as to achieve the goal of intelligent semantic matching of sentences; specifically as follows:
    the embedding layer embeds the input sentences and passes the result to the deep hierarchical coding representation layer;
    the deep hierarchical coding representation layer encodes the result obtained by the embedding operation, producing two different feature coding representations: the intermediate coding representation feature of the sentence and the final coding representation feature of the sentence;
    the hierarchical feature interactive matching layer performs matching processing on the intermediate coding representation feature and the final coding representation feature of the sentences to obtain the matching representation vector of the sentence pair;
    in the prediction layer, a fully connected layer maps the matching representation vector once, and a sigmoid layer then maps the result to a value in a specified interval as the matching degree value; whether the semantics of the input sentence pair match is determined according to the relative size of the matching degree value and a set threshold.
  2. The intelligent semantic matching method based on deep hierarchical coding according to claim 1, characterized in that the embedding layer is used to construct a character mapping conversion table, an input layer and a word vector mapping layer;
    wherein, constructing the character mapping conversion table: the mapping rule is to start with the number 1 and then number each character in ascending order according to the order in which it is entered into the character table, thereby forming the required character mapping conversion table; the character table is built from the sentence matching knowledge base; thereafter, Word2Vec is used to train a word vector model, obtaining the word vector matrix embedding_matrix for all characters;
    constructing the input layer: the input layer has two inputs; the input sentences sentence1 and sentence2 are formalized as (sentence1, sentence2); each word in an input sentence is converted into the corresponding numeric representation according to the character mapping table;
    constructing the word vector mapping layer: the word vector matrix weights trained in the character mapping conversion table construction step are loaded to initialize the weight parameters of this layer; for the input sentences sentence1 and sentence2, the corresponding sentence vectors sentence1_emd and sentence2_emd are obtained; every sentence in the sentence matching knowledge base transforms its information into vector form through word vector mapping.
  3. The intelligent semantic matching method based on deep hierarchical encoding according to claim 1 or 2, characterized in that the deep hierarchical encoding representation layer is constructed as follows:
    the intermediate encoding representation feature of a sentence: one bidirectional long short-term memory network BiLSTM is used to encode the sentence, as processed by the character-vector mapping layer, twice, and the semantic features obtained by the two encodings are then concatenated, with the formulas:

    $h_i^{(1)} = \mathrm{BiLSTM}(p_i)$

    $h_i^{(2)} = \mathrm{BiLSTM}(h_i^{(1)})$

    $h_i^{mid} = [h_i^{(1)}; h_i^{(2)}]$

    where i denotes the relative position of the corresponding character vector in the sentence; $p_i$ is the vector representation of each character in the sentence; $h_i^{(1)}$ is the sentence vector after the first BiLSTM encoding; $h_i^{(2)}$ denotes the sentence vector after the second BiLSTM encoding; and $h_i^{mid}$, the result of concatenating $h_i^{(1)}$ and $h_i^{(2)}$, is the intermediate encoding representation feature of the sentence;
    the final encoding representation feature of a sentence: one convolutional neural network CNN further encodes the output intermediate encoding representation feature, and its output serves as the final encoding representation feature of the sentence, with the formula:

    $h^{fin} = \mathrm{CNN}(h^{mid})$

    where $h^{fin}$ is the final encoding representation feature of the sentence after CNN encoding.
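By way of illustration only, a minimal Keras sketch of this deep hierarchical encoding follows; the sentence length, vocabulary size and hidden widths are assumptions, and 512 convolution filters are chosen here only so that the final representation has the same dimension as the intermediate one:

    from tensorflow.keras.layers import (Input, Embedding, Bidirectional, LSTM,
                                         Concatenate, Conv1D)

    MAX_LEN, VOCAB, DIM = 40, 5000, 100                        # assumed sizes
    x = Input(shape=(MAX_LEN,))
    p = Embedding(VOCAB, DIM)(x)                               # p_i: character vectors
    h1 = Bidirectional(LSTM(128, return_sequences=True))(p)    # first BiLSTM encoding
    h2 = Bidirectional(LSTM(128, return_sequences=True))(h1)   # second BiLSTM encoding
    h_mid = Concatenate()([h1, h2])                            # intermediate representation
    h_fin = Conv1D(512, 3, padding='same', activation='relu')(h_mid)  # final representation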
  4. The intelligent semantic matching method based on deep hierarchical encoding according to claim 3, characterized in that the layered feature interactive matching layer is used to construct the layered feature interactive matching mechanism; constructing the layered feature interactive matching mechanism means taking the vector representations of the intermediate encoding representation features, $h^{mid}_{s1}$ and $h^{mid}_{s2}$, and of the final encoding representation features, $h^{fin}_{s1}$ and $h^{fin}_{s2}$, obtained for sentence1 and sentence2 by the deep hierarchical encoding representation layer, and matching these two kinds of vectors from different angles, thereby generating the matching representation vector; specifically:
    the first matching vector $m_1$ is computed with the formulas:

    $d^{mid} = \left| h^{mid}_{s1} - h^{mid}_{s2} \right|$

    $d^{fin} = \left| h^{fin}_{s1} - h^{fin}_{s2} \right|$

    $m_1 = d^{mid} \odot d^{fin}$

    where $d^{mid}$ denotes the absolute value of the element-wise difference between the intermediate encoding representation feature vectors $h^{mid}_{s1}$ and $h^{mid}_{s2}$; $d^{fin}$ denotes the absolute value of the element-wise difference between the final encoding representation feature vectors $h^{fin}_{s1}$ and $h^{fin}_{s2}$; and $m_1$ denotes the value obtained as the element-wise product of $d^{mid}$ and $d^{fin}$;
    the second matching vector $m_2$ is computed with the formulas:

    $g^{mid} = \left| (h^{mid}_{s1} - \bar h^{mid}_{s1}) - (h^{mid}_{s2} - \bar h^{mid}_{s2}) \right|$

    $g^{fin} = \left| (h^{fin}_{s1} - \bar h^{fin}_{s1}) - (h^{fin}_{s2} - \bar h^{fin}_{s2}) \right|$

    $m_2 = g^{mid} \odot g^{fin}$

    where $\bar h^{mid}_{s1}$, $\bar h^{mid}_{s2}$, $\bar h^{fin}_{s1}$ and $\bar h^{fin}_{s2}$ are the average vector representations of the corresponding sentence vectors; $g^{mid}$ denotes the absolute value obtained by first subtracting from each intermediate encoding representation feature vector its average and then taking the element-wise difference; $g^{fin}$ denotes the absolute value obtained by first subtracting from each final encoding representation feature vector its average and then taking the element-wise difference; and $m_2$ denotes the value obtained as the element-wise product of $g^{mid}$ and $g^{fin}$;
    the two computed results $m_1$ and $m_2$ are concatenated as a comprehensive representation of the matching degree of the sentence pair, with the formula:

    $m = [m_1; m_2]$

    where $m$ denotes the finally generated matching representation vector.
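By way of illustration only, a minimal TensorFlow sketch of this layered interactive matching follows; it assumes the four representations share one shape and that the average is taken along the last axis, neither of which is fixed by the claim wording:

    import tensorflow as tf

    def layered_interactive_match(h_mid_1, h_mid_2, h_fin_1, h_fin_2):
        # m1: element-wise absolute differences at both layers, combined by product
        m1 = tf.abs(h_mid_1 - h_mid_2) * tf.abs(h_fin_1 - h_fin_2)
        # m2: the same computation after centring each representation on its average
        def centre(h):
            return h - tf.reduce_mean(h, axis=-1, keepdims=True)
        m2 = tf.abs(centre(h_mid_1) - centre(h_mid_2)) * \
             tf.abs(centre(h_fin_1) - centre(h_fin_2))
        return tf.concat([m1, m2], axis=-1)     # matching representation vector m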
  5. The intelligent semantic matching method based on deep hierarchical encoding according to claim 4, characterized in that the prediction layer is constructed as follows:
    the matching representation vector $m$ obtained while constructing the layered feature interactive matching mechanism is fed into the prediction layer to judge whether the semantics of the sentence pair match; in the prediction layer, the matching representation vector $m$ is processed by a fully connected layer and then by a sigmoid layer; to prevent over-fitting, dropout is set to 0.5 in the fully connected layer; the sigmoid layer computes the matching degree from the dropout-processed output of the fully connected layer, obtaining a matching degree representation $y_{pred}$ lying in [0, 1], which is finally compared with the set threshold of 0.5 to judge whether the semantics of the sentence pair match: when $y_{pred} > 0.5$, the semantics are judged to match; when $y_{pred} < 0.5$, the semantics are judged not to match.
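By way of illustration only, a minimal Keras sketch of the prediction layer follows; the width and activation of the fully connected layer are assumptions of this sketch, while the dropout of 0.5, the sigmoid output and the 0.5 decision threshold follow the claim:

    from tensorflow.keras import Input, Model
    from tensorflow.keras.layers import Dense, Dropout

    m = Input(shape=(1024,))                    # matching representation vector (size assumed)
    x = Dense(128, activation='relu')(m)        # one fully connected mapping
    x = Dropout(0.5)(x)                         # dropout 0.5 against over-fitting
    y_pred = Dense(1, activation='sigmoid')(x)  # matching degree y_pred in [0, 1]
    predictor = Model(m, y_pred)                # semantics judged matched when y_pred > 0.5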
  6. The intelligent semantic matching method based on deep hierarchical encoding according to claim 5, characterized in that the sentence-matching knowledge base is constructed as follows:
    obtaining raw data with a crawler: question sets are crawled from online public question-answering platforms to obtain the raw similar-sentence knowledge base; alternatively, a publicly available sentence-matching data set is used as the raw similar-sentence knowledge base;
    preprocessing the raw data: the similar sentence pairs in the raw similar-sentence knowledge base are preprocessed by applying a character segmentation or word segmentation operation to each sentence, yielding the sentence-matching knowledge base;
    the sentence-matching model is obtained by training with a training data set, which is constructed as follows:
    constructing positive training examples: a sentence is combined with the standard sentence it corresponds to so as to build a positive example, formalized as (sentence1, sentence2, 1), where sentence1 denotes sentence 1, sentence2 denotes sentence 2, and 1 indicates that the semantics of sentence 1 and sentence 2 match, i.e. a positive example;
    constructing negative training examples: a sentence s1 is selected, and a sentence s2 that does not match s1 is then randomly selected from the sentence-matching knowledge base; s1 and s2 are combined to build a negative example, formalized as (sentence1, sentence2, 0), where sentence1 denotes sentence s1, sentence2 denotes sentence s2, and 0 indicates that the semantics of s1 and s2 do not match, i.e. a negative example;
    constructing the training data set: all the positive and negative sample sentence pairs obtained by the above two operations are combined and their order is shuffled, building the final training data set; every example, positive or negative, contains three dimensions, i.e. sentence1, sentence2, and 0 or 1;
    after the sentence-matching model is constructed, it is trained and optimized on the training data set, specifically:
    constructing the loss function: as seen from the construction of the prediction layer, $y_{pred}$ is the matching degree computed after the processing of the layered feature interactive matching mechanism, and $y_{true}$ is the true label of whether the semantics of the two sentences match, taking only the value 0 or 1; the mean squared error over the N training samples is adopted as the loss function, with the formula:

    $L = \frac{1}{N}\sum_{n=1}^{N}\left(y^{(n)}_{pred} - y^{(n)}_{true}\right)^{2}$

    optimizing the training of the model: RMSprop is used as the optimization algorithm; apart from its learning rate being set to 0.001, the remaining hyper-parameters of RMSprop keep the default settings in Keras; the sentence-matching model is optimized and trained on the training data set.
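By way of illustration only, a minimal Keras sketch of this training set-up follows; the stand-in model, its input size and the random data are assumptions of the sketch, while the mean squared error loss and RMSprop with learning rate 0.001 and otherwise default hyper-parameters follow the claim:

    import numpy as np
    from tensorflow.keras import Input, Model
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import RMSprop

    inp = Input(shape=(1024,))                  # stands in for the assembled matching model
    model = Model(inp, Dense(1, activation='sigmoid')(inp))

    model.compile(optimizer=RMSprop(learning_rate=0.001),  # lr 0.001, other defaults kept
                  loss='mean_squared_error')               # MSE between y_pred and y_true
    model.fit(np.random.rand(8, 1024), np.random.randint(0, 2, size=8), epochs=1)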
  7. An intelligent semantic matching device based on deep hierarchical encoding, characterized in that the device comprises:
    a sentence-matching knowledge base construction unit, used to crawl question sets from online public question-answering platforms with a crawler program to obtain a raw similar-sentence knowledge base, and then to preprocess the raw similar-sentence knowledge base by character segmentation or word segmentation, thereby building the sentence-matching knowledge base used for model training;
    a training data set generation unit, used to construct positive and negative training examples from the sentences in the sentence-matching knowledge base, and to build the final training data set from the positive and negative examples;
    a sentence-matching model construction unit, used to build the character mapping table, the input layer and the character-vector mapping layer via the embedding layer, to build the deep hierarchical encoding representation layer, to build the sentence layered feature interactive matching mechanism via the layered feature interactive matching layer, and to build the prediction layer; the sentence-matching model construction unit comprises:
    a character mapping table construction subunit, used to split every sentence in the sentence-matching knowledge base into characters and store each character in a list in turn, yielding a character table; the characters are then numbered incrementally starting from the number 1, in the order in which they were entered into the character table, forming the required character mapping table; by constructing the character mapping table, every character in the training data set is mapped to a unique numeric identifier; afterwards, Word2Vec is used to train a character-vector model, obtaining the character-vector matrix embedding_matrix of the characters;
    an input layer construction subunit, used to formalize the input sentences sentence1 and sentence2 as (sentence1, sentence2);
    a character-vector mapping layer subunit, used to load the character-vector matrix weights trained by the character mapping table construction subunit so as to initialize the weight parameters of the current layer; for the input sentences sentence1 and sentence2, the corresponding sentence vectors sentence1_emd and sentence2_emd are obtained; every sentence in the sentence-matching knowledge base can thus have its information converted into vector form by character-vector mapping;
    a deep hierarchical encoding representation layer subunit, used to encode the input data and extract its semantics, wherein a bidirectional long short-term memory network encodes the sentence twice and the semantic features obtained by the two encodings are concatenated, yielding the intermediate encoding representation feature vector of the sentence; a convolutional neural network then performs one further encoding of the intermediate encoding representation feature vector, and its output serves as the final encoding representation feature vector of the sentence;
    a layered feature interactive matching mechanism construction subunit, used to match the encoding representation features that each sentence of the sentence pair obtains at the different layers, generating the final matching representation vector;
    a prediction layer subunit, used to process the matching representation vector into a matching degree value, which is compared with the set threshold so as to judge whether the semantics of the sentence pair match;
    a sentence-matching model training unit, used to construct the loss function needed during model training and to complete the optimization training of the model.
  8. The intelligent semantic matching device based on deep hierarchical encoding according to claim 7, characterized in that the sentence-matching knowledge base construction unit comprises:
    a data crawling subunit, used to crawl question sets from online public question-answering platforms and build the raw similar-sentence knowledge base;
    a crawled-data processing subunit, used to apply character segmentation or word segmentation to the sentences in the raw similar-sentence knowledge base, thereby building the sentence-matching knowledge base used for model training;
    the training data set generation unit comprises:
    a positive-example construction subunit, used to combine semantically matched sentences from the sentence-matching knowledge base and attach the matching label 1 to them, building the positive training examples;
    a negative-example construction subunit, used to select a sentence s1 from the sentence-matching knowledge base, randomly select from it a sentence s2 that does not semantically match s1, combine s1 with s2, and attach the matching label 0 to them, building the negative training examples;
    a training data set construction subunit, used to combine all the positive and negative training examples and shuffle their order, thereby building the final training data set;
    the sentence-matching model training unit comprises:
    a loss function construction subunit, used to compute the error of the semantic matching degree between sentence 1 and sentence 2;
    an optimization training subunit, used to train and adjust the parameters during model training, reducing the error between the semantic matching degree of sentence 1 and sentence 2 predicted during the training of the sentence-matching model and the true matching label.
  9. A storage medium having a plurality of instructions stored therein, characterized in that the instructions are loaded by a processor to execute the steps of the intelligent semantic matching method based on deep hierarchical encoding according to any one of claims 1-6.
  10. An electronic device, characterized in that the electronic device comprises:
    the storage medium according to claim 9; and
    a processor, configured to execute the instructions in the storage medium.
PCT/CN2020/104724 2020-02-20 2020-07-27 Intelligent semantic matching method and device based on deep hierarchical encoding WO2021164200A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010103505.6 2020-02-20
CN202010103505.6A CN111325028B (zh) 2020-02-20 2020-02-20 Intelligent semantic matching method and device based on deep hierarchical encoding

Publications (1)

Publication Number Publication Date
WO2021164200A1 (zh)

Family

ID=71172754

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/104724 WO2021164200A1 (zh) 2020-02-20 2020-07-27 Intelligent semantic matching method and device based on deep hierarchical encoding

Country Status (2)

Country Link
CN (1) CN111325028B (zh)
WO (1) WO2021164200A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325028B (zh) 2020-02-20 2021-06-18 齐鲁工业大学 Intelligent semantic matching method and device based on deep hierarchical encoding
CN112000770B (zh) * 2020-08-24 2023-10-24 齐鲁工业大学 Semantic-feature-map-based sentence pair semantic matching method for intelligent question answering
CN112001166B (zh) * 2020-08-24 2023-10-17 齐鲁工业大学 Intelligent question-answering sentence pair semantic matching method and device for government affairs consulting services
CN112000772B (zh) * 2020-08-24 2022-09-06 齐鲁工业大学 Semantic-feature-cube-based sentence pair semantic matching method for intelligent question answering
CN112000771B (zh) * 2020-08-24 2023-10-24 齐鲁工业大学 Intelligent sentence pair semantic matching method and device for judicial disclosure services
CN113515930B (zh) * 2021-05-14 2023-05-30 北京邮电大学 Heterogeneous device ontology matching method fusing semantic information

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10817650B2 (en) * 2017-05-19 2020-10-27 Salesforce.Com, Inc. Natural language processing using context specific word vectors
CN110321419B (zh) * 2019-06-28 2021-06-15 神思电子技术股份有限公司 Question-answer matching method fusing deep representations and an interaction model

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145290A (zh) * 2018-07-25 2019-01-04 东北大学 Semantic similarity calculation method based on character vectors and a self-attention mechanism
CN109214001A (zh) * 2018-08-23 2019-01-15 桂林电子科技大学 Chinese semantic matching system and method
CN110032635A (zh) * 2019-04-22 2019-07-19 齐鲁工业大学 Question pair matching method and device based on a deep feature fusion neural network
CN110083692A (zh) * 2019-04-22 2019-08-02 齐鲁工业大学 Text interactive matching method and device for financial knowledge question answering
CN110348014A (zh) * 2019-07-10 2019-10-18 电子科技大学 Deep-learning-based semantic similarity calculation method
CN110390107A (zh) * 2019-07-26 2019-10-29 腾讯科技(深圳)有限公司 Context relation detection method, device and computer equipment based on artificial intelligence
CN111325028A (zh) * 2020-02-20 2020-06-23 齐鲁工业大学 Intelligent semantic matching method and device based on deep hierarchical encoding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Atoum, I., Otoom, A., Kulathuramaiyer, N.: "A Comprehensive Comparative Study of Word and Sentence Similarity Measures", International Journal of Computer Applications, vol. 135, no. 1, 17 February 2016, pages 2-9, XP055838733, DOI: 10.5120/ijca2016908259 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113868322A (zh) * 2021-12-03 2021-12-31 杭州未名信科科技有限公司 Semantic structure parsing method, apparatus and device, virtualization system, and medium
CN114238563A (zh) * 2021-12-08 2022-03-25 齐鲁工业大学 Multi-angle-interaction-based intelligent semantic matching method and device for Chinese sentence pairs
CN114911909A (zh) * 2022-06-08 2022-08-16 北京青萌数海科技有限公司 Address matching method and device combining a deep convolutional network and an attention mechanism
CN114911909B (zh) * 2022-06-08 2023-01-10 北京青萌数海科技有限公司 Address matching method and device combining a deep convolutional network and an attention mechanism
CN117216771A (zh) * 2023-11-09 2023-12-12 中机寰宇认证检验股份有限公司 Intelligent binary program vulnerability mining method and system
CN117216771B (zh) * 2023-11-09 2024-01-30 中机寰宇认证检验股份有限公司 Intelligent binary program vulnerability mining method and system
CN117520786A (zh) * 2024-01-03 2024-02-06 卓世科技(海南)有限公司 Large language model construction method based on NLP and recurrent neural networks
CN117520786B (zh) * 2024-01-03 2024-04-02 卓世科技(海南)有限公司 Large language model construction method based on NLP and recurrent neural networks

Also Published As

Publication number Publication date
CN111325028A (zh) 2020-06-23
CN111325028B (zh) 2021-06-18

Similar Documents

Publication Publication Date Title
WO2021164200A1 (zh) Intelligent semantic matching method and device based on deep hierarchical encoding
WO2021164199A1 (zh) Intelligent semantic matching method and device for Chinese sentences based on a multi-granularity fusion model
CN111310439B (zh) Intelligent semantic matching method and device based on a deep feature dimension-changing mechanism
Qiu et al. Convolutional neural tensor network architecture for community-based question answering
WO2022198868A1 (zh) Open entity relation extraction method, apparatus, device and storage medium
CN111259127B (zh) Long-text answer selection method based on transfer-learning sentence vectors
CN110032635A (zh) Question pair matching method and device based on a deep feature fusion neural network
CN111522910B (zh) Intelligent semantic retrieval method based on a cultural-relic knowledge graph
CN111159485B (zh) Tail entity linking method, apparatus, server and storage medium
US20210018332A1 (en) Poi name matching method, apparatus, device and storage medium
WO2021121198A1 (zh) Semantic-similarity-based entity relation extraction method, apparatus and device, and medium
CN111858932A (zh) Transformer-based multi-feature Chinese and English sentiment classification method and system
WO2021204014A1 (zh) Model training method and related apparatus
CN113127632B (zh) Heterogeneous-graph-based text summarization method and device, storage medium and terminal
CN111651558A (zh) Hypersphere collaborative metric recommendation device and method based on a pre-trained semantic model
CN111985228A (zh) Text keyword extraction method and apparatus, computer device and storage medium
CN112232053A (zh) Text similarity calculation system and method based on multi-keyword-pair matching, and storage medium
CN111339249A (zh) Deep intelligent text matching method and device combining multi-angle features
CN115238053A (zh) BERT-model-based intelligent question answering system and method for COVID-19 knowledge
CN113468854A (zh) Multi-document automatic abstract generation method
CN113672693A (zh) Tag recommendation method for online question-answering platforms based on a knowledge graph and tag association
CN113761890A (zh) Multi-level semantic information retrieval method based on BERT context awareness
CN114791958A (zh) Zero-shot cross-modal retrieval method based on a variational autoencoder
CN114298055B (zh) Retrieval method and device based on multi-level semantic matching, computer device and storage medium
CN112149410A (zh) Semantic recognition method and apparatus, computer device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20919439

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20919439

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 15/03/2023)
