WO2020114109A1 - Interpretation method and apparatus for embedding result
- Publication number: WO2020114109A1
- Application number: PCT/CN2019/112106
- Authority: WIPO (PCT)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
Definitions
- This specification relates to the field of machine learning technology, and in particular to an interpretation method and device for embedded results.
- Embedding denotes a mapping in mathematics: one space can be mapped to another while basic attributes are retained.
- An embedding algorithm can transform complex, hard-to-express features into easily computed forms, such as vectors and matrices, that are convenient for a prediction model to process.
- However, an embedding result itself is not interpretable and cannot meet the needs of business scenarios that require interpretability.
- In view of this, this specification provides an interpretation method and apparatus for embedding results.
- An interpretation method for embedding results includes: embedding the embedded objects with an embedding algorithm to obtain an embedding result for each embedded object, the embedding result including embedding values in several dimensions; extracting, according to the extreme values of the embedding values, the embedded objects whose embedding values satisfy a salient condition in each dimension as salient training samples; for each dimension, training an interpretation model with the sample features and salient category labels of the salient training samples in that dimension; and determining, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
- A result interpretation method for graph embedding includes: using an embedding algorithm to embed graph nodes to obtain an embedding result for each graph node, the embedding result including embedding values in several dimensions; extracting, according to the extreme values of the embedding values, the graph nodes whose embedding values satisfy a salient condition in each dimension as salient training samples; for each dimension, training an interpretation model with the sample features and salient category labels of the salient training samples in that dimension; and determining, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
- A result interpretation method for word embedding includes: using an embedding algorithm to embed the vocabulary in each text to obtain a word embedding result for each text, the word embedding result including embedding values in several dimensions; extracting, according to the extreme values of the embedding values, the vocabulary whose embedding values satisfy a salient condition in each dimension as salient training samples; for each dimension, training an interpretation model with the sample features and salient category labels of the salient training samples in that dimension; and determining, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
- An apparatus for interpreting embedding results, including:
- an embedding processing unit that embeds the embedded objects with an embedding algorithm to obtain an embedding result for each embedded object, the embedding result including embedding values in several dimensions;
- a sample extraction unit that extracts, according to the extreme values of the embedding values, the embedded objects whose embedding values satisfy a salient condition in each dimension as salient training samples;
- a model training unit that, for each dimension, trains an interpretation model with the sample features and salient category labels of the salient training samples in that dimension;
- a feature interpretation unit that determines, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
- An apparatus for interpreting embedding results, including a processor and a memory for storing machine-executable instructions, wherein, by reading and executing the instructions, the processor is caused to: embed the embedded objects with an embedding algorithm to obtain an embedding result for each embedded object, the embedding result including embedding values in several dimensions; extract, according to the extreme values of the embedding values, the embedded objects whose embedding values satisfy a salient condition in each dimension as salient training samples; for each dimension, train an interpretation model with the sample features and salient category labels of the salient training samples in that dimension; and determine, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
- Through the above technical solutions, this specification can extract, based on the extreme values of the embedding values in the embedding results, the embedded objects whose embedding values satisfy a salient condition as salient training samples, and use the salient training samples to train an interpretation model. The interpretation features of the embedding result in the corresponding dimension are then determined from the trained interpretation model, realizing feature interpretation of the embedding result. This provides a basis for developers to repair deviations of the original prediction model, helps improve the generalization ability and performance of the original prediction model, and helps avoid legal risks and moral hazards.
- FIG. 1 is a schematic flowchart of an embedding result interpretation method shown in an exemplary embodiment of this specification.
- FIG. 2 is a schematic flowchart of another method for interpreting an embedded result shown in an exemplary embodiment of this specification.
- FIG. 3 is a schematic structural diagram of an apparatus for interpreting embedded results shown in an exemplary embodiment of the present specification.
- Fig. 4 is a block diagram of an apparatus for interpreting an embedded result shown in an exemplary embodiment of this specification.
- Although the terms first, second, third, etc. may be used to describe various information in this specification, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other.
- For example, without departing from the scope of this specification, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
- Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to a determination".
- An embedding algorithm can be used to embed an embedded object to obtain an embedding result that includes embedding values in several dimensions. Then, based on the extreme values of the embedding values, salient training samples can be extracted in each dimension and used to train an interpretation model, and the interpretation features of the salient training samples can be taken as the interpretation features of the embedding result in the corresponding dimension, realizing interpretation of the embedding result.
- FIG. 1 and FIG. 2 are flowcharts of a method for explaining an embedding result shown in an exemplary embodiment of this specification.
- The embedding algorithm may include a graph embedding (Graph Embedding) algorithm, which can map graph data to low-dimensional dense embedding results such as vectors and matrices. The embedding algorithm may also include a word embedding (Word Embedding) algorithm, which can map vocabulary to low-dimensional embedding results such as vectors and matrices.
- the method for interpreting the embedded result may include the following steps:
- Step S102: embed the embedded objects using an embedding algorithm to obtain an embedding result for each embedded object, the embedding result including embedding values in several dimensions.
- the embedded object may be a graph node in the graph structure.
- the embedded object may be a user node in a user network graph.
- the user network graph may be established based on the user's payment data, friend relationship data, and the like.
- the vector corresponding to each user node can be obtained.
- the embedded object may be text to be clustered, such as news, information, and the like.
- the embedding algorithm is used to embed the vocabulary included in each text, and the vector corresponding to each vocabulary in each text can be obtained, and the vector set corresponding to each text can be obtained.
- embedding results may include embedding values in several dimensions.
- each element of the vector can be regarded as one dimension, and each element value is an embedding value in the corresponding dimension.
- each element of the matrix may also be regarded as a dimension, and each element value is an embedding value in the corresponding dimension.
- each row or column of the matrix can also be regarded as a dimension.
- each row of the matrix can be regarded as a row vector, and then the sum of squares of each element in the row vector can be used as the embedded value in the corresponding dimension.
- the sum of the elements of the row vector or their mean value may also be used as the embedding value in the corresponding dimension, which is not particularly limited in this specification.
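The row-per-dimension options above can be sketched as follows. This is an illustrative helper, not a formula mandated by the specification; the reduction names are assumptions for the example.

```python
import numpy as np

def row_embedding_values(matrix, reduce="sum_of_squares"):
    """Collapse each row of an embedding matrix into one embedding value.

    Each row is treated as one dimension; the reduction (sum of squares,
    sum of elements, or mean of elements) follows the options described above.
    """
    m = np.asarray(matrix, dtype=float)
    if reduce == "sum_of_squares":
        return (m ** 2).sum(axis=1)
    if reduce == "sum":
        return m.sum(axis=1)
    if reduce == "mean":
        return m.mean(axis=1)
    raise ValueError(f"unknown reduction: {reduce}")

emb = [[1.0, 2.0], [3.0, 0.0]]
print(row_embedding_values(emb))                 # [5. 9.]
print(row_embedding_values(emb, reduce="mean"))  # [1.5 1.5]
```

Any of the reductions yields one scalar embedding value per row, so downstream extraction of salient samples works the same way as for plain vectors.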
- the embedding results of different embedding objects include embedding values of the same dimension.
- the embedded value is usually a value in the real number space, and is not interpretable.
- the resulting embedding result includes a 50-dimensional vector.
- the embedding result vector obtained after the embedding process has 50 elements.
- the extreme value of all embedding values can be obtained.
- the original prediction model may be trained using the embedding result of each embedded object, and after the training is completed, the original prediction model may output the extreme value of the embedding value in the embedding result.
- a storage bit may be added to the original prediction model to record the extreme value of the embedded value passing through the model network unit, and the extreme value may be output after the model is trained.
- the above-mentioned original prediction models may include: classification models, regression models, clustering models, and so on.
- the extreme value of the embedded value may also be obtained in other ways, which is not specifically limited in this specification.
- the extreme value may include a maximum value and a minimum value.
- Take as an example that the embedding result obtained with the embedding algorithm includes embedding values in 50 dimensions.
- Assuming 100 embedded objects, the maximum value e_max and the minimum value e_min of the 5000 embedding values (100 × 50) can be obtained.
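The running example (100 embedded objects, 50 dimensions) can be sketched as follows; the random embedding matrix is purely illustrative stand-in data.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
# Hypothetical embedding results: 100 embedded objects x 50 dimensions.
embeddings = rng.normal(size=(100, 50))

# Extreme values over all 5000 embedding values (100 x 50).
e_max = embeddings.max()
e_min = embeddings.min()
print(e_max > e_min)  # True
```

In practice e_max and e_min would come from the embedding results of real objects (or, as described above, from a storage bit added to the original prediction model).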
- Step S104: extract, according to the extreme values of the embedding values, the embedded objects whose embedding values satisfy the salient condition in each dimension as salient training samples.
- the saliency condition of the saliency training sample may be determined according to the extreme value of the embedding value, and then the embedding object whose embedding value meets the saliency condition in each dimension is extracted as the saliency training sample in the dimension.
- the extreme value includes a maximum value and a minimum value.
- The salient conditions may include a salient activation condition and a salient suppression condition.
- The salient training samples may include salient activation training samples and salient suppression training samples.
- The salient category label of a salient activation training sample is a salient activation label, and the salient category label of a salient suppression training sample is a salient suppression label.
- The salient activation condition is that the embedding value is greater than or equal to the difference between the maximum value and a preset change parameter, and at the same time less than or equal to the maximum value.
- Using Δ to represent the preset change parameter, the value range of an embedding value e_i that satisfies the salient activation condition is: e_max − Δ ≤ e_i ≤ e_max.
- The salient suppression condition is that the embedding value is greater than or equal to the minimum value, and at the same time less than or equal to the sum of the minimum value and the preset change parameter. That is, the value range of an embedding value e_i that satisfies the salient suppression condition is: e_min ≤ e_i ≤ e_min + Δ.
- an embedded object that satisfies the aforementioned significant activation condition may be referred to as a significant activation training sample, and an embedded object that satisfies the aforementioned significant suppression condition may be referred to as a significant suppression training sample.
- salient activation training samples and salient suppression training samples can be extracted.
- For the first dimension of the embedding result, it can be judged in sequence whether the first embedding value e_m1 of each embedded object processed in step S102 satisfies the above salient activation condition or salient suppression condition.
- If so, the embedded object is used as a salient training sample in the first dimension.
- Similarly, for the second dimension, it can be judged whether the second embedding value e_m2 in the embedding result of the m-th embedded object from step S102 satisfies the salient activation condition or the salient suppression condition; if so, the embedded object is extracted as a salient training sample in the second dimension. The other dimensions are handled in the same way.
- the same embedded object may be a significantly activated training sample in some dimensions, and may also be a significantly suppressed training sample in other dimensions.
- the embedded object m may be a significant activation training sample in the first dimension, a significant suppression training sample in the second dimension, and not a significant training sample in the third dimension.
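The per-dimension extraction just described can be sketched as follows. The embedding matrix and Δ here are made-up illustrative values, and the global extremes are computed from the matrix itself.

```python
import numpy as np

def salient_samples(embeddings, delta):
    """For each dimension, split objects into salient activation and
    salient suppression samples using the global extremes e_max, e_min."""
    e = np.asarray(embeddings, dtype=float)
    e_max, e_min = e.max(), e.min()
    activated = {}   # dimension -> indices of salient activation samples
    suppressed = {}  # dimension -> indices of salient suppression samples
    for d in range(e.shape[1]):
        col = e[:, d]
        activated[d] = np.nonzero((col >= e_max - delta) & (col <= e_max))[0].tolist()
        suppressed[d] = np.nonzero((col >= e_min) & (col <= e_min + delta))[0].tolist()
    return activated, suppressed

# Object 0 mirrors the example above: an activation sample in dimension 0,
# a suppression sample in dimension 1, and neither in dimension 2.
emb = [[0.9, -0.9, 0.0],
       [1.0, -1.0, 0.5]]
act, sup = salient_samples(emb, delta=0.2)
print(act[0], sup[1])  # [0, 1] [0, 1]
```

Note that the same object index can appear in the activation set of one dimension and the suppression set of another, exactly as the text describes.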
- Step S106: for each dimension, train the interpretation model with the salient training samples in that dimension.
- The interpretation model may be a binary classification model with good interpretability, such as a linear model or a decision tree, which is not particularly limited in this specification. It is worth noting that, since a multi-class classification model is a special form of binary classification model, the binary classification model mentioned above may include multi-class classification models.
- the interpretation model can be trained using the sample features and sample labels of the salient training samples.
- the sample label may be determined based on the previously trained prediction model.
- the sample features may include original features and topological features of the sample.
- the original feature is usually a feature already present in the sample itself.
- the original characteristics of the user node may include the user's age, gender, occupation, income, and so on.
- the original features of the text may include part of speech of the vocabulary, word frequency, and so on.
- the topological features can be used to represent the topological structure of the embedded object.
- The topological features may include: the number of first-order neighbors, the number of second-order neighbors, the average number of neighbors of the first-order neighbors, statistics of the first-order neighbors under a specified original feature dimension, and so on.
- For example, the statistics of the first-order neighbors under a specified original feature dimension may be the average age of the first-order neighbors, the maximum age of the first-order neighbors, the average annual income of the first-order neighbors, the minimum annual income of the first-order neighbors, and so on.
- the topological features may include: the vocabulary that appears most often in front of the vocabulary, the number of vocabularies that often appear in conjunction with the vocabulary, and so on.
- The topological features are used to supplement the original features.
- On the one hand, this can solve the problem that some samples have no original features; on the other hand, the topological structure of the samples can be added to the sample features, thereby improving the accuracy of the interpretation model's training results.
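A sketch of deriving the neighbor-based topological features named above from a hypothetical adjacency list and a per-node age attribute (the graph, names, and ages are invented for illustration):

```python
def topological_features(graph, ages, node):
    """Compute the neighbor-based features described above for one node.

    graph: dict mapping node -> set of first-order neighbors (assumed input).
    ages:  dict mapping node -> age, used as an example original feature.
    """
    first = graph[node]
    # Second-order neighbors: neighbors of neighbors, excluding the node
    # itself and its first-order neighbors.
    second = set()
    for n in first:
        second |= graph[n]
    second -= first | {node}
    neighbor_ages = [ages[n] for n in first]
    return {
        "num_first_order": len(first),
        "num_second_order": len(second),
        "avg_neighbors_of_neighbors": sum(len(graph[n]) for n in first) / len(first),
        "avg_first_order_age": sum(neighbor_ages) / len(neighbor_ages),
        "max_first_order_age": max(neighbor_ages),
    }

graph = {"a": {"b", "c"}, "b": {"a", "d"}, "c": {"a"}, "d": {"b"}}
ages = {"a": 30, "b": 20, "c": 40, "d": 25}
feats = topological_features(graph, ages, "a")
print(feats["num_first_order"], feats["num_second_order"])  # 2 1
```

These values would simply be appended to each sample's original feature vector before training the interpretation model.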
- After training, the weight of each sample feature in that dimension can be obtained.
- For example, in dimension 1 the weight of sample feature 1 is W11 and the weight of sample feature 2 is W12; in dimension 2 the weight of sample feature 1 is W21 and the weight of sample feature 2 is W22; and so on.
- Step S108: determine, based on the trained interpretation model, the interpretation features of the salient training samples, as the interpretation features of the embedding result in that dimension.
- the weight of each sample feature can be determined based on the interpreted model trained in each dimension, and according to the weight, several sample features that significantly affect the prediction result in the corresponding dimension can be determined as the interpreted features of the significant training sample.
- the interpretation feature of the significant training sample may also be determined as the interpretation feature of the embedding result in this dimension.
- For example, the sample features may be sorted in descending order of weight, and then the top-N sample features may be extracted as the interpretation features.
- The value of N can be set in advance; for example, N may equal 3 or 5, and this specification does not impose special restrictions.
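Given per-dimension feature weights from a trained linear interpretation model, the top-N selection above can be sketched as follows; the feature names and weight values are hypothetical, not taken from the specification.

```python
def top_n_interpretation_features(weights, n):
    """Sort sample features by weight, descending, and keep the top N.

    weights: dict mapping feature name -> weight learned by the
    interpretation model for one dimension (hypothetical values here).
    """
    ranked = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:n]]

# Hypothetical weights for one dimension of the embedding result.
dim1_weights = {"no_fixed_occupation": 0.9, "age_18_25": 0.7,
                "income_lt_80k": 0.5, "gender": 0.1}
print(top_n_interpretation_features(dim1_weights, 3))
# ['no_fixed_occupation', 'age_18_25', 'income_lt_80k']
```

The selected names then serve as the interpretation features of the embedding result in that dimension.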
- In summary, this specification can extract, based on the extreme values of the embedding values in the embedding results, the embedded objects whose embedding values satisfy a salient condition as salient training samples, and use the salient training samples to train an interpretation model. The interpretation features of the embedding result in the corresponding dimension are then determined from the trained interpretation model, realizing feature interpretation of the embedding result. This provides a basis for developers to repair deviations of the original prediction model, helps improve the generalization ability and performance of the original prediction model, and helps avoid legal risks and moral hazards.
- This specification also provides a method for interpreting the results of graph embedding.
- an embedding algorithm may be used to embed the graph nodes to obtain an embedding result of each graph node, and the embedding result includes embedding values in several dimensions.
- The graph nodes whose embedding values meet the salient conditions in each dimension can be extracted as salient training samples according to the extreme values of the embedding values. Then, for each dimension, the sample features and salient category labels of the salient training samples in that dimension are used to train the interpretation model, and the interpretation features by which the salient training samples belong to the salient category can be determined based on the trained interpretation model as the interpretation features of the embedding result in that dimension.
- For example, a user network graph may be constructed based on data such as user payment data and interaction data. For each user node in the user network graph, an embedding algorithm can be used to obtain the embedding result of the user node, such as a vector.
- user nodes whose embedding value meets the salient condition in each dimension can be extracted as a salient training sample.
- Then, for each dimension, the sample features and salient category labels of the salient training samples in that dimension can be used to train the interpretation model, and the interpretation features of the embedding result in that dimension can be determined based on the trained interpretation model.
- the interpretation characteristics of the embedding result in dimension 1 may include: no fixed occupation, annual income less than 80,000, resident place in Guangxi, age 18-25 years old, etc.
- the interpretation characteristics of the embedding result under dimension 2 may include: no fixed occupation, annual income less than 100,000, place of residence in Yunnan, age 20-28 years old, SSID using Wi-Fi network is 12345, etc.
- This specification also provides a method for interpreting the results of word embedding.
- an embedding algorithm may be used to embed words in the text to obtain a word embedding result corresponding to each text, and the word embedding result includes embedding values in several dimensions.
- The vocabulary whose embedding values meet the salient condition in each dimension can be extracted as salient training samples. Then, for each dimension, the sample features and salient category labels of the salient training samples in that dimension are used to train the interpretation model, and the interpretation features by which the salient training samples belong to the salient category can be determined based on the trained interpretation model as the interpretation features of the embedding result in that dimension.
- the interpretation characteristics of the embedded result in dimension 1 may include: computer, artificial intelligence, technology, innovation, the word frequency of technology is greater than 0.01, etc.
- the interpretation characteristics of the embedded result under dimension 2 may include: football, basketball, sports, swimming, recording, etc.
- The word embedding result corresponding to a text may be a concatenation of the embedding results of each vocabulary item included in the text, or the element-wise average of those embedding results; this specification does not impose special restrictions.
- the salient training samples can also be extracted in units of text, which is not particularly limited in this specification.
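The two text-level aggregation options mentioned above, concatenation of the per-word embeddings or their element-wise average, can be sketched as follows; the two-word, two-dimensional vectors are invented for illustration.

```python
import numpy as np

def text_embedding(word_vectors, mode="average"):
    """Combine per-word embedding vectors into one text-level result."""
    vecs = [np.asarray(v, dtype=float) for v in word_vectors]
    if mode == "average":
        return np.mean(vecs, axis=0)       # element-wise average
    if mode == "concatenate":
        return np.concatenate(vecs)        # concatenation ("mosaic")
    raise ValueError(f"unknown mode: {mode}")

words = [[1.0, 2.0], [3.0, 4.0]]
print(text_embedding(words))                      # [2. 3.]
print(text_embedding(words, mode="concatenate"))  # [1. 2. 3. 4.]
```

Averaging keeps the dimensionality fixed regardless of text length, while concatenation preserves per-word detail at the cost of a length-dependent result.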
- this specification also provides an embodiment of the embedding result interpretation device.
- the embodiment of the apparatus for interpreting embedded results in this specification can be applied to a server.
- The device embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking software implementation as an example, a logical device is formed by the processor of the server where it is located reading the corresponding computer program instructions from non-volatile memory into memory and running them. At the hardware level, Figure 3 shows a hardware structure diagram of the server where the interpretation apparatus is located. In addition to the processor, memory, network interface, and non-volatile memory shown in Figure 3, the server where the device is located usually includes other hardware according to its actual functions, which will not be repeated here.
- Fig. 4 is a block diagram of an apparatus for interpreting an embedded result shown in an exemplary embodiment of this specification.
- the apparatus 300 for interpreting embedded results may be applied to the server shown in FIG. 3, including: an embedding processing unit 301, a sample extraction unit 302, a model training unit 303 and a feature interpretation unit 304.
- the embedding processing unit 301 uses an embedding algorithm to embed the embedding objects to obtain embedding results of each embedding object, and the embedding results include embedding values of several dimensions;
- the sample extraction unit 302 extracts embedded objects whose embedding values satisfy the salient conditions in each dimension according to the extreme values of the embedding values as salient training samples;
- the model training unit 303, for each dimension, uses the sample features and salient category labels of the salient training samples in that dimension to train the interpretation model;
- the feature interpretation unit 304 determines the interpretation feature of the salient training sample belonging to the salient category based on the trained interpretation model as the interpretation feature of the embedding result in this dimension.
- the extreme value includes: a maximum value and a minimum value
- the significant conditions include: significant activation conditions and significant inhibition conditions;
- the salient category tags correspond to the salient conditions, including salient activation tags and salient suppression tags;
- the determination process of the salient conditions includes:
- the salient activation condition is determined as: the embedding value is greater than or equal to the difference between the maximum value and a preset change parameter, and less than or equal to the maximum value;
- the salient suppression condition is determined as: the embedding value is greater than or equal to the minimum value, and less than or equal to the sum of the minimum value and the preset change parameter.
- the feature interpretation unit 304 sorts the sample features in descending order of weight based on the trained interpretation model, and extracts the top-N sample features as the interpretation features by which the salient training samples belong to the salient category, where N is a natural number greater than or equal to 1.
- sample features include: original features and topological features.
- the topological features include one or more of the following:
- the interpretation model is a binary classification model.
- the relevant parts can be referred to the description of the method embodiments.
- The device embodiments described above are only schematic. The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network elements. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solution in this specification. Those of ordinary skill in the art can understand and implement them without creative labor.
- the system, device, module or unit explained in the above embodiments may be specifically implemented by a computer chip or entity, or implemented by a product with a certain function.
- A typical implementation device is a computer, and the specific form of the computer may be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email sending and receiving device, a game console, a tablet computer, a wearable device, or any combination of these devices.
- this specification also provides an apparatus for interpreting embedded results, which includes a processor and a memory for storing machine-executable instructions.
- the processor and the memory are usually connected to each other via an internal bus.
- the device may also include an external interface to be able to communicate with other devices or components.
- By reading and executing the machine-executable instructions, the processor is caused to: embed the embedded objects with an embedding algorithm to obtain an embedding result for each embedded object, the embedding result including embedding values in several dimensions; extract, according to the extreme values of the embedding values, the embedded objects whose embedding values satisfy a salient condition in each dimension as salient training samples; for each dimension, train an interpretation model with the sample features and salient category labels of the salient training samples in that dimension; and determine, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
- the extreme value includes: a maximum value and a minimum value
- the significant conditions include: significant activation conditions and significant inhibition conditions;
- the salient category tags correspond to the salient conditions, including salient activation tags and salient suppression tags;
- the determination process of the salient conditions includes:
- the salient activation condition is determined as: the embedding value is greater than or equal to the difference between the maximum value and a preset change parameter, and less than or equal to the maximum value;
- the salient suppression condition is determined as: the embedding value is greater than or equal to the minimum value, and less than or equal to the sum of the minimum value and the preset change parameter.
- By executing the instructions, the processor is further caused to: sort the sample features in descending order of weight based on the trained interpretation model, and extract the top-N sample features as the interpretation features by which the salient training samples belong to the salient category, where N is a natural number greater than or equal to 1.
- sample features include: original features and topological features.
- the topological features include one or more of the following:
- the interpretation model is a binary classification model.
- this specification also provides a computer-readable storage medium that stores a computer program on the computer-readable storage medium, and the program implements the following steps when executed by the processor:
- embedding the embedded objects with an embedding algorithm to obtain an embedding result for each embedded object, the embedding result including embedding values in several dimensions;
- extracting, according to the extreme values of the embedding values, the embedded objects whose embedding values satisfy a salient condition in each dimension as salient training samples;
- for each dimension, training an interpretation model with the sample features and salient category labels of the salient training samples in that dimension;
- determining, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
- the extreme value includes: a maximum value and a minimum value
- the significant conditions include: significant activation conditions and significant inhibition conditions;
- the salient category tags correspond to the salient conditions, including salient activation tags and salient suppression tags;
- the determination process of the salient conditions includes:
- the salient activation condition is determined as: the embedding value is greater than or equal to the difference between the maximum value and a preset change parameter, and less than or equal to the maximum value;
- the salient suppression condition is determined as: the embedding value is greater than or equal to the minimum value, and less than or equal to the sum of the minimum value and the preset change parameter.
- The determining, based on the trained interpretation model, of the interpretation features by which the salient training samples belong to the salient category includes: sorting the sample features in descending order of weight, and extracting the top-N sample features as the interpretation features, where N is a natural number greater than or equal to 1.
- sample features include: original features and topological features.
- the topological features include one or more of the following:
- the interpretation model is a binary classification model.
Abstract
Disclosed are an interpretation method and apparatus for an embedding result. The method comprises: using an embedding algorithm to carry out embedding processing on embedded objects, and obtaining an embedding result of each embedded object, wherein the embedding result comprises embedding values under several dimensions (S102); according to an extreme value of the embedding values, extracting an embedded object of which an embedding value in each dimension satisfies a significant condition, and taking same as a significant training sample (S104); for each dimension, using a sample feature and significant category tag of the significant training sample under the dimension to train an interpretation model (S106); and based on the trained interpretation model, determining an interpretation feature of the significant training sample belonging to a significant category, and taking same as an interpretation feature of the embedding result under the dimension (S108).
Description
This specification relates to the field of machine learning technology, and in particular to a method and apparatus for interpreting embedding results.
Embedding, in mathematics, denotes a mapping that maps one space into another while preserving basic properties. An embedding algorithm can transform complex, hard-to-express features into easily computable forms, such as vectors and matrices, that are convenient for a prediction model to process. However, embedding algorithms are not interpretable and cannot meet the needs of many business scenarios.
Summary of the Invention
In view of this, this specification provides a method and apparatus for interpreting an embedding result.
Specifically, this specification is implemented through the following technical solutions:
A method for interpreting an embedding result, including:
performing embedding processing on embedded objects by using an embedding algorithm to obtain an embedding result for each embedded object, the embedding result including embedding values in several dimensions;
extracting, according to extreme values of the embedding values, the embedded objects whose embedding values in each dimension satisfy a salient condition, as salient training samples;
for each dimension, training an interpretation model by using the sample features and salient category labels of the salient training samples in that dimension; and
determining, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
A method for interpreting a graph embedding result, including:
performing embedding processing on graph nodes by using an embedding algorithm to obtain an embedding result for each graph node, the embedding result including embedding values in several dimensions;
extracting, according to extreme values of the embedding values, the graph nodes whose embedding values in each dimension satisfy a salient condition, as salient training samples;
for each dimension, training an interpretation model by using the sample features and salient category labels of the salient training samples in that dimension; and
determining, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
A method for interpreting a word embedding result, including:
performing embedding processing on the words in texts by using an embedding algorithm to obtain a word embedding result corresponding to each text, the word embedding result including embedding values in several dimensions;
extracting, according to extreme values of the embedding values, the words whose embedding values in each dimension satisfy a salient condition, as salient training samples;
for each dimension, training an interpretation model by using the sample features and salient category labels of the salient training samples in that dimension; and
determining, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
An apparatus for interpreting an embedding result, including:
an embedding processing unit that performs embedding processing on embedded objects by using an embedding algorithm to obtain an embedding result for each embedded object, the embedding result including embedding values in several dimensions;
a sample extraction unit that extracts, according to extreme values of the embedding values, the embedded objects whose embedding values in each dimension satisfy a salient condition, as salient training samples;
a model training unit that, for each dimension, trains an interpretation model by using the sample features and salient category labels of the salient training samples in that dimension; and
a feature interpretation unit that determines, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
An apparatus for interpreting an embedding result, including:
a processor; and
a memory for storing machine-executable instructions;
wherein, by reading and executing the machine-executable instructions stored in the memory that correspond to the interpretation logic for embedding results, the processor is caused to:
perform embedding processing on embedded objects by using an embedding algorithm to obtain an embedding result for each embedded object, the embedding result including embedding values in several dimensions;
extract, according to extreme values of the embedding values, the embedded objects whose embedding values in each dimension satisfy a salient condition, as salient training samples;
for each dimension, train an interpretation model by using the sample features and salient category labels of the salient training samples in that dimension; and
determine, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
As can be seen from the above description, this specification can extract, for each dimension of the embedding results and based on the extreme values of the embedding values, the embedded objects whose embedding values satisfy a salient condition as salient training samples, train an interpretable interpretation model with these salient training samples, and then determine the interpretation features of the embedding result in the corresponding dimension from the trained interpretation model. This realizes feature-level interpretation of embedding results, provides developers with a basis for repairing deviations of the original prediction model, helps improve the generalization ability and performance of the original prediction model, and helps avoid legal and moral risks.
FIG. 1 is a schematic flowchart of a method for interpreting an embedding result shown in an exemplary embodiment of this specification.
FIG. 2 is a schematic flowchart of another method for interpreting an embedding result shown in an exemplary embodiment of this specification.
FIG. 3 is a schematic structural diagram of an apparatus for interpreting an embedding result shown in an exemplary embodiment of this specification.
FIG. 4 is a block diagram of an apparatus for interpreting an embedding result shown in an exemplary embodiment of this specification.
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this specification; rather, they are merely examples of apparatuses and methods consistent with some aspects of this specification as detailed in the appended claims.
The terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit this specification. The singular forms "a", "said", and "the" used in this specification and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of this specification, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while", or "in response to determining".
This specification provides a solution for interpreting embedding results. An embedding algorithm is first used to embed the embedded objects, obtaining embedding results that include embedding values in several dimensions. Then, based on the extreme values of the embedding values, salient training samples are extracted for each dimension and used to train an interpretation model; the interpretation features of the salient training samples are taken as the interpretation features of the embedding result in the corresponding dimension, thereby realizing interpretation of the embedding result.
FIG. 1 and FIG. 2 are schematic flowcharts of methods for interpreting an embedding result shown in an exemplary embodiment of this specification.
The embedding algorithm may include a graph embedding algorithm, which maps graph data to low-dimensional dense embedding results such as vectors and matrices. The embedding algorithm may also include a word embedding algorithm, which maps words to low-dimensional embedding results such as vectors and matrices.
Referring to FIG. 1 and FIG. 2, the method for interpreting an embedding result may include the following steps:
Step S102: performing embedding processing on the embedded objects by using an embedding algorithm to obtain an embedding result for each embedded object, the embedding result including embedding values in several dimensions.
In one example, the embedded object may be a graph node in a graph structure.
For example, the embedded object may be a user node in a user network graph. The user network graph may be built based on the user's payment data, friend relationship data, and the like.
After the embedding algorithm is used to embed the user nodes in the user network graph, a vector corresponding to each user node can be obtained.
In another example, the embedded object may be text to be clustered, such as news or information articles.
The embedding algorithm embeds the words included in each text, yielding a vector for each word in the text and thus a vector set corresponding to each text.
In this embodiment, for ease of description, the vectors, matrices, and the like obtained by processing embedded objects with the embedding algorithm are collectively referred to as embedding results. An embedding result may include embedding values in several dimensions.
When the embedding result is a vector, each element of the vector can be regarded as one dimension, and each element value is the embedding value in the corresponding dimension.
When the embedding result is a matrix, each element of the matrix may likewise be regarded as one dimension, with each element value being the embedding value in the corresponding dimension.
When the embedding result is a matrix, each row or column of the matrix may also be regarded as one dimension. Taking rows as an example, each row of the matrix can be treated as a row vector, and the sum of squares of the elements in the row vector can be used as the embedding value in the corresponding dimension. Of course, in other examples, the sum of the elements or the mean of the elements of the row vector may also be used as the embedding value in the corresponding dimension; this specification imposes no special limitation in this regard.
In this embodiment, after the embedding algorithm is used to embed each embedded object, the embedding results of different embedded objects include embedding values in the same dimensions. These embedding values are usually values in the real-number space and are not interpretable by themselves.
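The row-wise reduction described above can be sketched as follows. This is an illustrative, non-limiting example in Python; the function name and the toy matrix are hypothetical, and only the sum-of-squares variant is shown.

```python
# Hypothetical sketch: reducing a matrix-valued embedding result to
# per-dimension embedding values by treating each row as one dimension
# and taking the sum of squares of its elements, as described above.

def row_embedding_values(matrix):
    """Return one embedding value per row: the sum of squared elements."""
    return [sum(e * e for e in row) for row in matrix]

# A toy 3x2 embedding matrix for one embedded object.
embedding_matrix = [
    [1.0, 2.0],
    [0.5, 0.5],
    [3.0, 0.0],
]

values = row_embedding_values(embedding_matrix)
# Each entry of `values` is the embedding value of one dimension.
```

The element-sum or element-mean variants mentioned above would replace the sum-of-squares expression accordingly.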
For example, assume that there are 100 embedded objects, and embedding them with the embedding algorithm yields embedding results that are 50-dimensional vectors. In other words, each embedding result vector obtained by the embedding process has 50 elements. In this example, the embedding result vector of the m-th embedded object can be denoted as E_m = {e_m1, e_m2, ..., e_m50}.
In this embodiment, after the embedding result of each embedded object is obtained, the extreme values among all embedding values can be determined.
In one example, the embedding results of the embedded objects may be used to train an original prediction model, and after training, the original prediction model may output the extreme values of the embedding values in the embedding results.
For example, storage bits may be added to the original prediction model to record the extreme values of the embedding values passing through the model's network units, and these extreme values may be output once the model has been trained.
The above original prediction model may include a classification model, a regression model, a clustering model, and so on.
In other examples, the extreme values of the embedding values may also be obtained in other ways; this specification imposes no special limitation in this regard.
In this embodiment, the extreme values may include a maximum value and a minimum value. Continuing the example of 100 embedded objects whose embedding results include embedding values in 50 dimensions, after the original prediction model is trained in this step, the maximum value e_max and the minimum value e_min among the 5000 embedding values (100 × 50) can be obtained.
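As a minimal sketch of this step (independent of how the original prediction model records them), the global extreme values over all embedding values can be computed directly; the data below is hypothetical.

```python
# Minimal sketch: finding the global maximum e_max and minimum e_min
# over all embedding values of all embedded objects.

embeddings = [
    [0.9, -0.2, 0.1],   # embedding result of object 1
    [-0.7, 0.4, 0.0],   # embedding result of object 2
    [0.3, -0.9, 0.8],   # embedding result of object 3
]

# Flatten every dimension of every embedding result into one pool of values.
all_values = [v for result in embeddings for v in result]
e_max = max(all_values)
e_min = min(all_values)
```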
Step S104: extracting, according to the extreme values of the embedding values, the embedded objects whose embedding values in each dimension satisfy a salient condition, as salient training samples.
In this embodiment, the salient condition for salient training samples may first be determined according to the extreme values of the embedding values, and then the embedded objects whose embedding values in each dimension satisfy the salient condition may be extracted as the salient training samples in that dimension.
In this embodiment, the extreme values include a maximum value and a minimum value. Corresponding to the extreme values, the salient condition may include a significant activation condition and a significant suppression condition, and the salient training samples may include significant activation training samples and significant suppression training samples. The salient category label of a significant activation training sample is a significant activation label, and the salient category label of a significant suppression training sample is a significant suppression label.
The significant activation condition is that the embedding value is greater than or equal to the difference between the maximum value and a preset change parameter, and at the same time less than or equal to the maximum value. Using δ to denote the preset change parameter, the range of an embedding value e_i that satisfies the significant activation condition is: e_max - δ ≤ e_i ≤ e_max.
The significant suppression condition is that the embedding value is greater than or equal to the above minimum value, and at the same time less than or equal to the sum of the minimum value and the preset change parameter. That is, the range of an embedding value e_i that satisfies the significant suppression condition is: e_min ≤ e_i ≤ e_min + δ.
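The two conditions can be written as simple predicates; this is an illustrative sketch with hypothetical function names, directly mirroring the two ranges above.

```python
# Sketch of the two salient conditions, with preset change parameter delta:
#   significant activation:  e_max - delta <= e_i <= e_max
#   significant suppression: e_min <= e_i <= e_min + delta

def is_significantly_activated(e_i, e_max, delta):
    return e_max - delta <= e_i <= e_max

def is_significantly_suppressed(e_i, e_min, delta):
    return e_min <= e_i <= e_min + delta
```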
In this embodiment, an embedded object that satisfies the significant activation condition may be called a significant activation training sample, and an embedded object that satisfies the significant suppression condition may be called a significant suppression training sample.
In this embodiment, after the significant activation condition and the significant suppression condition are determined, significant activation training samples and significant suppression training samples can be extracted for each dimension of the embedding results.
Taking the first dimension of the embedding results as an example, it can be judged in turn whether the first embedding value of each embedded object's embedding result satisfies the significant suppression condition or the significant activation condition; if so, the embedded object can be extracted as a salient training sample in the first dimension.
For example, referring to the m-th embedded object in the foregoing step S102, in this step it can be judged whether the first embedding value e_m1 of this object's embedding result satisfies the significant activation condition or the significant suppression condition. If it satisfies the significant activation condition, the embedded object can be extracted as a significant activation training sample in the first dimension; if it satisfies the significant suppression condition, the embedded object can be extracted as a significant suppression training sample in the first dimension; if it satisfies neither, the embedded object cannot serve as a salient training sample in the first dimension.
Similarly, for the second dimension of the embedding results, it can be judged in turn whether the second embedding value of each embedded object's embedding result satisfies the significant suppression condition or the significant activation condition; if either is satisfied, the embedded object can be extracted as a salient training sample in the second dimension.
For example, it is judged whether the second embedding value e_m2 of the embedding result of the m-th embedded object in the foregoing step S102 satisfies the significant suppression condition or the significant activation condition, and so on.
In this embodiment, the same embedded object may be a significant activation training sample in some dimensions and, at the same time, a significant suppression training sample in other dimensions.
For example, the embedded object m may be a significant activation training sample in the first dimension and a significant suppression training sample in the second dimension, while not being a salient training sample in the third dimension, and so on.
In this embodiment, based on this step, the extraction of salient training samples can be completed for each dimension.
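The per-dimension extraction described in step S104 can be sketched end to end as follows. All object names, values, and the choice of δ are hypothetical, chosen only to illustrate that one object can be activating in one dimension and suppressing in another.

```python
# Hypothetical sketch of step S104: for every dimension, collect the embedded
# objects whose embedding value in that dimension satisfies the significant
# activation condition or the significant suppression condition.

embeddings = {
    "obj_1": [0.98, -0.95, 0.10],
    "obj_2": [-0.90, 0.99, 0.20],
    "obj_3": [0.10, 0.20, 0.97],
}
delta = 0.05  # preset change parameter

all_values = [v for vec in embeddings.values() for v in vec]
e_max, e_min = max(all_values), min(all_values)

n_dims = 3
activated = {d: [] for d in range(n_dims)}   # significant activation samples
suppressed = {d: [] for d in range(n_dims)}  # significant suppression samples

for obj, vec in embeddings.items():
    for d, e in enumerate(vec):
        if e_max - delta <= e <= e_max:
            activated[d].append(obj)
        elif e_min <= e <= e_min + delta:
            suppressed[d].append(obj)
```

Note that obj_1 ends up as a significant activation sample in dimension 0 and a significant suppression sample in dimension 1, matching the observation above that one object can play both roles in different dimensions.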
Step S106: for each dimension, training an interpretation model by using the salient training samples in that dimension.
In this embodiment, the interpretation model may be a binary classification model with good interpretability, such as a linear model or a decision tree; this specification imposes no special limitation in this regard. It is worth noting that, since a multi-class classification model is a special form of binary classification model, the above binary classification model may include multi-class classification models.
In this embodiment, the interpretation model may be trained using the sample features and sample labels of the salient training samples.
The sample labels may be determined based on the aforementioned trained prediction model.
The sample features may include original features and topological features of the samples.
The original features are usually features the sample itself already has.
For example, the original features of a user node may include the user's age, gender, occupation, income, and so on.
As another example, the original features of a text may include the part of speech and frequency of its words.
The topological features can be used to represent the topological structure of the embedded object.
Taking graph nodes as embedded objects, the topological features may include: the number of first-order neighbors, the number of second-order neighbors, the average number of neighbors of the first-order neighbors, statistics of the first-order neighbors under a specified original feature dimension, and so on.
Taking risk-gang identification as an example, the statistics of the first-order neighbors under a specified original feature dimension may be the average age of the first-order neighbors, the maximum age of the first-order neighbors, the average annual income of the first-order neighbors, the minimum annual income of the first-order neighbors, and so on.
Taking words included in a text as embedded objects, the topological features may include: the word that most often appears before the word in question, the number of words that often co-occur with it, and so on.
In this embodiment, topological features supplement the original features. On the one hand, this solves the problem that some samples have no original features; on the other hand, it incorporates the topological structure of the samples into the sample features, thereby improving the accuracy of the interpretation model's training results.
In this embodiment, for each dimension, after the interpretation model has been trained, the weight of each sample feature in that dimension can be obtained.
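The graph-node topological features mentioned above can be derived from an adjacency list; the sketch below is purely illustrative (the graph, ages, and feature names are all hypothetical) and computes the first-order neighbor count, second-order neighbor count, and one first-order-neighbor statistic (average age).

```python
# Illustrative sketch: deriving simple topological features for a graph node
# from an adjacency list, plus a statistic of its first-order neighbors under
# an original feature dimension (here, average age).

graph = {
    "u1": ["u2", "u3"],
    "u2": ["u1", "u4"],
    "u3": ["u1"],
    "u4": ["u2"],
}
ages = {"u1": 25, "u2": 30, "u3": 20, "u4": 40}

def topological_features(node):
    first = set(graph[node])
    # Second-order neighbors: neighbors of neighbors, excluding first-order
    # neighbors and the node itself.
    second = {n for f in first for n in graph[f]} - first - {node}
    avg_neighbor_age = sum(ages[n] for n in first) / len(first)
    return {
        "n_first_order": len(first),
        "n_second_order": len(second),
        "avg_first_order_age": avg_neighbor_age,
    }

feats = topological_features("u1")
```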
Table 1

|             | Sample feature 1 | Sample feature 2 | ... |
|-------------|------------------|------------------|-----|
| Dimension 1 | W11              | W12              | ... |
| Dimension 2 | W21              | W22              | ... |
| ...         | ...              | ...              | ... |
Referring to the example in Table 1: in dimension 1, the weight of sample feature 1 is W11, the weight of sample feature 2 is W12, and so on; in dimension 2, the weight of sample feature 1 is W21, the weight of sample feature 2 is W22, and so on.
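Training such a linear interpretation model for one dimension can be sketched in pure Python; this is a hand-rolled logistic regression on hypothetical toy data, not the specification's required implementation, shown only to illustrate how per-feature weights arise from step S106.

```python
# Minimal sketch of step S106: training a simple logistic-regression
# interpretation model for one dimension from sample features and salient
# category labels, then reading the per-feature weights.

import math

def train_logreg(X, y, lr=0.5, epochs=500):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            g = p - yi                       # gradient of the log loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

# Toy samples: feature 0 is informative, feature 1 is noise.
X = [[1.0, 0.2], [0.9, 0.8], [0.1, 0.1], [0.0, 0.9]]
y = [1, 1, 0, 0]  # 1 = salient category (e.g. significant activation)

weights, bias = train_logreg(X, y)
# |weights[0]| dominates, marking feature 0 as the influential feature.
```

In practice a library implementation (and any interpretable binary classifier) would serve equally well; the point is only that the trained model exposes one weight per sample feature, as in Table 1.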
Step S108: determining, based on the trained interpretation model, the interpretation features of the salient training samples, as the interpretation features of the embedding result in that dimension.
Based on the foregoing step S106, the weight of each sample feature can be determined from the interpretation model trained in each dimension. According to these weights, the several sample features that significantly influence the prediction result in the corresponding dimension can be determined as the interpretation features of the salient training samples. In this embodiment, the interpretation features of the salient training samples may also be determined as the interpretation features of the embedding result in that dimension.
For example, the sample features may be sorted by weight in descending order, and the sample features ranked in the top N may be extracted as the interpretation features. The value of N can be preset; N may equal 3, 5, etc., and this specification imposes no special limitation in this regard.
Continuing with the example in Table 1, assuming that in dimension 1, W11 > W12 > W13 > Wi and N is 3, the interpretation features of the embedding result in dimension 1 can be determined as feature 1, feature 2, and feature 3.
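The top-N selection of step S108 is a one-liner over the learned weights; the feature names and weight values below are illustrative placeholders.

```python
# Sketch of step S108: ranking sample features by learned weight (descending)
# and taking the top N as the interpretation features of the embedding result
# in one dimension.

feature_weights = {
    "feature_1": 0.92,
    "feature_2": 0.75,
    "feature_3": 0.51,
    "feature_4": 0.07,
}

def top_n_features(weights, n):
    ranked = sorted(weights, key=weights.get, reverse=True)
    return ranked[:n]

interpretation_features = top_n_features(feature_weights, 3)
```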
As can be seen from the above description, this specification can extract, for each dimension of the embedding results and based on the extreme values of the embedding values, the embedded objects whose embedding values satisfy a salient condition as salient training samples, train an interpretable interpretation model with these salient training samples, and then determine the interpretation features of the embedding result in the corresponding dimension from the trained interpretation model. This realizes feature-level interpretation of embedding results, provides developers with a basis for repairing deviations of the original prediction model, helps improve the generalization ability and performance of the original prediction model, and helps avoid legal and moral risks.
本说明书还提供一种图嵌入的结果解释方法。This specification also provides a method for interpreting the results of graph embedding.
一方面,可采用嵌入算法对图节点进行嵌入处理,得到每个图节点的嵌入结果,所述嵌入结果包括若干维度的嵌入值。On the one hand, an embedding algorithm may be used to embed the graph nodes to obtain an embedding result of each graph node, and the embedding result includes embedding values in several dimensions.
另一方面,可根据所述嵌入值的极值提取各维度下嵌入值满足显著条件的图节点作为显著训练样本,然后针对每个维度,采用该维度下的显著训练样本的样本特征和显著类别标签对解释模型进行训练,并可基于已训练的解释模型确定所述显著训练样本属于所述显著类别的解释特征,作为所述嵌入结果在该维度下的解释特征。On the other hand, the graph nodes whose embedding values meet the salient conditions in each dimension can be extracted as salient training samples according to the extreme values of the embedding values, and then for each dimension, the sample features and salient categories of the salient training samples in that dimension are used The label trains the interpretation model, and can determine the interpretation feature of the salient training sample belonging to the salient category based on the trained interpretation model as the interpretation feature of the embedding result in this dimension.
Taking a user network graph as an example, in this embodiment the user network graph may be constructed from data such as users' payment data and interaction data. For each user node in the graph, an embedding algorithm may be applied to obtain that node's embedding result, for example a vector.

According to the extreme values of the embedding values, the user nodes whose embedding values in each dimension satisfy the salience condition may be extracted as salient training samples.

For each dimension of each embedding result, the interpretation model may be trained on the sample features and salient category labels of the salient training samples in that dimension, and the interpretation features of the embedding result in that dimension may then be determined based on the trained interpretation model.

For example, the interpretation features of an embedding result in dimension 1 may include: no fixed occupation, annual income below 80,000, permanent residence in Guangxi, age 18-25, and so on.

As another example, the interpretation features of the embedding result in dimension 2 may include: no fixed occupation, annual income below 100,000, permanent residence in Yunnan, age 20-28, connecting to a Wi-Fi network whose SSID is 12345, and so on.
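As one possible reading of the sampling step just described, the per-dimension salient training set might be assembled as follows; the function and variable names are illustrative assumptions of ours, with `delta` standing for the preset change parameter that defines the salience conditions:

```python
def build_dimension_dataset(embeddings, features, dim, delta):
    """For one embedding dimension, collect the salient training samples:
    label 1 for significant activation (embedding value near the maximum),
    label 0 for significant suppression (embedding value near the minimum)."""
    col = [e[dim] for e in embeddings]        # embedding values of this dimension
    vmax, vmin = max(col), min(col)           # extreme values
    X, y = [], []
    for feat, v in zip(features, col):
        if v >= vmax - delta:                 # significant-activation condition
            X.append(feat)
            y.append(1)
        elif v <= vmin + delta:               # significant-suppression condition
            X.append(feat)
            y.append(0)
    return X, y

# Three nodes with one-dimensional embeddings and hypothetical feature rows.
X, y = build_dimension_dataset([[0.9], [0.1], [-0.9]], ["f1", "f2", "f3"], 0, 0.2)
```

A binary interpretation model for this dimension can then be trained on `(X, y)`.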
This specification also provides a method for interpreting the results of word embedding.

On the one hand, an embedding algorithm may be used to embed the words in a text, obtaining a word embedding result for each text; the word embedding result comprises embedding values in several dimensions.

On the other hand, the words whose embedding values in each dimension satisfy the salience condition may be extracted as salient training samples according to the extreme values of the embedding values. Then, for each dimension, the interpretation model is trained on the sample features and salient category labels of the salient training samples in that dimension, and the interpretation features by which the salient training samples belong to the salient category may be determined based on the trained interpretation model, serving as the interpretation features of the embedding result in that dimension.

For example, the interpretation features of an embedding result in dimension 1 may include: computer, artificial intelligence, technology, innovation, a term frequency of "technology" greater than 0.01, and so on.

As another example, the interpretation features of the embedding result in dimension 2 may include: football, basketball, sports, swimming, records, and so on.
It should be noted that, since a text usually contains several words, the word embedding result corresponding to the text may be the concatenation of the embedding results of all the words it contains, or the element-wise average of those embedding results, and so on; this specification places no particular restriction on this.
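The two aggregation options just mentioned can be sketched as follows (a minimal illustration with list-based vectors; the function name and `mode` parameter are assumptions of ours):

```python
def text_embedding(word_vectors, mode="average"):
    """Aggregate per-word embedding results into one text-level result,
    either by concatenation or by element-wise averaging."""
    if mode == "concat":
        # concatenation: the text embedding's dimensionality adds up
        return [x for vec in word_vectors for x in vec]
    # element-wise average: the text embedding keeps the word dimensionality
    n = len(word_vectors)
    return [sum(col) / n for col in zip(*word_vectors)]
```

For two word vectors `[1, 2]` and `[3, 4]`, averaging yields a two-dimensional text embedding while concatenation yields a four-dimensional one.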
When extracting salient training samples, if the embedding result of a text has the same number of dimensions as the embedding result of a word, the salient training samples may also be extracted at the granularity of texts; this specification places no particular restriction on this.
Corresponding to the foregoing embodiments of the method for interpreting embedding results, this specification also provides embodiments of an apparatus for interpreting embedding results.

The apparatus embodiments of this specification may be applied on a server. They may be implemented in software, in hardware, or in a combination of the two. Taking software implementation as an example, the apparatus, as a logical entity, is formed by the processor of the server on which it resides reading the corresponding computer program instructions from non-volatile storage into memory and running them. At the hardware level, Fig. 3 shows a hardware structure diagram of the server on which the apparatus resides; besides the processor, memory, network interface, and non-volatile storage shown in Fig. 3, the server may also include other hardware according to its actual functions, which is not elaborated here.

Fig. 4 is a block diagram of an apparatus for interpreting embedding results according to an exemplary embodiment of this specification.

Referring to Fig. 4, the apparatus 300 for interpreting embedding results may be applied in the server shown in Fig. 3 and comprises: an embedding processing unit 301, a sample extraction unit 302, a model training unit 303, and a feature interpretation unit 304.
The embedding processing unit 301 embeds the embedding objects using an embedding algorithm to obtain an embedding result for each embedding object, the embedding result comprising embedding values in several dimensions.

The sample extraction unit 302 extracts, according to the extreme values of the embedding values, the embedding objects whose embedding values in each dimension satisfy the salience condition as salient training samples.

The model training unit 303 trains, for each dimension, the interpretation model on the sample features and salient category labels of the salient training samples in that dimension.

The feature interpretation unit 304 determines, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
Optionally, the extreme values include a maximum value and a minimum value;

the salience conditions include a significant-activation condition and a significant-suppression condition;

the salient category labels correspond to the salience conditions and include a significant-activation label and a significant-suppression label;

the salience conditions are determined by:

computing the difference between the maximum value and a preset change parameter;

computing the sum of the minimum value and the preset change parameter;

determining the significant-activation condition as: the embedding value is greater than or equal to the difference and less than or equal to the maximum value; and

determining the significant-suppression condition as: the embedding value is greater than or equal to the minimum value and less than or equal to the sum.
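The two conditions determined above amount to simple interval tests on each dimension's embedding values; a minimal sketch, with `delta` standing for the preset change parameter and all names being ours:

```python
def salient_masks(values, delta):
    """For one dimension's embedding values, flag which values satisfy the
    significant-activation and significant-suppression conditions."""
    vmax, vmin = max(values), min(values)                      # extreme values
    activated = [vmax - delta <= v <= vmax for v in values]    # near the maximum
    suppressed = [vmin <= v <= vmin + delta for v in values]   # near the minimum
    return activated, suppressed
```

With `delta = 0.1`, values within 0.1 of the dimension's maximum are flagged as significantly activated, and values within 0.1 of its minimum as significantly suppressed.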
Optionally, the feature interpretation unit 304:

determines, based on the trained interpretation model, the weight of each sample feature of the salient training samples;

sorts the sample features in descending order of weight; and

extracts the top N sample features as the interpretation features by which the salient training samples belong to the salient category, where N is a natural number greater than or equal to 1.
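Once the interpretation model has assigned each sample feature a weight, the ranking-and-truncation step can be sketched as follows (the feature names and weights are hypothetical):

```python
def top_n_features(weights, feature_names, n):
    """Sort sample features by interpretation-model weight, largest first,
    and return the top n as the explanatory features."""
    ranked = sorted(zip(weights, feature_names),
                    key=lambda pair: pair[0], reverse=True)
    return [name for _, name in ranked[:n]]

# Hypothetical per-feature weights for one dimension's interpretation model.
top = top_n_features([0.9, 0.2, 0.7, 0.5],
                     ["no fixed occupation", "income", "region", "age"], 2)
```

The returned list is the interpretation-feature set for that dimension.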
Optionally, the sample features include: original features and topological features.

Optionally, the topological features include one or more of the following:

the number of first-order neighbors, the number of second-order neighbors, the average number of neighbors of the first-order neighbors, and a statistic of the first-order neighbors in a specified original feature dimension.
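For a graph-structured embedding object, the first three topological features listed above can be computed directly from an adjacency map; the fourth is shown as a neighbor-level mean of one original feature, which is only one possible reading of "a statistic in a specified original feature dimension". All names are illustrative:

```python
def topological_features(adj, node):
    """Neighbor-count features from an adjacency map {node: set_of_neighbors}."""
    first = adj[node]                                   # first-order neighbors
    # second-order neighbors: neighbors of neighbors, minus the node itself
    # and its first-order neighbors
    second = (set().union(*(adj[n] for n in first)) - first - {node}
              if first else set())
    avg = sum(len(adj[n]) for n in first) / len(first) if first else 0.0
    return {
        "first_order_count": len(first),
        "second_order_count": len(second),
        "avg_neighbor_count_of_first_order": avg,
    }

def neighbor_feature_mean(adj, node, raw_features, key):
    """Mean of one original feature over the first-order neighbors."""
    first = adj[node]
    return sum(raw_features[n][key] for n in first) / len(first)
```

These values can be appended to a node's original features to form the sample features used to train the interpretation model.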
Optionally, the interpretation model is a binary classification model.
For details of how the functions and roles of each unit of the above apparatus are implemented, refer to the implementation of the corresponding steps in the above method; they are not repeated here.

Since the apparatus embodiments substantially correspond to the method embodiments, the relevant parts may simply refer to the description of the method embodiments. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of this specification, which a person of ordinary skill in the art can understand and implement without creative effort.

The systems, apparatuses, modules, or units set forth in the above embodiments may be implemented by a computer chip or an entity, or by a product having certain functions. A typical implementation device is a computer, which may take the form of a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or any combination of several of these devices.
Corresponding to the foregoing embodiments of the method for interpreting embedding results, this specification also provides an apparatus for interpreting embedding results, the apparatus comprising a processor and a memory for storing machine-executable instructions. The processor and the memory are typically interconnected by an internal bus. In other possible implementations, the apparatus may also include an external interface for communicating with other devices or components.

In this embodiment, by reading and executing the machine-executable instructions stored in the memory that correspond to the interpretation logic for embedding results, the processor is caused to:

embed the embedding objects using an embedding algorithm to obtain an embedding result for each embedding object, the embedding result comprising embedding values in several dimensions;

extract, according to the extreme values of the embedding values, the embedding objects whose embedding values in each dimension satisfy the salience condition as salient training samples;

for each dimension, train the interpretation model on the sample features and salient category labels of the salient training samples in that dimension; and

determine, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
Optionally, the extreme values include a maximum value and a minimum value;

the salience conditions include a significant-activation condition and a significant-suppression condition;

the salient category labels correspond to the salience conditions and include a significant-activation label and a significant-suppression label;

the salience conditions are determined by:

computing the difference between the maximum value and a preset change parameter;

computing the sum of the minimum value and the preset change parameter;

determining the significant-activation condition as: the embedding value is greater than or equal to the difference and less than or equal to the maximum value; and

determining the significant-suppression condition as: the embedding value is greater than or equal to the minimum value and less than or equal to the sum.
Optionally, when determining, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, the processor is caused to:

determine, based on the trained interpretation model, the weight of each sample feature of the salient training samples;

sort the sample features in descending order of weight; and

extract the top N sample features as the interpretation features by which the salient training samples belong to the salient category, where N is a natural number greater than or equal to 1.
Optionally, the sample features include: original features and topological features.

Optionally, the topological features include one or more of the following:

the number of first-order neighbors, the number of second-order neighbors, the average number of neighbors of the first-order neighbors, and a statistic of the first-order neighbors in a specified original feature dimension.

Optionally, the interpretation model is a binary classification model.
Corresponding to the foregoing embodiments of the method for interpreting embedding results, this specification further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the following steps:

embedding the embedding objects using an embedding algorithm to obtain an embedding result for each embedding object, the embedding result comprising embedding values in several dimensions;

extracting, according to the extreme values of the embedding values, the embedding objects whose embedding values in each dimension satisfy the salience condition as salient training samples;

for each dimension, training the interpretation model on the sample features and salient category labels of the salient training samples in that dimension; and

determining, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
Optionally, the extreme values include a maximum value and a minimum value;

the salience conditions include a significant-activation condition and a significant-suppression condition;

the salient category labels correspond to the salience conditions and include a significant-activation label and a significant-suppression label;

the salience conditions are determined by:

computing the difference between the maximum value and a preset change parameter;

computing the sum of the minimum value and the preset change parameter;

determining the significant-activation condition as: the embedding value is greater than or equal to the difference and less than or equal to the maximum value; and

determining the significant-suppression condition as: the embedding value is greater than or equal to the minimum value and less than or equal to the sum.
Optionally, determining, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category includes:

determining, based on the trained interpretation model, the weight of each sample feature of the salient training samples;

sorting the sample features in descending order of weight; and

extracting the top N sample features as the interpretation features by which the salient training samples belong to the salient category, where N is a natural number greater than or equal to 1.
Optionally, the sample features include: original features and topological features.

Optionally, the topological features include one or more of the following:

the number of first-order neighbors, the number of second-order neighbors, the average number of neighbors of the first-order neighbors, and a statistic of the first-order neighbors in a specified original feature dimension.

Optionally, the interpretation model is a binary classification model.
Specific embodiments of this specification have been described above; other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that of the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results; in certain implementations, multitasking and parallel processing are also possible or may be advantageous.

The above are merely preferred embodiments of this specification and are not intended to limit it. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this specification shall fall within its scope of protection.
Claims (15)
- A method for interpreting embedding results, comprising: embedding the embedding objects using an embedding algorithm to obtain an embedding result for each embedding object, the embedding result comprising embedding values in several dimensions; extracting, according to the extreme values of the embedding values, the embedding objects whose embedding values in each dimension satisfy a salience condition as salient training samples; for each dimension, training an interpretation model on the sample features and salient category labels of the salient training samples in that dimension; and determining, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
- The method according to claim 1, wherein the extreme values include a maximum value and a minimum value; the salience conditions include a significant-activation condition and a significant-suppression condition; the salient category labels correspond to the salience conditions and include a significant-activation label and a significant-suppression label; and the salience conditions are determined by: computing the difference between the maximum value and a preset change parameter; computing the sum of the minimum value and the preset change parameter; determining the significant-activation condition as: the embedding value is greater than or equal to the difference and less than or equal to the maximum value; and determining the significant-suppression condition as: the embedding value is greater than or equal to the minimum value and less than or equal to the sum.
- The method according to claim 1, wherein determining, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category comprises: determining, based on the trained interpretation model, the weight of each sample feature of the salient training samples; sorting the sample features in descending order of weight; and extracting the top N sample features as the interpretation features by which the salient training samples belong to the salient category, N being a natural number greater than or equal to 1.
- The method according to claim 3, wherein the sample features include: original features and topological features.

- The method according to claim 4, wherein the topological features include one or more of the following: the number of first-order neighbors, the number of second-order neighbors, the average number of neighbors of the first-order neighbors, and a statistic of the first-order neighbors in a specified original feature dimension.

- The method according to claim 1, wherein the interpretation model is a binary classification model.
- A method for interpreting graph embedding results, comprising: embedding the graph nodes using an embedding algorithm to obtain an embedding result for each graph node, the embedding result comprising embedding values in several dimensions; extracting, according to the extreme values of the embedding values, the graph nodes whose embedding values in each dimension satisfy a salience condition as salient training samples; for each dimension, training an interpretation model on the sample features and salient category labels of the salient training samples in that dimension; and determining, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
- A method for interpreting word embedding results, comprising: embedding the words in a text using an embedding algorithm to obtain a word embedding result corresponding to each text, the word embedding result comprising embedding values in several dimensions; extracting, according to the extreme values of the embedding values, the words whose embedding values in each dimension satisfy a salience condition as salient training samples; for each dimension, training an interpretation model on the sample features and salient category labels of the salient training samples in that dimension; and determining, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
- An apparatus for interpreting embedding results, comprising: an embedding processing unit that embeds the embedding objects using an embedding algorithm to obtain an embedding result for each embedding object, the embedding result comprising embedding values in several dimensions; a sample extraction unit that extracts, according to the extreme values of the embedding values, the embedding objects whose embedding values in each dimension satisfy a salience condition as salient training samples; a model training unit that, for each dimension, trains an interpretation model on the sample features and salient category labels of the salient training samples in that dimension; and a feature interpretation unit that determines, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
- The apparatus according to claim 9, wherein the extreme values include a maximum value and a minimum value; the salience conditions include a significant-activation condition and a significant-suppression condition; the salient category labels correspond to the salience conditions and include a significant-activation label and a significant-suppression label; and the salience conditions are determined by: computing the difference between the maximum value and a preset change parameter; computing the sum of the minimum value and the preset change parameter; determining the significant-activation condition as: the embedding value is greater than or equal to the difference and less than or equal to the maximum value; and determining the significant-suppression condition as: the embedding value is greater than or equal to the minimum value and less than or equal to the sum.
- The apparatus according to claim 9, wherein the feature interpretation unit: determines, based on the trained interpretation model, the weight of each sample feature of the salient training samples; sorts the sample features in descending order of weight; and extracts the top N sample features as the interpretation features by which the salient training samples belong to the salient category, N being a natural number greater than or equal to 1.
- The apparatus according to claim 11, wherein the sample features include: original features and topological features.

- The apparatus according to claim 12, wherein the topological features include one or more of the following: the number of first-order neighbors, the number of second-order neighbors, the average number of neighbors of the first-order neighbors, and a statistic of the first-order neighbors in a specified original feature dimension.

- The apparatus according to claim 9, wherein the interpretation model is a binary classification model.
- An apparatus for interpreting embedding results, comprising: a processor; and a memory for storing machine-executable instructions; wherein, by reading and executing the machine-executable instructions stored in the memory that correspond to the interpretation logic for embedding results, the processor is caused to: embed the embedding objects using an embedding algorithm to obtain an embedding result for each embedding object, the embedding result comprising embedding values in several dimensions; extract, according to the extreme values of the embedding values, the embedding objects whose embedding values in each dimension satisfy a salience condition as salient training samples; for each dimension, train an interpretation model on the sample features and salient category labels of the salient training samples in that dimension; and determine, based on the trained interpretation model, the interpretation features by which the salient training samples belong to the salient category, as the interpretation features of the embedding result in that dimension.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811475037.4A CN109902167B (en) | 2018-12-04 | 2018-12-04 | Interpretation method and device of embedded result |
CN201811475037.4 | 2018-12-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020114109A1 true WO2020114109A1 (en) | 2020-06-11 |
Family
ID=66943355
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/112106 WO2020114109A1 (en) | 2018-12-04 | 2019-10-21 | Interpretation method and apparatus for embedding result |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN109902167B (en) |
TW (1) | TWI711934B (en) |
WO (1) | WO2020114109A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109902167B (en) * | 2018-12-04 | 2020-09-01 | Alibaba Group Holding Ltd. | Interpretation method and device of embedded result |
CN111262887B (en) * | 2020-04-26 | 2020-08-28 | Tencent Technology (Shenzhen) Co., Ltd. | Network risk detection method, device, equipment and medium based on object characteristics |
CN112561074A (en) * | 2020-11-09 | 2021-03-26 | Lenovo (Beijing) Ltd. | Machine learning interpretable method, device and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004326465A (en) * | 2003-04-24 | 2004-11-18 | Matsushita Electric Ind Co Ltd | Learning device for document classification, and document classification method and document classification device using it |
CN105303028A (en) * | 2015-08-20 | 2016-02-03 | Yangzhou University | Intelligent medical diagnosis classification method based on supervised isometric mapping |
CN109902167A (en) * | 2018-12-04 | 2019-06-18 | Alibaba Group Holding Ltd. | Interpretation method and device of embedded result |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100520853C * | 2007-10-12 | 2009-07-29 | Tsinghua University | Vehicle type classification method based on single frequency continuous-wave radar |
CN102880638B (en) * | 2012-08-10 | 2015-06-17 | Hefei University of Technology | Self-adaptive robust constrained maximum variance mapping (CMVM) characteristic dimensionality reduction and extraction method for diversified image retrieval of plant leaves |
CN104679771B (en) * | 2013-11-29 | 2018-09-18 | Alibaba Group Holding Ltd. | A kind of individuation data searching method and device |
CN106774970B (en) * | 2015-11-24 | 2021-08-20 | Beijing Sogou Technology Development Co., Ltd. | Method and device for sorting candidate items of input method |
CN105548764B (en) * | 2015-12-29 | 2018-11-06 | Shandong Luneng Software Technology Co., Ltd. | A kind of Fault Diagnosis for Electrical Equipment method |
CN107153630B (en) * | 2016-03-04 | 2020-11-06 | Alibaba Group Holding Ltd. | Training method and training system of machine learning system |
CN107766873A (en) * | 2017-09-06 | 2018-03-06 | Tianjin University | The sample classification method of multi-tag zero based on sequence study |
- 2018
  - 2018-12-04 CN CN201811475037.4A patent/CN109902167B/en active Active
- 2019
  - 2019-09-17 TW TW108133376A patent/TWI711934B/en active
  - 2019-10-21 WO PCT/CN2019/112106 patent/WO2020114109A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
TWI711934B (en) | 2020-12-01 |
TW202022641A (en) | 2020-06-16 |
CN109902167B (en) | 2020-09-01 |
CN109902167A (en) | 2019-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110309331B (en) | | Cross-modal deep hash retrieval method based on self-supervision |
CN111523621B (en) | | Image recognition method and device, computer equipment and storage medium |
WO2020114108A1 (en) | | Clustering result interpretation method and device |
CN105354307B (en) | | Image content identification method and device |
US9190026B2 (en) | | Systems and methods for feature fusion |
CN112400165B (en) | | Method and system for improving text-to-content suggestions using unsupervised learning |
CN111667022A (en) | | User data processing method and device, computer equipment and storage medium |
WO2020114109A1 (en) | | Interpretation method and apparatus for embedding result |
US20220230648A1 (en) | | Method, system, and non-transitory computer readable record medium for speaker diarization combined with speaker identification |
Li et al. | | Adaptive metric learning for saliency detection |
CN111475622A (en) | | Text classification method, device, terminal and storage medium |
CN112507912B (en) | | Method and device for identifying illegal pictures |
CN113657087B (en) | | Information matching method and device |
CN112749737A (en) | | Image classification method and device, electronic equipment and storage medium |
CN111898704A (en) | | Method and device for clustering content samples |
CN113408282B (en) | | Method, device, equipment and storage medium for topic model training and topic prediction |
CN114463552A (en) | | Transfer learning and pedestrian re-identification method and related equipment |
CN112906726A (en) | | Model training method, image processing method, device, computing device and medium |
CN113849645B (en) | | Mail classification model training method, device, equipment and storage medium |
JP5959446B2 (en) | | Retrieval device, program, and method for high-speed retrieval by expressing contents as a set of binary feature vectors |
CN111860556A (en) | | Model processing method and device and storage medium |
CN116758601A (en) | | Training method and device of face recognition model, electronic equipment and storage medium |
CN115840817A (en) | | Information clustering processing method and device based on contrast learning and computer equipment |
KR102060110B1 (en) | | Method, apparatus and computer program for classifying object in contents |
Yu et al. | | Construction of garden landscape design system based on multimodal intelligent computing and deep neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19894345; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19894345; Country of ref document: EP; Kind code of ref document: A1 |