CN116704208B - Local interpretable method based on characteristic relation - Google Patents


Info

Publication number: CN116704208B (application CN202310978031.3A)
Authority: CN (China)
Prior art keywords: model, super, features, contribution, picture
Legal status: Active (granted)
Other versions: CN116704208A
Other languages: Chinese (zh)
Inventors: 练智超, 陈洲源, 周宏拓
Assignee (original and current): Nanjing University of Science and Technology
Application filed by Nanjing University of Science and Technology; priority to CN202310978031.3A

Classifications

    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/763: Clustering using non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a local interpretable method based on feature relations, belonging to the field of deep learning interpretability. The method comprises the steps of: obtaining a superpixel segmentation of an input sample; masking individual superpixels and pairs of superpixels to measure the strength of the association between features; randomly masking superpixel blocks to generate a perturbation data set and training a simple model on that data set; and obtaining the contribution of each feature block from the simple model, then selecting feature blocks by combining the inter-feature associations to produce the final explanation. By using superpixel masking to obtain the associations between features and combining them with the features' own contributions when explaining a deep learning model, the invention improves the reliability of the interpretation and reduces its randomness and sensitivity.

Description

Local interpretable method based on characteristic relation
Technical Field
The invention belongs to the technical field of artificial intelligence and relates to a deep learning interpretability method, in particular to a local interpretable method based on feature relations.
Background
In recent years, deep learning has developed rapidly in fields such as image processing, natural language processing, and speech recognition, and has demonstrated capability exceeding that of humans across many industries, so people increasingly rely on decisions made by artificial intelligence, and model complexity has grown substantially to meet ever-rising requirements in areas such as healthcare and precision industry. However, the more complex a model becomes, the harder its structure is to understand and the harder its decisions are to interpret, which raises the problem that people cannot trust the decisions the model makes. To solve this problem, interpretability has become a popular research field. People are no longer satisfied with a model's performance alone; they also want to understand why the model performs as it does. Such understanding helps optimize both the model and its features, leading to a better grasp of the model itself and improved quality of the services it provides.
Interpretable methods can be divided into pre-modeling, in-modeling, and post-modeling methods. Pre-modeling methods usually involve data preprocessing and aim to reveal the distribution of the features; in-modeling interpretability explains the decision process by building a model with an interpretable structure; post-modeling methods visually analyze the decision process of a black-box model or analyze the importance of its features. However, many current interpretable methods have shortcomings. For example, methods for white-box models require knowing the model architecture in advance, which limits their generality, while LIME, the most representative method for black-box models, ignores the influence of inter-feature associations on model predictions, so its explanations lack reliability; G-LIME, an improvement on LIME, only handles text and tabular data and cannot handle image data. An interpretable method that accounts for the influence of inter-feature associations on model predictions therefore needs to be designed.
Disclosure of Invention
The invention solves the following technical problem: a locally interpretable method for image data and black-box models is provided that incorporates the strength of inter-feature associations.
The technical scheme is as follows: in order to solve the technical problems, the invention adopts the following technical scheme:
a locally interpretable method based on a feature relation, comprising the steps of:
step 1: firstly, obtaining a super-pixel segmentation result of an input sample;
step 2: masking and combining masking are carried out on the super pixels, and the association size between the features is obtained;
step 3: randomly selecting a super pixel block for shielding, generating a disturbance data set and training a simple model through the data set;
step 4: and acquiring contribution of the feature blocks through a simple model, and selecting the feature blocks by combining the correlation sizes among the features to obtain final explanation.
Further, in step 1, the superpixel segmentation of the input sample is obtained as follows:
using a k-means clustering method, the cluster centers are initially distributed evenly over the image; at each iteration, each seed pixel merges with surrounding pixels whose distance is smaller than a set threshold to form superpixels.
Further, in step 2, the superpixels are masked individually and in pairs to obtain the strength of the association between features, as follows:
Step 2.1: mask all superpixel blocks in the picture except block i, and obtain the model's prediction p_x({i}) on the masked picture;
Step 2.2: mask all superpixel blocks except block j, and obtain the prediction p_x({j});
Step 2.3: mask all superpixel blocks except blocks i and j, and obtain the prediction p_x({i,j});
Step 2.4: obtain the influence of the association between i and j on the model decision from the formula r_x(i,j) = p_x({i,j}) - p_x({i}) - p_x({j}).
Further, in step 3, superpixel blocks are randomly selected for masking, a perturbation data set is generated, and a simple model is trained on it, as follows:
Step 3.1: generate a perturbation data set by perturbing the instance of interest;
Step 3.2: calculate the distance between the instance of interest and each perturbed instance using a suitable distance metric, and convert the distance into a similarity;
Step 3.3: obtain the original model's prediction for each perturbation sample;
Step 3.4: train a simple interpretable model using the perturbation data set, the distance weights, and the original model's predictions.
Further, in step 4, the contribution of each feature block is obtained from the simple model and feature blocks are selected by combining the inter-feature associations to obtain the final explanation, as follows: first, the contribution w_i of each feature block to the prediction is obtained from the simple model; then half of each inter-feature association value, r_x(i,j)/2, is assigned to the block, and the contribution of each block is computed as

I_i = w_i + (1/2) * Σ_{j=1, j≠i}^{p} r_x(i,j)

finally, the final explanation is obtained by ranking the contributions, where p denotes the total number of superpixel blocks, I_i denotes the degree to which feature i contributes to the model predicting the picture as class x (i.e., the importance of feature i), w_i denotes the importance of feature i obtained by the LIME method, and r_x(i,j) denotes the direct influence of the relation between features i and j on the model prediction.
Beneficial effects: compared with the prior art, the invention has the following advantages:
(1) The invention provides a local interpretable method based on feature relations. When interpreting a deep learning model, combining the associations between features improves the completeness and credibility of the interpretation.
(2) The invention combines each feature's association contribution with its own contribution, reducing the randomness of the interpretation while preserving its quality.
(3) By the same combination of association contribution and self contribution, the invention reduces the sensitivity of the interpretation while preserving its quality.
(4) In recent years, deep learning models have begun to assist people in making decisions in industries such as healthcare, law, and precision manufacturing, and whether those decisions are reliable has always been in question. In the legal industry, for example, a model may conclude that a defendant is guilty, yet the process by which it reached that conclusion may be wrong, so understanding how the model arrives at its predictions is important. Experiments show that the method can explain model predictions and, compared with traditional black-box interpretation methods, improves stability by at least 1.75% and reduces sensitivity by 0.92%, so it can be applied more reliably in industries such as healthcare, law, and precision manufacturing and can help people judge whether a model can be trusted.
Drawings
FIG. 1 is a flow chart of the locally interpretable method based on a feature relation according to the present invention.
Detailed Description
The invention will be further illustrated with reference to specific examples carried out on the basis of the technical solutions of the invention. It should be understood that these examples are only intended to illustrate the invention and not to limit its scope.
The local interpretable method based on feature relations first obtains a superpixel segmentation of the input sample; masks individual superpixels and pairs of superpixels to obtain the strength of the association between features; randomly selects superpixel blocks to mask, generates a perturbation data set, and trains a simple model on that data set; and obtains the contribution of each feature block from the simple model, selecting feature blocks by combining the inter-feature associations to obtain the final explanation. The specific steps are as follows:
step 1: the super-pixel segmentation result of the input sample is obtained by the following specific method:
the center of each cluster is initially evenly distributed in the image using a k-means clustering algorithm (k-means clustering algorithm). And each step of iteration, merging pixels with surrounding distances smaller than a set value by the seed pixels to form super pixels.
Step 2: mask individual superpixels and pairs of superpixels to obtain the strength of the association between features, as follows:
Step 2.1: set the pixel values of all superpixel blocks except block i to a preset value (0, the mean of all pixels in the picture, etc.), input the masked picture into the original model, and obtain the model's prediction p_x({i});
Step 2.2: set the pixel values of all superpixel blocks except block j to the preset value, input the masked picture into the original model, and obtain the prediction p_x({j});
Step 2.3: set the pixel values of all superpixel blocks except blocks i and j to the preset value, input the masked picture into the original model, and obtain the prediction p_x({i,j});
Step 2.4: the influence of the association between superpixel blocks on the model decision can be understood as follows: the effect of a superpixel block a on the model prediction depends not only on a itself but also on the other superpixel blocks. Suppose block a contributes w_a to the model prediction, yet removing block a reduces the model's predicted probability for the target class by an amount different from w_a; the difference can then be understood as the effect of the association between block a and the other blocks. The influence of the association between superpixel blocks i and j on the model decision is obtained by the formula

r_x(i,j) = p_x({i,j}) - p_x({i}) - p_x({j})

where r_x(i,j) denotes the direct influence of the relation between features i and j on the model prediction when the classification result is x, p_x({i}) denotes the probability that the model classifies the picture as x when only feature i is present, and p_x({j}) denotes the corresponding probability when only feature j is present.
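Steps 2.1 to 2.4 can be sketched as follows, assuming a model callable that returns class probabilities and a zero-valued mask baseline (the `model`, image, and segmentation here are hypothetical stand-ins, not the patent's experimental setup):

```python
import numpy as np

def class_prob(model, image, keep, segments, x_class, fill=0.0):
    """Probability of class x after masking every superpixel NOT in `keep`."""
    masked = np.full_like(image, fill)
    for s in keep:
        masked[segments == s] = image[segments == s]
    return model(masked)[x_class]

def association(model, image, segments, i, j, x_class):
    """r_x(i, j) = p_x({i, j}) - p_x({i}) - p_x({j}): how much the pair's
    joint effect deviates from the sum of their individual effects."""
    p_i  = class_prob(model, image, {i}, segments, x_class)
    p_j  = class_prob(model, image, {j}, segments, x_class)
    p_ij = class_prob(model, image, {i, j}, segments, x_class)
    return p_ij - p_i - p_j

# Toy stand-in "model": mean brightness used as the class-x score.
model = lambda im: [im.mean()]
img = np.ones((8, 8, 3))
seg = np.zeros((8, 8), dtype=int)
seg[:, 4:] = 1                        # two superpixel blocks: 0 and 1
r = association(model, img, seg, 0, 1, x_class=0)
print(round(r, 6))
```

For this additive toy model the two halves contribute independently, so the association score comes out zero; a real network, whose output is not additive over regions, generally yields nonzero values.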
Step 3: randomly select superpixel blocks for masking, generate a perturbation data set, and train a simple model on it, as follows:
step 3.1: by means of the interesting examplesGenerating a disturbance data set Z by disturbance N times;
step 3.2: computing instances of interest by L2 norm distanceDistance from all disturbance instances in the disturbance data set, where the instance of interestAnd one instance in disturbance data set ZThe distance of (2) is calculated as follows:
wherein ,representing image instancesAnd disturbance ofSome instance of the generated disturbance data setDistance between (a) and (b)And (3) withThen the picture will be respectively instantiatedAnd (3) withRepresented as a vector, their mth dimension vector,representing vector dimensions after converting the picture into a vector; and converting such distances into similarities between instances, i.e
wherein ,representative sampleAnd (3) withIs used for the degree of similarity of (c) to (c),representative sampleAnd (3) withIs used for the distance of (a),representing vector dimensions after converting the picture into a vector;
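As a concrete reading of step 3.2, the distance-to-similarity conversion can be sketched as below. The exponential kernel and the width `sigma` are assumptions (the standard LIME choice), since the original formula image is not recoverable from the text:

```python
import numpy as np

def similarity(x, z, sigma=0.25):
    """L2 distance between the vectorized pictures, turned into a weight.

    exp(-D^2 / sigma^2) maps distance 0 to weight 1.0 and pushes far-away
    perturbations toward weight 0, so nearby perturbations dominate the fit.
    """
    d = np.linalg.norm(x.ravel() - z.ravel())
    return np.exp(-(d ** 2) / sigma ** 2)

x = np.ones(4)
print(similarity(x, x))                   # identical instances: weight 1.0
print(similarity(x, np.zeros(4)) < 1.0)   # farther instances weigh less
```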
step 3.3: taking disturbance data in the disturbance data set as input and inputting the disturbance data into the original model so as to obtain a prediction result of the original model on a disturbance sample
Step 3.4: a simple and interpretable model can be trained by using the disturbance data set, the distance weight and the prediction result of the original model, and the fitting function of the training model is set as follows
wherein ,representative sampleAnd (3) withIs used for the degree of similarity of (c) to (c),that is, a disturbance sample, inPredicted values in dimensional space (original features), and targeting the predicted values,is atPredicted values on dimensional space (interpretable feature). By the above-described linear regression method, a simple model such as a linear model can be trained, and the definition of the linear model is:
where y is the prediction function,is the weight parameter of the feature/and,is a specific value of the feature/b is a bias value, whereas for image data, as long as a super-pixel block/is presentI.e., 1, then control may be provided by controlling whether or not the super pixel block existsWhen the values of the features other than feature/are set to 0, the formula is obtained, wherein ,the prediction result of the model after the feature values except for the feature l are set to 0 is represented, and this is the contribution degree of the feature l to the model prediction.
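Step 3.4's weighted fit can be sketched with plain NumPy weighted least squares. Everything synthetic here (the hidden scoring rule, the number of blocks, the kernel width) is a stand-in, not data from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5                                   # number of superpixel blocks
N = 200                                 # number of perturbations

# Perturbation data set: each row z' flags which blocks are kept (1) or masked (0).
Z = rng.integers(0, 2, size=(N, p)).astype(float)

# Hypothetical black-box scores f(z) for each perturbed picture: a hidden
# linear rule plus noise, standing in for the original model.
true_w = np.array([0.5, 0.1, -0.2, 0.3, 0.0])
f = Z @ true_w + 0.01 * rng.standard_normal(N)

# Similarity weights pi(x, z): fewer masked blocks means closer to the original.
d = np.sqrt(((1.0 - Z) ** 2).sum(1))
pi = np.exp(-(d ** 2) / 2.0)

# Weighted least squares: minimize sum_z pi * (f(z) - g(z'))^2 with
# g(z') = Z @ w + b, the simple interpretable model.
A = np.hstack([Z, np.ones((N, 1))])     # append a bias column
sw = np.sqrt(pi)                        # row weights enter as sqrt factors
coef, *_ = np.linalg.lstsq(A * sw[:, None], f * sw, rcond=None)
w, b = coef[:p], coef[p]
print(np.round(w, 2))                   # recovered per-block contributions
```

The recovered `w` approximates the hidden rule, which is exactly the role the per-block weights w_l play in the explanation.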
Step 4: obtain the contribution of each feature block from the simple model, and select feature blocks by combining the inter-feature associations to obtain the final explanation, as follows:
First, the contribution w_i of each feature block to the prediction is obtained from the simple model; then half of each inter-feature association value, r_x(i,j)/2, is assigned to the block, i.e. the contribution of each block is calculated as

I_i = w_i + (1/2) · Σ_{j=1, j≠i}^{p} r_x(i,j)

and finally the final explanation is obtained by ranking the contributions,
where p denotes the total number of superpixel blocks, I_i denotes the degree to which feature i contributes to the model predicting the picture as class x (i.e., the importance of feature i), w_i denotes the importance of feature i obtained by the LIME method, and r_x(i,j) denotes the direct influence of the relation between features i and j on the model prediction.
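The combination rule in step 4 (each block's own weight plus half of every pairwise association it takes part in) can be sketched as follows; the weights and the association matrix below are made-up toy numbers:

```python
import numpy as np

def final_importance(w, r):
    """I_i = w_i + 0.5 * sum_{j != i} r[i, j]: each block's own LIME weight
    plus half of every pairwise association it participates in."""
    r = np.asarray(r, dtype=float)
    off_diag = r - np.diag(np.diag(r))          # ignore self-association
    return np.asarray(w, dtype=float) + 0.5 * off_diag.sum(axis=1)

w = [0.4, 0.1, 0.3]                             # per-block LIME contributions
r = [[0.0, 0.2, 0.0],                           # symmetric association matrix
     [0.2, 0.0, -0.1],
     [0.0, -0.1, 0.0]]
scores = final_importance(w, r)
order = np.argsort(scores)[::-1]                # rank blocks for the explanation
print(scores, order)
```

Splitting each association value evenly between its two blocks keeps the total credited association mass equal to the sum of the pairwise terms.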
The effectiveness and advantages of the method are verified through the following experiments:
The evaluation indexes are faithfulness/accuracy, stability/consistency, and sensitivity.
The data set is selected first: the invention uses the ImageNet data set, which contains 1000 classes, 1,281,167 training images, 50,000 validation images, and 100,000 test images. The invention then selects InceptionV3 and ResNet50 as the experimental models. The comparison method is the original LIME interpretable method.
TABLE 1 Accuracy of the feature relations found by the invention under different black-box models
TABLE 2 Improvement in stability/consistency of the invention
TABLE 3 Reduction in sensitivity of the invention
The results in Table 1 show that the inter-feature associations obtained by the method of the invention are highly similar to the associations obtained by perturbation-based verification, indicating that the relations obtained are essentially correct.
The results in Table 2 show that after the important-feature selection method in the LIME algorithm is replaced with that of the invention, the proportion of agreement between the interpretation results and the reference standard consistently increases, so the interpretable algorithm achieves better stability/consistency; this demonstrates that taking inter-feature associations into account can effectively improve the stability/consistency of an interpretable algorithm.
The results in Table 3 show that with the important-feature selection method in LIME replaced, the algorithm of the invention achieves sensitivity close to that of LIME, and when N is sufficiently large the number of correct interpretations varies less with N: for example, when N changes from 1000 to 3000 the variation is 4.73% for the method of the invention versus 10.81% for LIME, and when N changes from 3000 to 5000 it is 3.23% versus 9.89%. The algorithm of the invention is therefore less sensitive than the LIME algorithm.
In summary, the invention provides a local interpretable method based on feature relations. When interpreting a deep learning model, combining the associations between features improves the completeness and credibility of the interpretation, and combining each feature's association contribution with its own contribution reduces both the randomness and the sensitivity of the interpretation while preserving its quality.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the invention, and such modifications are also to be regarded as falling within the scope of the invention.

Claims (1)

1. A locally interpretable method based on a feature relation, comprising the following steps:
Step 1: first, obtain a superpixel segmentation of the input sample;
using a k-means clustering method, the cluster centers are initially distributed evenly over the image; at each iteration, each seed pixel merges with surrounding pixels whose distance is smaller than a set threshold to form superpixels;
step 2: masking and combining masking are carried out on the super pixels, and the association size between the features is obtained; the method comprises the following steps:
step 2.1: first, the super pixel block is removed from the pictureiAnd obtaining the prediction result of the model on the picture after shielding the super pixels
Step 2.2: super pixel block removal in shielding picturejAnd obtaining the prediction result of the model on the picture after shielding the super pixels
Step 2.3: super pixel block removal in shielding pictureiAndjand obtaining the prediction result of the model on the picture after shielding the super pixels
Step 2.4: by the formulaAcquiring superpixel blocksiAnd super pixel blockjThe impact of the correlation between the model decisions;
wherein ,expressed in the model classification result asxFeatures at the timeiAnd featuresjDirect influence of the relation between the values on the model prediction result is +.>Expressed in terms of features onlyiIn the case of (2), the model classification result isxProbability of->Expressed in terms of features onlyjIn the case of (2), the model classification result isxProbability of (2);
step 3: randomly selecting a super pixel block for shielding, generating a disturbance data set and training a simple model through the data set; the method comprises the following steps:
step 3.1: generating a perturbation data set by perturbing the instance of interest;
step 3.2: calculating the distance between the interested instance and the disturbance generated instance by using a similar distance measurement mode, and converting the distance into similarity;
step 3.3: obtaining a prediction result of an original model on a disturbance sample;
step 3.4: training a simple interpretable model by using the disturbance data set, the distance weight and the prediction result of the original model;
step 4: the contribution of the feature blocks is obtained through a simple model, and feature block selection is carried out by combining the correlation sizes among the features, so that final explanation is obtained; the method comprises the following steps: firstly, the contribution of each characteristic block to the prediction result is obtained through a simple modelThen half value of the correlation size between features +.>Distributing the contribution to the feature blocks, calculating the contribution sizes of the feature blocks through the following formula, and finally, acquiring final explanation through the size sorting of the contribution;
wherein ,prepresenting the total number of blocks of the super pixel block,representing characteristicsiPredicting pictures into classes for modelsxThe degree of contribution of (i) is the characteristiciImportance of->Representing features obtained by LIME methodiIs used for determining the importance of the person,representing characteristicsiAnd (3) withjThe relationship between the two has direct influence on the size of the model prediction result.
CN202310978031.3A 2023-08-04 2023-08-04 Local interpretable method based on characteristic relation Active CN116704208B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310978031.3A CN116704208B (en) 2023-08-04 2023-08-04 Local interpretable method based on characteristic relation

Publications (2)

Publication Number Publication Date
CN116704208A CN116704208A (en) 2023-09-05
CN116704208B true CN116704208B (en) 2023-10-20

Family

ID=87824313

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117934450A (en) * 2024-03-13 2024-04-26 中国人民解放军国防科技大学 Interpretive method and system for multi-source image data deep learning model


Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20220129791A1 (en) * 2020-10-28 2022-04-28 Oracle International Corporation Systematic approach for explaining machine learning predictions

Patent Citations (14)

Publication number Priority date Publication date Assignee Title
CN113892148A (en) * 2019-03-15 2022-01-04 SpIntellx, Inc. Interpretable AI (xAI) platform for computational pathology
CN111553389A (en) * 2020-04-08 2020-08-18 Harbin Engineering University Decision tree generation method for understanding deep learning model decision mechanism
CN111753995A (en) * 2020-06-23 2020-10-09 East China Normal University Local interpretable method based on gradient boosting tree
CN112561074A (en) * 2020-11-09 2021-03-26 Lenovo (Beijing) Co., Ltd. Machine learning interpretable method, device and storage medium
CN112784986A (en) * 2021-02-08 2021-05-11 Industrial and Commercial Bank of China Ltd. Feature interpretation method, device, equipment and medium for deep learning calculation results
WO2022194069A1 (en) * 2021-03-15 2022-09-22 Huawei Technologies Co., Ltd. Saliency map generation method, and abnormal object detection method and device
CN114170485A (en) * 2021-11-23 2022-03-11 Beihang University Deep learning interpretable method and apparatus, storage medium, and program product
WO2023109640A1 (en) * 2021-12-14 2023-06-22 Shenzhen Institute of Advanced Technology Interpretability method and system for deep reinforcement learning model in driverless scene
CN114330109A (en) * 2021-12-14 2022-04-12 Shenzhen Institute of Advanced Technology Interpretability method and system for deep reinforcement learning model in driverless scene
CN114220549A (en) * 2021-12-16 2022-03-22 Wuxi Zhongdun Technology Co., Ltd. Effective physiological feature selection and medical causal reasoning method based on interpretable machine learning
CN115358975A (en) * 2022-07-28 2022-11-18 Xi'an University of Posts and Telecommunications Method for interpretable analysis of a brain tumor segmentation deep learning network
CN115457365A (en) * 2022-09-15 2022-12-09 Beijing Baidu Netcom Science and Technology Co., Ltd. Model interpretation method and device, electronic equipment and storage medium
CN115527097A (en) * 2022-11-01 2022-12-27 Xiamen University CNN interpretation graph object correlation analysis method based on ablation analysis
CN115905926A (en) * 2022-12-09 2023-04-04 Huazhong University of Science and Technology Code classification deep learning model interpretation method and system based on sample differences

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A modified perturbed sampling method for local interpretable model-agnostic explanation; Sheng Shi et al.; https://arxiv.org/abs/2002.07434; 1-5 *
Hyper-Mol: Molecular Representation Learning via Fingerprint-Based Hypergraph; Shicheng Cui et al.; Computational Intelligence and Neuroscience; Vol. 2023; 1-9 *
A survey on the interpretability of convolutional neural networks; Dou Hui et al.; Journal of Software; 1-27 *
Survey and prospects of deep-learning-based digital pathology image segmentation; Song Jie et al.; Journal of Software; Vol. 32 (No. 05); 1427-1460 *
Research progress and prospects of cultivated land extraction from high-resolution remote sensing images; Zhang Xinchang et al.; Geomatics and Information Science of Wuhan University; 1-15 *

Also Published As

Publication number Publication date
CN116704208A (en) 2023-09-05

Similar Documents

Publication Publication Date Title
CN110580501B (en) Zero sample image classification method based on variational self-coding countermeasure network
US11663489B2 (en) Machine learning systems and methods for improved localization of image forgery
Guan et al. A unified probabilistic model for global and local unsupervised feature selection
CN116704208B (en) Local interpretable method based on characteristic relation
CN113761259A (en) Image processing method and device and computer equipment
CN114582470A (en) Model training method and device and medical image report labeling method
CN111325237B (en) Image recognition method based on attention interaction mechanism
CN115861715B (en) Knowledge representation enhancement-based image target relationship recognition algorithm
CN109086794B (en) Driving behavior pattern recognition method based on T-LDA topic model
CN114913923A (en) Cell type identification method aiming at open sequencing data of single cell chromatin
CN112580521A (en) Multi-feature true and false video detection method based on MAML (maximum likelihood modeling language) meta-learning algorithm
CN111126155B (en) Pedestrian re-identification method for generating countermeasure network based on semantic constraint
CN117011274A (en) Automatic glass bottle detection system and method thereof
Su et al. Going the extra mile in face image quality assessment: A novel database and model
Zhang Application of artificial intelligence recognition technology in digital image processing
Lyu et al. Probabilistic object detection via deep ensembles
Sameki et al. ICORD: Intelligent Collection of Redundant Data-A Dynamic System for Crowdsourcing Cell Segmentations Accurately and Efficiently.
Gao et al. A robust improved network for facial expression recognition
CN116563602A (en) Fine granularity image classification model training method based on category-level soft target supervision
CN116958615A (en) Picture identification method, device, equipment and medium
Tereikovskyi et al. A neural network model for object mask detection in medical images
Adaloglou et al. Rethinking cluster-conditioned diffusion models
CN111626409B (en) Data generation method for image quality detection
Atallah et al. NEURAL NETWORK WITH AGNOSTIC META-LEARNING MODEL FOR FACE-AGING RECOGNITION
CN113112515B (en) Evaluation method for pattern image segmentation algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant