CN117557844B - Multi-model fusion tongue image intelligent classification method based on data enhancement - Google Patents

Multi-model fusion tongue image intelligent classification method based on data enhancement

Info

Publication number
CN117557844B
CN117557844B (application CN202311513218.2A)
Authority
CN
China
Prior art keywords
data set
image classification
tongue image
training
trainer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311513218.2A
Other languages
Chinese (zh)
Other versions
CN117557844A (en)
Inventor
刘锡铃
龙海侠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Normal University
Original Assignee
Hainan Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hainan Normal University filed Critical Hainan Normal University
Priority to CN202311513218.2A
Publication of CN117557844A
Application granted
Publication of CN117557844B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/098 Distributed learning, e.g. federated learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/008 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols involving homomorphic encryption
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention belongs to the technical field of tongue image classification and specifically discloses a multi-model fusion tongue image intelligent classification method based on data enhancement. The method includes: constructing an original data set and expanding it into a training data set; adding predicted labels and true labels to each training sample in the training data set; selecting hard-to-train samples, placing them in a hard-sample queue, and performing intensive training on the queue; extracting and fusing features of the training samples; exchanging data between each local trainer and the global trainer and training on the exchanged data; classifying tongue images accordingly; and conducting and analyzing comparative experiments on multiple training data sets. The present invention effectively solves the problem that classification accuracy is hard to guarantee when the data volume is insufficient, expands the usage scenarios of tongue image classification, avoids the scenario limitations of current tongue image classification methods, and thus improves the flexibility, applicability, and reliability of tongue image classification.

Description

A multi-model fusion tongue image intelligent classification method based on data enhancement

Technical Field

The present invention belongs to the technical field of tongue image classification and relates to a multi-model fusion tongue image intelligent classification method based on data enhancement.

Background Art

In recent years, with the continuous development of artificial intelligence technology, intelligent tongue image classification has played an increasingly important role in traditional Chinese medicine (TCM) diagnosis. The tongue image is one of the important bases of TCM diagnosis: by observing and analyzing it, the type, severity, and trend of a disease can be judged.

In the task of intelligent tongue image classification, data enhancement can be used to expand the training set, which effectively increases the diversity of the training data and improves the robustness of the model. However, current models demand large sample sizes; when the sample size is insufficient, the parameters cannot be trained effectively, which often causes underfitting and prevents the model from performing well. Current tongue image classification therefore still has the following deficiencies. 1. It is difficult to guarantee classification accuracy when data are insufficient: the case records of medical organizations are confidential, and the structured data actually usable for machine learning vary greatly. Large medical organizations may hold abundant case resources, but small medical organizations often have only limited cases.

2. It is difficult to carry out federated learning under limited medical record resources, and pseudo labels are not added to hard-to-train samples for repeated training of the federated model.

3. Feature extraction is incomplete: algorithms such as residual networks, VGG16, and LSTM extract static features of images and text well, but they struggle to effectively model the spatial relationships between different parts of an image.

4. The volume of transmitted data is huge, causing heavy communication overhead that easily congests communication links and cannot guarantee the efficiency of tongue image classification.

Summary of the Invention

In view of this, to solve the problems raised in the background art above, a multi-model fusion tongue image intelligent classification method based on data enhancement is proposed.

The objective of the present invention can be achieved through the following technical scheme. The present invention provides a multi-model fusion tongue image intelligent classification method based on data enhancement, comprising: Step 1, take each tongue coating image as a training sample to form an original data set, expand the original data set to obtain a training data set, and add predicted labels and true labels to each training sample in the training data set.

Step 2: use multiple network structures to extract features of different dimensions from each training sample in the training data set, and fuse them to obtain fused features.

Step 3: select hard-to-train samples for each trainer, build a hard-sample queue accordingly, and perform intensive training on the queue.

Step 4: extract each trainer's attribute label, record trainers whose attribute labels are global and local as global trainers and local trainers respectively, exchange data between each local trainer and the global trainer, retrain on the exchanged data, and classify tongue images according to the training results.

Step 5: conduct comparative experiments on multiple training data sets to obtain comparative experimental data, analyze the data to obtain comparative experimental results, and output those results.

Preferably, the fusion in Step 2 proceeds as follows: select training samples from the training data set by random voting and record each selected training sample as a target sample.

The static features of the target sample are extracted through a residual network.

A capsule network extracts the spatial features between the main structures inside the target sample.

Several convolution layers reduce the dimensionality of the static features extracted by the residual network and the spatial features extracted by the capsule network.

A fusion algorithm fuses the dimension-reduced static and spatial features.

Preferably, the fusion algorithm is expressed as f_merge = δ·f_r ⊕ λ·f_c, where x is the training sample data of trainer i; i is the trainer number, i = 1, 2, ..., n; G_{i,α,β} is the network structure of the i-th trainer, with α the parameters shared between trainers and β the trainer's local private parameters; f_r and f_c denote the static and spatial features extracted from x; δ and λ are the classification contribution factors of the static and spatial features respectively; ⊕ combines the two weighted feature vectors; and f_merge is the fused feature.

Preferably, the hard-to-train samples are selected by a training sample selection model, expressed as H_j ← x if m_j ≤ M, where j is the index of a training sample in the training data set, j = 1, 2, ..., u; M is the preset reference minimum cosine value between hard-sample features and fused features; m_j is the cosine distance between the predicted label and the true label of the j-th training sample; the rule H_j ← x if m_j ≤ M means that when this cosine distance is less than or equal to M, the sample is collected into the hard-sample queue H_j; L is the number of labels; and Avg_j′ is the mean of all true-sample features corresponding to the true label among trainer i's training samples.

Preferably, the data exchange between each local trainer and the global trainer includes the following. The residual network and the capsule network serve as the basic network architecture of each trainer under the federated learning model; based on this model, each local trainer encrypts its fused features through a homomorphic encryption model to obtain its encrypted parameter matrix P_g, where g is the local trainer number, g = 1, 2, ..., h.

A restoration parameter matrix W_g is constructed for each local trainer based on its encrypted parameter matrix.

In the construction of W_g, the sig function gives the sign of each element in the encrypted matrix, a is the dimension of the matrix to be encrypted, d is the length of the vectors under that dimension, y is a preset natural constant, and x′ is a matrix element, x′ = f_{a,d}.

A decryption parameter matrix is set, where Q is the decryption function of the homomorphic encryption matrix and the associated operator takes the inner product of corresponding matrix elements.

According to the encryption and decryption parameter matrices of each local trainer's fused features, the local trainers exchange data with the global trainers asynchronously.

Preferably, the homomorphic encryption model magnifies each matrix element and rounds the result up, where ⌈·⌉ denotes the round-up (ceiling) operator.

Preferably, the comparative experiments on multiple training data sets include the following steps. A1: take the current multi-model fusion tongue image classification as the target tongue image classification rule, and extract the accumulated tongue image classification rules from the tongue image classification database as the reference tongue image classification rules.

A3: set up the comparison data sets, use them as the experimental data sets, and formulate the evaluation indicators.

A2: randomly select Y_0 local trainers as the experimental local trainers and Y_1 global trainers as the experimental global trainers, and formulate the experimental data set allocation strategy.

A4: conduct fusion comparison experiments on each experimental data set based on the target tongue image classification rule and each reference tongue image classification rule, and record the fusion comparison experimental data.

A5: update the test data set, use the updated test data set as the ablation experiment data set, conduct ablation comparison experiments on it based on the target rule and each reference rule, and record the ablation comparison experimental data.

Preferably, formulating the experimental data set allocation strategy includes: combining the experimental global trainers and the experimental local trainers into a single set of experimental trainers, sorting them with the experimental global trainers first, and taking the sorted positions as the experimental trainers' numbers.

Each experimental trainer is assigned a share of the experimental data set according to a random allocation rule, giving the allocation scale of each experimental trainer's data set, where k′_q is the allocation scale of the q-th experimental trainer's data set, q is the experimental trainer number, q = 1, 2, ..., b, and k′_{q-1} is the allocation scale of the (q-1)-th experimental trainer's data set.

A training data set and a test data set are set up, and their allocation ratios are fixed: the training ratio is recorded as k_train and the test ratio as k_test; applying k_train and k_test to each experimental trainer's data gives the allocation scales of its training and test data sets.

The allocation scale of the experimental data sets, together with the allocation scales of the training and test data sets, constitutes the experimental data set allocation strategy.

Preferably, analyzing the comparative experimental data includes the following. From the fusion comparison experimental data, extract the improvement value of the target tongue image classification rule and of each reference tongue image classification rule for each evaluation indicator on each experimental data set.

For each evaluation indicator on each experimental data set, subtract the reference rule's improvement value from the target rule's improvement value; the difference is the improvement value deviation.

If, for some evaluation indicator on some experimental data set, the improvement value deviation between the target rule and a reference rule is greater than 0, that reference rule is taken as a first optimized tongue image classification rule, that experimental data set as the first optimized experimental data set of that rule, and that evaluation indicator as an optimization evaluation indicator of that data set.

Extract the improvement value deviations U_rvl of the target rule over each first optimized rule on each first optimized experimental data set for each optimization evaluation indicator, where r is the first optimized rule number, v is the first optimized experimental data set number, v = 1, 2, ..., σ, and l is the optimization evaluation indicator number, l = 1, 2, ..., ε; from these, compute the fusion precision optimization trend degree ψ_fusion of the target rule.

From the ablation comparison experimental data, extract the decrease values of the target rule and each reference rule for each evaluation indicator on each experimental data set, and compute the ablation precision optimization trend degree ψ′_ablation of the target rule in the same way as ψ_fusion.

Compute the comprehensive precision optimization trend degree ψ_comprehensive of the target rule from ψ_fusion and ψ′_ablation, where ψ_0 and ψ_1 are the preset reference precision optimization trend degrees of the fusion and ablation experiments respectively, and take ψ_fusion, ψ′_ablation, and ψ_comprehensive as the comparative experimental analysis results.

Preferably, the fusion precision optimization trend degree of the target tongue image classification rule is computed statistically from the deviations U_rvl, where K is the number of tongue image classification rules, U_0 is the preset reference evaluation indicator improvement value, and σ and ε are the numbers of first optimized experimental data sets and optimization evaluation indicators respectively; the summation index r runs over the first optimized tongue image classification rules.
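
The exact normalization of this statistic is not recoverable from the text above, so the Python fragment below is only a heavily hedged sketch, under the assumption that ψ_fusion averages the deviations U_rvl normalized by the reference value U_0; the function name and array layout are illustrative.

```python
# A hedged sketch of the fusion precision optimization trend degree, assuming
# it averages the deviations U_rvl normalized by the reference value U_0 over
# all first optimized rules (r), data sets (v), and indicators (l).
import numpy as np

def fusion_trend_degree(U: np.ndarray, U0: float) -> float:
    """U has shape (rules, sigma, epsilon) and holds the deviations U_rvl."""
    return float((U / U0).mean())
```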

Compared with the prior art, the beneficial effects of the present invention are as follows. (1) The present invention uses a residual network to extract the static features of an image and a capsule network to extract the spatial features between the main structures inside the image, then uses several convolution layers to reduce the dimensionality of both feature sets, and finally fuses them. The residual network and the capsule network serve as the basic network architecture of each trainer in the federated learning model, and federated learning is carried out on this architecture. This effectively solves the problem that classification accuracy is hard to guarantee when the data volume is insufficient, expands the usage scenarios of tongue image classification, avoids the scenario limitations of current tongue image classification methods, and thus improves the flexibility, applicability, and reliability of tongue image classification.

(2) By adding predicted labels and true labels to the training samples, the present invention makes federated learning feasible and convenient under limited case resources, realizes the addition of pseudo labels to hard-to-train samples, and facilitates the repeated training of the federated learning model on such samples, thereby improving the accuracy and effectiveness of subsequent tongue image classification.

(3) By using a residual network to extract the static features of the image and a capsule network to extract the spatial features between the main structures inside the image, the present invention avoids the incompleteness of current image feature extraction and realizes spatialized feature extraction of different parts of the tongue coating image, thereby improving the convenience and feasibility of building the federated learning model.

(4) When transferring data between the local trainers and the global trainer, the present invention sets the encryption and decryption parameter matrices based on the federated learning model and the homomorphic encryption model and transfers data asynchronously, effectively reducing the volume of transmitted data. This reduces communication overhead, avoids congestion of communication links, and ensures the efficiency of tongue image classification, while also reducing the probability of data loss and transmission errors during transfer.

Brief Description of the Drawings

To more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.

FIG. 1 is a schematic flow chart of the implementation steps of the method of the present invention.

FIG. 2 is a schematic diagram of the federated learning model of the present invention.

FIG. 3 is a schematic diagram of data exchange and transfer in the present invention.

FIG. 4 is a schematic diagram of feature extraction in the trainer network structure of the present invention.

FIG. 5 is a comparison chart of the fusion experiments of the tongue image classification rules of the present invention on the 2CLS data set.

FIG. 6 is a comparison chart of the fusion experiments of the tongue image classification rules of the present invention on the ZXSFL data set.

FIG. 7 is a comparison chart of the ablation experiments of the tongue image classification rules of the present invention on the 2CLS data set.

FIG. 8 is a comparison chart of the ablation experiments of the tongue image classification rules of the present invention on the ZXSFL data set.

Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.

Referring to FIG. 1, the present invention provides a multi-model fusion tongue image intelligent classification method based on data enhancement, comprising: Step 1, take each tongue coating image as a training sample to form an original data set, expand the original data set to obtain a training data set, and add predicted labels and true labels to each training sample in the training data set.

In a specific embodiment, the original data set is expanded into the training data set as follows: the number of training samples is increased by transforming each sample in the original data set in ways including but not limited to flipping and mirroring, and the added samples are combined with the original samples to form the training data set.
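
As an illustration, the fourfold expansion used in the experiments below (each image plus its X-axis flip, Y-axis flip, and combined flip; see Table 1) can be scripted as in the following minimal Python sketch. It assumes JPEG images stored one folder per class; the paths and function names are hypothetical.

```python
# Expand an image data set by flipping/mirroring, quadrupling its size,
# matching the X-axis, Y-axis, and combined flips described for Table 1.
from pathlib import Path
from PIL import Image

def expand_dataset(src_dir: str, dst_dir: str) -> None:
    transforms = {
        "orig": lambda im: im,
        "flip_x": lambda im: im.transpose(Image.FLIP_LEFT_RIGHT),
        "flip_y": lambda im: im.transpose(Image.FLIP_TOP_BOTTOM),
        "flip_xy": lambda im: im.transpose(Image.FLIP_LEFT_RIGHT)
                                .transpose(Image.FLIP_TOP_BOTTOM),
    }
    for path in Path(src_dir).rglob("*.jpg"):
        image = Image.open(path).convert("RGB")
        out_dir = Path(dst_dir) / path.parent.name   # keep class subfolder
        out_dir.mkdir(parents=True, exist_ok=True)
        for name, fn in transforms.items():
            fn(image).save(out_dir / f"{path.stem}_{name}.jpg")
```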

By adding predicted labels and true labels to the training samples, the embodiment of the present invention makes federated learning feasible and convenient under limited case resources, realizes the addition of pseudo labels to hard-to-train samples, and facilitates the repeated training of the federated learning model on such samples, thereby improving the accuracy and effectiveness of subsequent tongue image classification.

Step 2: use multiple network structures to extract features of different dimensions from each training sample in the training data set, and fuse them to obtain fused features.

Exemplarily, the fusion in Step 2 proceeds as follows: select training samples from the training data set by random voting and record each selected training sample as a target sample.

The static features of the target sample are extracted through a residual network.

A capsule network extracts the spatial features between the main structures inside the target sample.

Several convolution layers reduce the dimensionality of the static features extracted by the residual network and the spatial features extracted by the capsule network.

A fusion algorithm fuses the dimension-reduced static and spatial features.

In a specific embodiment, the fusion process is shown in FIG. 4. First, a batch of samples is obtained by random voting, and the residual network and the capsule network extract the samples' static and spatial features respectively. Three basic convolutions then follow, with kernel sizes 3×3, 3×3, and 2×2 and stride 2×2; these three simple convolution layers reduce the dimensionality of the features extracted by the residual network and the capsule network, and finally the fusion algorithm fuses the two.
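
The following PyTorch sketch illustrates this dimension-reduction stack and the weighted fusion. The channel count, the use of learnable scalars for the contribution factors δ and λ, and the choice to realize the fusion as concatenation of the weighted vectors are assumptions, not the patent's exact configuration.

```python
# Three-layer dimension reduction (kernels 3x3, 3x3, 2x2; stride 2) applied to
# the residual-network features f_r and capsule-network features f_c, followed
# by a weighted fusion into f_merge.
import torch
import torch.nn as nn

class ReduceAndFuse(nn.Module):
    def __init__(self, channels: int = 256):
        super().__init__()
        def reducer() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 2, stride=2), nn.ReLU(),
            )
        self.reduce_static = reducer()    # for f_r (residual network)
        self.reduce_spatial = reducer()   # for f_c (capsule network)
        self.delta = nn.Parameter(torch.tensor(0.5))   # contribution of f_r
        self.lmbda = nn.Parameter(torch.tensor(0.5))   # contribution of f_c

    def forward(self, f_r: torch.Tensor, f_c: torch.Tensor) -> torch.Tensor:
        f_r = self.reduce_static(f_r).flatten(1)
        f_c = self.reduce_spatial(f_c).flatten(1)
        # f_merge = delta*f_r (+) lambda*f_c, here realized as concatenation
        return torch.cat([self.delta * f_r, self.lmbda * f_c], dim=1)
```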

By using a residual network to extract the static features of the image and a capsule network to extract the spatial features between the main structures inside the image, the embodiment of the present invention avoids the incompleteness of current image feature extraction and realizes spatialized feature extraction of different parts of the tongue coating image, thereby improving the convenience and feasibility of building the federated learning model.

Further, the fusion algorithm is expressed as f_merge = δ·f_r ⊕ λ·f_c, where x is the training sample data of trainer i; i is the trainer number, i = 1, 2, ..., n; G_{i,α,β} is the network structure of the i-th trainer, with α the parameters shared between trainers and β the trainer's local private parameters; f_r and f_c denote the static and spatial features extracted from x; δ and λ are the classification contribution factors of the static and spatial features respectively; ⊕ combines the two weighted feature vectors; and f_merge is the fused feature.

Step 3: select hard-to-train samples for each trainer, build a hard-sample queue accordingly, and perform intensive training on the queue.

Specifically, the hard-to-train samples are selected by a training sample selection model, expressed as H_j ← x if m_j ≤ M, where j is the index of a training sample in the training data set, j = 1, 2, ..., u; M is the preset reference minimum cosine value between hard-sample features and fused features; m_j is the cosine distance between the predicted label and the true label of the j-th training sample; the rule H_j ← x if m_j ≤ M means that when this cosine distance is less than or equal to M, the sample is collected into the hard-sample queue H_j; L is the number of labels; and Avg_j′ is the mean of all true-sample features corresponding to the true label among trainer i's training samples.

It should be noted that, assuming f_ix denotes the features that trainer i extracts from sample x, f_ix = G_{i,α,β}(x), Avg_j′ is computed as the mean of these features over the samples of the true label, where Dim is the dimension of each trainer's fused features (identical across trainers in the present invention), z is the dimension of the vector corresponding to the spatial features, and k is the dimension of the vector corresponding to the static features.
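
A hedged sketch of the hard-sample queue is given below, under the assumption that m_j is computed as the cosine similarity between a sample's fused feature and the mean feature Avg of its true label; the threshold value and helper names are illustrative.

```python
# Collect hard-to-train samples: those whose fused feature lies far (low
# cosine similarity, m_j <= M) from the mean feature of their true label.
from collections import deque
import numpy as np

def label_means(features: np.ndarray, labels: np.ndarray) -> dict:
    """Avg: mean fused feature over all samples sharing a true label."""
    return {lab: features[labels == lab].mean(axis=0) for lab in np.unique(labels)}

def select_hard_samples(features: np.ndarray, labels: np.ndarray,
                        M: float = 0.6) -> deque:
    avg = label_means(features, labels)
    queue: deque = deque()
    for x, lab in zip(features, labels):
        ref = avg[lab]
        m_j = float(x @ ref / (np.linalg.norm(x) * np.linalg.norm(ref) + 1e-12))
        if m_j <= M:                      # far from its class mean: hard sample
            queue.append((x, lab))
    return queue
```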

Step 4: extract each trainer's attribute label, record trainers whose attribute labels are global and local as global trainers and local trainers respectively, exchange data between each local trainer and the global trainer, retrain on the exchanged data, and classify tongue images according to the training results.

Specifically, the data exchange between each local trainer and the global trainer includes the following. B1: the residual network and the capsule network serve as the basic network architecture of each trainer under the federated learning model; based on this model, each local trainer encrypts its fused features through a homomorphic encryption model to obtain its encrypted parameter matrix P_g, where g is the local trainer number, g = 1, 2, ..., h.

In a specific embodiment, the federated learning model is shown in FIG. 2: each local trainer has its own private data set and, after training on it, participates in data exchange with the global trainer.

Understandably, the homomorphic encryption model magnifies each matrix element and rounds the result up, where ⌈·⌉ denotes the round-up (ceiling) operator, y is a preset natural constant, f_{a,d} are the matrix elements, a is the dimension of the matrix to be encrypted, and d is the length of the vectors under that dimension.

In a specific embodiment, the encryption precision depends entirely on the magnification ratio: the smaller the ratio, the more precision is lost. To guarantee encryption precision, y can be set to 7.
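
A minimal sketch of this scale-and-round encoding follows, assuming the magnification ratio is 10^y applied element-wise; this is an illustrative stand-in, not the patent's exact homomorphic scheme.

```python
# Scale each matrix element f_{a,d} by 10**y and round up (the encrypted
# parameter matrix P_g); decryption divides the magnification back out.
import numpy as np

Y = 7  # preset natural constant from the embodiment above

def encrypt_params(f: np.ndarray, y: int = Y) -> np.ndarray:
    return np.ceil(f * 10.0 ** y).astype(np.int64)

def decrypt_params(p: np.ndarray, y: int = Y) -> np.ndarray:
    return p.astype(np.float64) / 10.0 ** y

params = np.array([[0.1234567891, -0.5], [1.0, 2.7182818]])
assert np.allclose(decrypt_params(encrypt_params(params)), params, atol=1e-6)
```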

B2: a restoration parameter matrix W_g is constructed for each local trainer based on its encrypted parameter matrix.

In the construction of W_g, the sig function gives the sign of each element in the encrypted matrix, and x′ is a matrix element, x′ = f_{a,d}.

B3: a decryption parameter matrix is set, where Q is the decryption function of the homomorphic encryption matrix and the associated operator takes the inner product of corresponding matrix elements.

B4: according to the encryption and decryption parameter matrices of each local trainer's fused features, the local trainers exchange data with the global trainers asynchronously. As shown in FIG. 3, the dotted lines indicate that during the current communication round there is no communication between certain local trainers and the global trainer.

It should be noted that what the local trainers upload are encrypted gradients, and a global experience pool stores them. When the global experience pool is full, all samples in it are cleared and a secure aggregation updates the global trainer's network parameters; the aggregation sums the gradients uploaded by the local trainers.
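
The experience-pool bookkeeping can be sketched as below; the pool capacity and the flat-vector gradient representation are assumptions.

```python
# Global experience pool: store each local trainer's uploaded (encrypted)
# gradient; when full, clear the pool and return the summed aggregate used
# to update the global trainer's parameters.
from typing import Optional
import numpy as np

class GlobalExperiencePool:
    def __init__(self, capacity: int = 3):
        self.capacity = capacity
        self.pool = []

    def upload(self, encrypted_grad: np.ndarray) -> Optional[np.ndarray]:
        self.pool.append(encrypted_grad)
        if len(self.pool) < self.capacity:
            return None                        # keep waiting for more uploads
        aggregate = np.sum(self.pool, axis=0)  # summation-based aggregation
        self.pool.clear()                      # empty the pool after aggregating
        return aggregate
```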

In a specific embodiment, a local trainer can be understood as a client and the global trainer as a server. The absence of communication between a local trainer and the global trainer may occur because the local trainer's computing power is insufficient to finish the gradient computation within the prescribed time, or because the global trainer randomly selects only some local trainers for gradient broadcasting, leaving a few local trainers unable to update their parameters in that round.

When transferring data between the local trainers and the global trainer, the embodiment of the present invention sets the encryption and decryption parameter matrices based on the federated learning model and the homomorphic encryption model and transfers data asynchronously, effectively reducing the volume of transmitted data. This reduces communication overhead, avoids congestion of communication links, and ensures the efficiency of tongue image classification, while also reducing the probability of data loss and transmission errors during transfer.

Step 5: conduct comparative experiments on multiple training data sets to obtain comparative experimental data, analyze the data to obtain comparative experimental results, and output those results.

In an embodiment of the present invention, the comparative experiments on multiple training data sets include the following steps. A1: take the current multi-model fusion tongue image classification as the target tongue image classification rule, and extract the accumulated tongue image classification rules from the tongue image classification database as the reference tongue image classification rules.

In a specific embodiment, the target tongue image classification rule is abbreviated as the AFME algorithm, and the CapsNet+LSTM, ResNet+BILSTM, and ResNetblock+CapsNet algorithms are selected as the reference tongue image classification rules.

A3: set up the comparison data sets, use them as the experimental data sets, and formulate the evaluation indicators.

In a specific embodiment, the medical data sets 2CLS and ZXSFL are selected as the experimental data sets, and each data set is expanded by flipping images on the X axis, on the Y axis, and on both axes simultaneously. The basic information of each experimental data set is shown in Table 1.

Table 1. Basic information of the experimental data sets

Data set name | Data type | Original data set size | Expanded data set size | Categories
2CLS | image data set | 831 | 3324 | 4
ZXSFL | image data set | 1778 | 7112 | 2

In another specific embodiment, Accuracy, Precision, Recall-Score, and F1-Score are selected as the evaluation indicators, with Recall-Score abbreviated as Recall and F1-Score as F1. The computation of these indicators is well-established existing technology and is not elaborated here; the tongue image classification rules and evaluation indicators are listed in Table 2.
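
For concreteness, the four indicators can be computed with scikit-learn as below; macro averaging is an assumption, since the text does not state the averaging mode used for the four-class 2CLS experiments.

```python
# Compute Accuracy, Precision, Recall, and F1 on toy four-class predictions.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

y_true = [0, 1, 2, 3, 1, 0, 2, 3]
y_pred = [0, 1, 2, 1, 1, 0, 2, 3]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="macro"))
print("Recall   :", recall_score(y_true, y_pred, average="macro"))
print("F1       :", f1_score(y_true, y_pred, average="macro"))
```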

Table 2. Comparison algorithms and evaluation indicators

A2: randomly select Y_0 local trainers as the experimental local trainers and Y_1 global trainers as the experimental global trainers, and formulate the experimental data set allocation strategy.

Understandably, formulating the experimental data set allocation strategy includes the following. A2-1: combine the experimental global trainers and the experimental local trainers into a single set of experimental trainers, sort them with the experimental global trainers first, and take the sorted positions as the experimental trainers' numbers.

A2-2: assign each experimental trainer a share of the experimental data set according to a random allocation rule, giving the allocation scale of each experimental trainer's data set, where k′_q is the allocation scale of the q-th experimental trainer's data set, q is the experimental trainer number, q = 1, 2, ..., b, and k′_{q-1} is the allocation scale of the (q-1)-th experimental trainer's data set.

A2-3: set up a training data set and a test data set and fix their allocation ratios, recording the training ratio as k_train and the test ratio as k_test; applying k_train and k_test to each experimental trainer's data gives the allocation scales of its training and test data sets.

A2-4: the allocation scale of the experimental data sets, together with the allocation scales of the training and test data sets, constitutes the experimental data set allocation strategy.
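
A sketch of this allocation strategy follows, under the assumption that the random shares k′_q simply partition the whole data set (the exact relation between k′_q and k′_{q-1} is not reproduced here); the 80/20 split mirrors Table 3.

```python
# Assign each of b experimental trainers a random share k'_q of the data set,
# then split each share into training and test portions via k_train/k_test.
import numpy as np

def allocation_strategy(n_samples: int, b: int, k_train: float = 0.8,
                        seed: int = 0) -> list:
    rng = np.random.default_rng(seed)
    raw = rng.random(b)
    shares = raw / raw.sum()               # k'_1 .. k'_b, summing to 1
    plan = []
    for q, share in enumerate(shares, start=1):
        size = int(round(share * n_samples))
        n_train = int(size * k_train)
        plan.append({"trainer": q, "size": size,
                     "train": n_train, "test": size - n_train})
    return plan

for row in allocation_strategy(3324, 4):   # 2CLS expanded size, 4 trainers
    print(row)
```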

In a specific embodiment, for ease of analysis, one global trainer and three local trainers are selected as the experimental trainers; their experimental data set allocation is shown in Table 3.

Table 3. Experimental data set allocation of the experimental trainers

Trainer | Data set share | Training set | Test set
Global experimental trainer | 10% | 80% | 20%
Local experimental trainer 1 | 20% | 80% | 20%
Local experimental trainer 2 | 30% | 80% | 20%
Local experimental trainer 3 | 40% | 80% | 20%

A4: conduct fusion comparison experiments on each experimental data set based on the target tongue image classification rule and each reference tongue image classification rule, and record the fusion comparison experimental data.

In a specific embodiment, the fusion comparison experimental data of the target rule and each reference rule on the 2CLS experimental data set can be read from FIG. 5, where panels (a) through (d) show the F1, Accuracy, Precision, and Recall values of the fusion comparison experiments respectively.

In a specific embodiment, the fusion comparison experimental data of the target rule and each reference rule on the ZXSFL experimental data set can be read from FIG. 6, where panels (a) through (d) show the F1, Accuracy, Precision, and Recall values of the fusion comparison experiments respectively.

In another specific embodiment, to facilitate subsequent analysis, the fusion comparison experimental data are summarized in Tables 4, 5, and 6.

Table 4. Summary of the experiments on the 2CLS and ZXSFL experimental data sets

Table 5. Summary of the improvement of each evaluation indicator on the 2CLS experimental data set

Table 6. Summary of the improvement of each evaluation indicator on the ZXSFL experimental data set

Understandably, Tables 5 and 6 show that the target tongue image rule of the present invention, i.e., the AFME algorithm, improves average accuracy on the 2CLS experimental data set by at most 5.111% and at least 3.795%, and relative to the reference tongue image classification rules its accuracy improves by at most 8.010% and at least 3.031%. On the ZXSFL data set, the AFME algorithm improves average accuracy by at most 2.953% and at least 2.265%, and relative to the reference rules its accuracy improves by at least 4.972% and at most 8.829%. The AFME algorithm therefore shows a degree of superiority and robustness.

A5、更新测试数据集,将更新的测试数据集作为消融实验数据集,基于目标舌像分类规则和各参照舌像分类规则在消融实验数据集上开展消融对比实验,并记录消融对比实验数据,A5. Update the test data set, use the updated test data set as the ablation experiment data set, conduct ablation comparison experiments on the ablation experiment data set based on the target tongue image classification rules and the reference tongue image classification rules, and record the ablation comparison experiment data.

在一个具体实施例中,基于图7所示可得到目标舌像分类规则和各参照舌像分类规则在2CLS实验数据集上开展消融对比实验的消融对比实验数据,其中,图7(a)表示目标舌像分类规则和各参照舌像分类规则在2CLS实验数据集上开展消融对比实验的F1值,图7(b)表示目标舌像分类规则和各参照舌像分类规则在2CLS实验数据集上开展消融对比实验的Accuracy值,图7(c)表示目标舌像分类规则和各参照舌像分类规则在2CLS实验数据集上开展消融对比实验的Precision值,图7(d)表示目标舌像分类规则和各参照舌像分类规则在2CLS实验数据集上开展消融对比实验的Recall值。In a specific embodiment, based on FIG7 , ablation comparison experiment data of the target tongue image classification rule and each reference tongue image classification rule in the 2CLS experimental data set can be obtained, wherein FIG7(a) represents the F1 value of the ablation comparison experiment of the target tongue image classification rule and each reference tongue image classification rule in the 2CLS experimental data set, FIG7(b) represents the Accuracy value of the ablation comparison experiment of the target tongue image classification rule and each reference tongue image classification rule in the 2CLS experimental data set, FIG7(c) represents the Precision value of the ablation comparison experiment of the target tongue image classification rule and each reference tongue image classification rule in the 2CLS experimental data set, and FIG7(d) represents the Recall value of the ablation comparison experiment of the target tongue image classification rule and each reference tongue image classification rule in the 2CLS experimental data set.

In a specific embodiment, the ablation comparison experimental data of the target tongue image classification rule and each reference tongue image classification rule on the ZXSFL experimental data set can be obtained from FIG. 8, where FIG. 8(a) shows the F1 values, FIG. 8(b) the Accuracy values, FIG. 8(c) the Precision values, and FIG. 8(d) the Recall values of the ablation comparison experiment carried out on the ZXSFL experimental data set.

To facilitate subsequent analysis, the ablation comparison experimental data are summarized in Table 7.

Table 7. Summary of the ablation comparison experimental data

Understandably, Table 7 shows that when the AFME algorithm is trained without adding the expanded samples to the training set, its ability to predict the samples in the test set decreases, but the proportion of the decrease is small. By comparison, the accuracy of the CapsNet+LSTM algorithm drops by 9.64% on the 2CLS experimental data set and its prediction accuracy drops by 4.75% on the ZXSFL experimental data set, an average drop of 7.15% over the two data sets; the ResNetblock+CapsNet algorithm drops by 8.42% on the 2CLS data set and by 6.14% on the ZXSFL experimental data set, an average drop of 7.25%; the ResNet+BILSTM algorithm drops by 6.93% on the 2CLS data set and by 5.25% on the ZXSFL experimental data set, an average drop of 6.09%; whereas the target tongue image classification rule of the present invention, the AFME algorithm, drops by only 1.04% on the 2CLS data set and 2.42% on the ZXSFL experimental data set, an average drop of 1.73%.
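As a quick arithmetic check of the averages quoted above, assuming they are plain means of the two per-dataset drops (the small mismatches against the quoted 7.15% and 7.25% presumably reflect rounding of the underlying accuracies before publication):

```python
# Average accuracy drop over the 2CLS and ZXSFL data sets for each algorithm
# in the ablation experiment; the per-dataset percentages are those quoted
# in the discussion of Table 7 above.
drops = {
    "CapsNet+LSTM":        (9.64, 4.75),
    "ResNetblock+CapsNet": (8.42, 6.14),
    "ResNet+BILSTM":       (6.93, 5.25),
    "AFME (ours)":         (1.04, 2.42),
}
for name, (d_2cls, d_zxsfl) in drops.items():
    print(f"{name}: mean drop {(d_2cls + d_zxsfl) / 2:.2f}%")
```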

By conducting fusion comparison experiments and ablation comparison experiments, the embodiments of the present invention intuitively demonstrate the classification accuracy and robustness of the present tongue image classification method. Meanwhile, comparison experiments across multiple experimental data sets, multiple evaluation indexes and multiple control groups ensure that the evaluation of the classification accuracy is scientific and useful as a reference, provide data support for constructing subsequent tongue image classification models, facilitate the optimization of such models, and make them easier for subsequent users to select.

Further, analyzing the comparative experimental data includes: X1. Extract, from the fusion comparison experimental data, the improvement value of the target tongue image classification rule and of each reference tongue image classification rule for each evaluation index on each experimental data set.

X2. Subtract correspondingly the improvement values of the target tongue image classification rule and of each reference tongue image classification rule for each evaluation index on each experimental data set, and take the difference as the improvement value deviation.

X3. If the improvement value deviation between the target tongue image classification rule and a reference tongue image classification rule for a given evaluation index on a given experimental data set is greater than 0, take that reference tongue image classification rule as a first optimized tongue image classification rule, take that experimental data set as a first optimized experimental data set of the first optimized tongue image classification rule, and take that evaluation index as an optimization evaluation index of the first optimized experimental data set.

X4. Extract the improvement value deviations U_rvl of the target tongue image classification rule over each first optimized tongue image classification rule, for each first optimized experimental data set and each of its optimization evaluation indexes, where r denotes the first optimized tongue image classification rule number, r = 1, 2, ..., δ, v denotes the first optimized experimental data set number, v = 1, 2, ..., σ, and l denotes the optimization evaluation index number, l = 1, 2, ..., ε; then compute the fusion precision optimization tendency ψ_opt of the target tongue image classification rule from these deviations, where K denotes the number of tongue image classification rules, U0 is the set reference evaluation index improvement value, and δ, σ and ε are respectively the number of first optimized tongue image classification rules, the number of first optimized experimental data sets, and the number of optimization evaluation indexes.
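The exact expression for ψ_opt is not reproduced here. One plausible form consistent with the variables just defined, averaging the deviations normalized by the reference improvement value U0 and weighting by the share of outperformed rules, is the following sketch; it is a reconstruction under stated assumptions, not the patent's verbatim formula.

```latex
% A plausible reconstruction of psi_opt (an assumption, not the verbatim
% patent formula): the mean deviation, normalized by the reference
% improvement value U_0, scaled by the share of outperformed rules.
\psi_{opt} = \frac{\delta}{K-1} \cdot \frac{1}{\delta\,\sigma\,\varepsilon}
             \sum_{r=1}^{\delta} \sum_{v=1}^{\sigma} \sum_{l=1}^{\varepsilon}
             \frac{U_{rvl}}{U_{0}}
```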

X5. Screen out, from the ablation comparison experimental data, the decrease values of the target tongue image classification rule and of each reference tongue image classification rule for each evaluation index on each experimental data set, and subtract correspondingly the decrease values of the target rule and of each reference rule, obtaining the decrease value deviation.

X6. If the decrease value deviation between the target tongue image classification rule and a reference tongue image classification rule for a given evaluation index on a given experimental data set is less than 0, take that reference tongue image classification rule as a second optimized tongue image classification rule, take that experimental data set as a second optimized experimental data set of the second optimized tongue image classification rule, and take that evaluation index as an optimization evaluation index of the second optimized experimental data set.

X7. By the same statistical method as for ψ_opt, obtain the ablation precision optimization tendency ψ′_opt of the target tongue image classification rule.

X8. Compute the comprehensive precision optimization tendency ψ_comp of the target tongue image classification rule, where ψ0 and ψ1 are respectively the set reference fusion experiment precision optimization tendency and the set reference ablation experiment precision optimization tendency, and take ψ_opt, ψ′_opt and ψ_comp as the comparative experiment analysis results.
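Taken together, steps X1 to X8 form a small, mechanical analysis pipeline. The sketch below implements it under assumptions: the selection logic (X2, X3, X5, X6) follows the text, while the formulas for ψ_opt, ψ′_opt and ψ_comp use the assumed forms discussed above, since the patent's exact expressions are not reproduced here; every name and test value is illustrative.

```python
# Sketch of the X1-X8 analysis pipeline. Inputs are improvement values
# (fusion experiment) and decrease values (ablation experiment) per rule,
# per data set, per evaluation index.
import numpy as np

def tendency(target, references, u0, better_if_positive=True):
    """Deviation-based optimization tendency of the target rule.

    target: (n_datasets, n_indexes) values of the target rule.
    references: (n_rules, n_datasets, n_indexes) values of the reference rules.
    """
    dev = target[None, :, :] - references               # X2 / X5: deviations
    wins = dev > 0 if better_if_positive else dev < 0   # X3 / X6: "optimized" cells
    k = references.shape[0] + 1                         # total number of rules
    if not wins.any():
        return 0.0
    # assumed form: share of outperformed rules times mean normalized deviation
    outperformed = np.unique(np.where(wins)[0]).size
    return outperformed / (k - 1) * np.abs(dev[wins]).mean() / u0

rng = np.random.default_rng(0)
imp_t = rng.uniform(2, 9, (2, 4))      # target improvement values (X1)
imp_r = rng.uniform(1, 6, (3, 2, 4))   # reference improvement values (X1)
dec_t = rng.uniform(1, 3, (2, 4))      # target decrease values (X5)
dec_r = rng.uniform(2, 10, (3, 2, 4))  # reference decrease values (X5)

psi_opt = tendency(imp_t, imp_r, u0=1.0)                             # X4: psi_opt
psi_abl = tendency(dec_t, dec_r, u0=1.0, better_if_positive=False)   # X7: psi'_opt
psi0, psi1 = 1.0, 1.0                       # set reference tendencies
psi_comp = 0.5 * (psi_opt / psi0 + psi_abl / psi1)   # X8 (assumed weighting)
print(psi_opt, psi_abl, psi_comp)
```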

The above is merely an illustration of the concept of the present invention. Those skilled in the art may make various modifications or additions to the specific embodiments described, or substitute them in a similar manner; as long as these do not depart from the concept of the invention or exceed the scope it defines, they fall within the protection scope of the present invention.

Claims (8)

1. A multi-model fusion tongue image intelligent classification method based on data enhancement, characterized by comprising the following steps:
step 1, taking each tongue fur image as a training sample to form an original data set, expanding the original data set to obtain a training data set, and adding a prediction label and a real label to each training sample in the training data set;
step 2, respectively extracting features of different dimensions from each training sample in the training data set by combining a plurality of network structures, and fusing them to obtain fused features;
step 3, selecting difficult-to-train samples for each trainer, setting up a difficult-to-train sample queue from the selected samples, and performing reinforcement training on the difficult-to-train sample queue;
step 4, extracting the attribute labels of all trainers, marking the trainers whose attribute labels are global and local as the global trainers and the local trainers respectively, exchanging data between each local trainer and the global trainer, retraining on the exchanged data, and classifying tongue images according to the training results;
step 5, carrying out a plurality of training data set comparison experiments to obtain comparative experimental data, analyzing the comparative experimental data to obtain comparative experiment results, and outputting the comparative experiment results;
the method for performing the plurality of training data set comparison experiments comprises the following steps:
A1, taking the current multi-model fusion tongue image classification as the target tongue image classification rule, and extracting each currently accumulated and formulated tongue image classification rule from a tongue image classification database as a reference tongue image classification rule;
A2, randomly extracting d1 local trainers as the experimental local trainers and randomly extracting d2 global trainers as the experimental global trainers, where d1 and d2 are the set numbers of experimental local and global trainers, and formulating an experimental data set allocation strategy;
A3, setting comparison data sets, taking each comparison data set as an experimental data set, and simultaneously formulating the evaluation indexes;
A4, carrying out fusion comparison experiments on all experimental data sets based on the target tongue image classification rule and each reference tongue image classification rule, and recording the fusion comparison experimental data;
A5, updating the test data set, taking the updated test data set as the ablation experiment data set, carrying out ablation comparison experiments on the ablation experiment data set based on the target tongue image classification rule and each reference tongue image classification rule, and recording the ablation comparison experimental data;
The making of the experimental data set allocation strategy comprises the following steps:
integrating the numbers of the experimental global trainers and the experimental local trainers to obtain the experimental trainers, ordering the experimental trainers with the experimental global trainers placed first, and taking the ordering result as the experimental trainer numbers;
performing experimental data set proportion allocation for each experimental trainer according to a random allocation rule to obtain the allocation proportion of the experimental data set corresponding to each experimental trainer, wherein the random allocation rule is specifically expressed as ∑ qi = 1 over i = 1, 2, ..., d1+d2, where qi denotes the allocation proportion of the experimental data set corresponding to the i-th experimental trainer and i denotes the experimental trainer number;
setting a training data set and a test data set and setting their allocation proportions, denoting the allocation proportion of the training data set as a and the allocation proportion of the test data set as b, with a + b = 1, and applying a and b to the allocation proportion of each experimental trainer to obtain the allocation proportions of the training data set and the test data set corresponding to each experimental trainer;
and taking the allocation proportion of the experimental data set and the allocation proportions of the training data set and the test data set as the experimental data set allocation strategy.
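A minimal sketch of this allocation strategy, assuming the random allocation rule draws proportions q_i that sum to 1 (here via a Dirichlet draw); the values of d1, d2 and of the train and test proportions a and b are illustrative, not taken from the patent:

```python
# Sketch of the experimental data set allocation strategy of claim 1.
import numpy as np

rng = np.random.default_rng(42)
d1, d2 = 4, 2                      # experimental local / global trainers
n = d1 + d2                        # trainer numbers 1..n, global trainers first
q = rng.dirichlet(np.ones(n))      # random allocation proportions, sum(q) == 1
a, b = 0.8, 0.2                    # training / test allocation proportions, a + b == 1

for i, qi in enumerate(q, start=1):
    print(f"trainer {i}: data share {qi:.3f}, "
          f"train share {a * qi:.3f}, test share {b * qi:.3f}")
```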
2. The multi-model fusion tongue image intelligent classification method based on data enhancement according to claim 1, characterized in that the fusion in step 2 is specifically implemented as follows:
selecting training samples from the training data set by random voting, and marking each selected training sample as a target sample;
extracting static features of the target sample through a residual network;
extracting spatial features among the internal main body structures of the target sample through a capsule network;
reducing the dimensions of the static features extracted by the residual network and the spatial features extracted by the capsule network through several convolution layers;
and fusing the dimension-reduced static features and spatial features through a fusion algorithm.
3. The multi-model fusion tongue image intelligent classification method based on data enhancement according to claim 2, characterized in that the fusion algorithm is specifically expressed as F_k = f_k(x_k; ω, θ_k) = α·h_res + β·h_cap, where x_k is the training sample data of trainer k, k is the trainer number, k = 1, 2, ..., K, f_k is the network structure representation of the k-th trainer, ω denotes the parameters shared between the trainers, θ_k denotes the trainer's local private parameters, h_res and h_cap denote the static features and the spatial features respectively, α and β denote the classification contribution factors of the static features and the spatial features respectively, and F_k denotes the fused feature.
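A minimal numpy sketch of this fusion step, under the assumed reading that the dimension-reduced static and spatial features are combined as a weighted sum with the contribution factors α and β; the linear reduce() stands in for the convolutional dimension reduction, and all shapes and values are illustrative:

```python
# Sketch of the claim 3 fusion: static features from the residual network
# and spatial features from the capsule network, reduced to a common
# dimension and combined with contribution factors alpha and beta.
import numpy as np

def reduce(features, w):
    """Toy linear dimension reduction standing in for the conv layers."""
    return features @ w

rng = np.random.default_rng(7)
h_res = rng.normal(size=(1, 512))            # static features, residual network
h_cap = rng.normal(size=(1, 256))            # spatial features, capsule network
w_res = rng.normal(size=(512, 128))
w_cap = rng.normal(size=(256, 128))
alpha, beta = 0.6, 0.4                       # classification contribution factors

fused = alpha * reduce(h_res, w_res) + beta * reduce(h_cap, w_cap)
print(fused.shape)                           # (1, 128): fused feature F_k
```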
4. The multi-model fusion tongue image intelligent classification method based on data enhancement according to claim 1, characterized in that the difficult-to-train samples are obtained through a training sample selection model, the training sample selection model being specifically expressed as follows: for the j-th training sample in the training data set, j = 1, 2, ..., n, when the cosine distance between the prediction label and the real label corresponding to the j-th training sample is smaller than or equal to τ0, the set reference minimum cosine value between the features of difficult-to-train samples and the fused features, the training sample is collected into the difficult-to-train sample queue Q; ȳ_k denotes the mean of all real sample features corresponding to the real labels in the training samples of trainer k.
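A sketch of this hard-sample selection, assuming the queue collects every training sample whose predicted-label feature has a cosine similarity at or below the reference threshold τ0 with its real-label feature; the threshold and the feature vectors are illustrative:

```python
# Sketch of the claim 4 hard-sample queue: samples whose prediction is
# poorly aligned with the ground truth join the queue for reinforcement
# training.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

rng = np.random.default_rng(3)
pred_feats = rng.normal(size=(10, 64))   # features of predicted labels
true_feats = rng.normal(size=(10, 64))   # features of real labels
tau0 = 0.0                               # reference minimum cosine value

hard_queue = [j for j in range(10)
              if cosine(pred_feats[j], true_feats[j]) <= tau0]
print("hard-sample queue:", hard_queue)
```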
5. The multi-model fusion tongue image intelligent classification method based on data enhancement according to claim 2, characterized in that each local trainer exchanges data with the global trainer through the following steps:
taking the residual network and the capsule network as the basic network architecture of each trainer under a federated learning model, and, based on the federated learning model, each local trainer encrypting its fused features through a homomorphic encryption model, thereby obtaining the encryption parameter matrix E_m of each local trainer, where m denotes the local trainer number, m = 1, 2, ..., M;
constructing the restoring parameter matrix of each local trainer based on the encryption parameter matrix, in which a sign function characterizes the signs of the elements in the encryption matrix, and the construction involves the dimension of the matrix to be encrypted, the length of the vectors in each dimension of that matrix, the natural constant e, and the matrix elements;
setting the decryption parameter matrix, whose parameter is the decryption function of the homomorphic encryption matrix, the decryption function solving the inner products of the corresponding elements of the matrices;
and, according to the encryption parameter matrix and the decryption parameter matrix of the fused features corresponding to each local trainer, each local trainer exchanging data with the global trainer in an asynchronous transmission mode.
6. The multi-model fusion tongue image intelligent classification method based on data enhancement according to claim 5, characterized in that the homomorphic encryption model is expressed using a round-up operation, where ⌈·⌉ denotes the rounding-up symbol.
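The exact homomorphic encryption model of claim 6 is not recoverable here, so the sketch below substitutes pairwise additive masking, a standard secure-aggregation technique rather than the patent's scheme, for the encrypted local-global exchange: each local trainer uploads a masked parameter vector, the masks cancel in the sum, and the global trainer recovers only the aggregate.

```python
# Stand-in for the encrypted parameter exchange: pairwise additive masking.
# Trainer m adds the mask r_{m,m'} it shares with each later trainer m' and
# subtracts the mask each earlier trainer shares with it, so the masks
# cancel when the global trainer sums all uploads.
import numpy as np

rng = np.random.default_rng(11)
M, dim = 3, 8
params = [rng.normal(size=dim) for _ in range(M)]   # local fused-feature params

masks = {(m, mp): rng.normal(size=dim)
         for m in range(M) for mp in range(m + 1, M)}
uploads = []
for m in range(M):
    masked = params[m].copy()
    for (a, b), r in masks.items():
        if a == m:
            masked += r
        elif b == m:
            masked -= r
    uploads.append(masked)

aggregate = np.sum(uploads, axis=0)                 # masks cancel here
assert np.allclose(aggregate, np.sum(params, axis=0))
print(aggregate)
```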
7. The multi-model fusion tongue image intelligent classification method based on data enhancement according to claim 1, characterized in that said analyzing the comparative experimental data comprises:
extracting, from the fusion comparison experimental data, the improvement value of the target tongue image classification rule and of each reference tongue image classification rule for each evaluation index on each experimental data set;
subtracting correspondingly the improvement values of the target tongue image classification rule and of each reference tongue image classification rule for each evaluation index on each experimental data set, and taking the difference as the improvement value deviation;
if the improvement value deviation between the target tongue image classification rule and a reference tongue image classification rule for a given evaluation index on a given experimental data set is greater than 0, taking that reference tongue image classification rule as a first optimized tongue image classification rule, taking that experimental data set as a first optimized experimental data set of the first optimized tongue image classification rule, and taking that evaluation index as an optimization evaluation index of the first optimized experimental data set;
extracting the improvement value deviations U_rvl of the target tongue image classification rule over each first optimized tongue image classification rule for each first optimized experimental data set and each of its optimization evaluation indexes, where r denotes the first optimized tongue image classification rule number, r = 1, 2, ..., δ, v denotes the first optimized experimental data set number, v = 1, 2, ..., σ, and l denotes the optimization evaluation index number, l = 1, 2, ..., ε, and then counting the fusion precision optimization tendency ψ_opt of the target tongue image classification rule;
screening out, from the ablation comparison experimental data, the decrease values of the target tongue image classification rule and of each reference tongue image classification rule for each evaluation index on each experimental data set, and obtaining, by the same statistical method as for ψ_opt, the ablation precision optimization tendency ψ′_opt;
counting the comprehensive precision optimization tendency ψ_comp of the target tongue image classification rule, where ψ0 and ψ1 are respectively the set reference fusion experiment precision optimization tendency and the set reference ablation experiment precision optimization tendency, and taking ψ_opt, ψ′_opt and ψ_comp as the comparative experiment analysis results.
8. The multi-model fusion tongue image intelligent classification method based on data enhancement according to claim 7, characterized in that the fusion precision optimization tendency ψ_opt of the target tongue image classification rule is counted from the deviations U_rvl and the set reference evaluation index improvement value U0, where K denotes the number of tongue image classification rules, and δ, σ and ε are respectively the number of first optimized tongue image classification rules, the number of first optimized experimental data sets, and the number of optimization evaluation indexes.
CN202311513218.2A 2023-11-14 2023-11-14 Multi-model fusion tongue image intelligent classification method based on data enhancement Active CN117557844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311513218.2A CN117557844B (en) 2023-11-14 2023-11-14 Multi-model fusion tongue image intelligent classification method based on data enhancement

Publications (2)

Publication Number Publication Date
CN117557844A CN117557844A (en) 2024-02-13
CN117557844B (en) 2024-04-26

Family

ID=89816023

Country Status (1)

Country Link
CN (1) CN117557844B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111191660A (en) * 2019-12-30 2020-05-22 浙江工业大学 Rectal cancer pathology image classification method based on multi-channel collaborative capsule network
CN111223553A (en) * 2020-01-03 2020-06-02 大连理工大学 A two-stage deep transfer learning TCM tongue diagnosis model
CN111783831A (en) * 2020-05-29 2020-10-16 河海大学 Accurate classification of complex images based on multi-source and multi-label shared subspace learning
CN114581432A (en) * 2022-03-18 2022-06-03 河海大学 Tongue appearance tongue image segmentation method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant