CN113269196A - Method for realizing hyperspectral medical component analysis of graph convolution neural network - Google Patents

Method for realizing hyperspectral medical component analysis of graph convolution neural network

Info

Publication number
CN113269196A
CN113269196A (application CN202110811547.XA)
Authority
CN
China
Prior art keywords
hyperspectral
pixel
medical
neural network
super
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110811547.XA
Other languages
Chinese (zh)
Other versions
CN113269196B (en)
Inventor
王耀南
尹阿婷
毛建旭
曾凯
张辉
朱青
周显恩
李亚萍
赵禀睿
陈煜嵘
苏学叁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202110811547.XA priority Critical patent/CN113269196B/en
Publication of CN113269196A publication Critical patent/CN113269196A/en
Application granted granted Critical
Publication of CN113269196B publication Critical patent/CN113269196B/en
Priority to PCT/CN2022/076023 priority patent/WO2023000653A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J 3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J 3/28 Investigating the spectrum
    • G01J 3/2823 Imaging spectrometer
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N 21/25 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16C COMPUTATIONAL CHEMISTRY; CHEMOINFORMATICS; COMPUTATIONAL MATERIALS SCIENCE
    • G16C 20/00 Chemoinformatics, i.e. ICT specially adapted for the handling of physicochemical or structural data of chemical particles, elements, compounds or mixtures
    • G16C 20/20 Identification of molecular entities, parts thereof or of chemical compositions

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Geometry (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Crystallography & Structural Chemistry (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for realizing hyperspectral medical component analysis with a graph convolutional neural network. On the one hand, the medical hyperspectral image data are processed into graph data, which greatly reduces the number of pixels and effectively reduces the amount of data. On the other hand, a graph convolutional neural network model is used to extract the characteristic information of the drug, effectively learning the spatial relationship between the visual features in the drug hyperspectral image and the drug components, improving the representation ability of the drug-component classification features and the accuracy of the predicted components and attributes of the tested drug, and thereby enabling non-destructive and rapid detection and analysis of drug composition and quality.

Description

A Method for Realizing Hyperspectral Medical Component Analysis with a Graph Convolutional Neural Network

Technical Field

The invention relates to the field of intelligent hyperspectral detection and analysis of high-end pharmaceuticals, and in particular to a method for realizing hyperspectral medical component analysis with a graph convolutional neural network. The method introduces graph convolutional neural network technology and can be used for non-destructive analysis of the composition and quality of pharmaceuticals from hyperspectral images.

Background Art

Pharmaceutical safety is a major issue that concerns public health and economic development, and it has become a matter of livelihood and public safety under constant public attention; ensuring the quality and safety of medicines is of great significance to national stability and social harmony. Existing quality-testing methods for pharmaceutical ingredients, such as chemical assays and spectrophotometry, are only suitable for sampling inspection and are destructive, so they cannot meet the requirements of non-destructive pharmaceutical quality testing. In recent years, near-infrared spectroscopy has been widely applied in pharmaceutical analysis; its spectral information is a highly robust "fingerprint"-like feature that can be used to quantify and classify different pharmaceutical ingredients. Spectral detection, as a safeguard for verifying pharmaceutical quality, has been included in the 2015 edition of the Chinese Pharmacopoeia, but it can only measure quantitative information about the sample components at the spot illuminated by the light source and cannot analyze the overall composition of a drug. There is therefore an urgent need for new, general and reliable spectral detection and analysis methods for the quality of pharmaceutical ingredients.

Hyperspectral imaging can acquire the spectral information and the spatial information of the tested drug simultaneously, and the acquired data are rich enough to accurately reflect the overall properties of the inspected medicine, which meets the current need for non-destructive detection and analysis of a drug's overall composition. At present, hyperspectral imaging combined with chemometric algorithms has been applied in the pharmaceutical field to the identification of medicinal materials and tablets, the detection of the uniformity of active ingredients and excipients in solid tablets, and the monitoring of the composition and distribution of drug-loaded films, indicating that hyperspectral technology can serve as an efficient, non-destructive means of quality inspection in the pharmaceutical field. However, because medicines are diverse in type and complex in composition, and the volume of hyperspectral data is very large, chemometric methods have difficulty extracting effective characteristic information of a drug, and the prediction accuracy for the components and attributes of the tested drug is low. Deep learning excels at discovering complex relationships in multi-dimensional data and is currently one of the best approaches for processing and analyzing massive data. Among deep models, graph neural networks are a class of neural networks for processing graph-domain information; owing to their strong interpretability with respect to biomolecular structures and the functional relationships between molecules, they have attracted wide attention in medical fields such as brain science, medical diagnosis, and drug discovery and research. Graph neural networks learn the spatial features of topological data structures well, but they are difficult to apply directly to the component analysis of medical hyperspectral images. For the wide variety of medicine types and the difficult problem of analyzing complex drug compositions, it is therefore urgent to deeply explore the visual information of medical hyperspectral images and combine it with the spatial characteristics of the drug under test to improve the accuracy of drug component analysis.

Summary of the Invention

In view of this, the present invention proposes a method for realizing hyperspectral medical component analysis with a graph convolutional neural network, which learns the spectral features and the spatial distribution of the active components of drugs in medical hyperspectral images and thereby achieves non-destructive drug component analysis and rapid quality detection.

In one aspect, the present invention provides a method for realizing hyperspectral medical component analysis with a graph convolutional neural network, comprising the following steps:

Step 1. Acquire medical hyperspectral images and construct a medical hyperspectral data set, the data set comprising a training set and a test set;

Step 2. Segment the medical hyperspectral images in the training set with a superpixel segmentation algorithm to obtain non-overlapping superpixels, the non-overlapping superpixels constituting a medical hyperspectral superpixel set;

Step 3. For each superpixel, compute the pixel mean, centroid pixel position, perimeter, area and region azimuth angle, as well as the distances from the centroid pixel to the boundary of the superpixel region, and use these characteristic parameters to construct the feature matrix of the graph data;

Step 4. Taking each superpixel as a graph node and its nearest-neighbor superpixels as edges, construct a region adjacency graph and obtain the adjacency weight matrix of the graph data;

Step 5. Input the feature matrix, the adjacency weight matrix and the medical hyperspectral component labels corresponding to the hyperspectral images in the training set into a graph convolutional neural network for training, obtaining the model parameters of the graph convolutional neural network;

Step 6. Repeat steps 2 to 4 for the medical hyperspectral images in the test set to obtain the region adjacency graphs on which drug component analysis is to be performed, together with their feature matrices and adjacency weight matrices; input these feature matrices and adjacency weight matrices into the graph convolutional neural network model initialized with the model parameters trained in step 5, obtaining the drug component analysis results.

Further, step 1 specifically comprises the following process:

Step 1.1. Prepare drug samples: seven kinds of drug samples, namely cefprozil tablets, oxytetracycline tablets, chlorpheniramine maleate tablets, furosemide tablets, enteric-coated aspirin tablets, erythromycin ethylsuccinate tablets and Callicarpa nudiflora dispersible tablets;

Step 1.2. Acquire medical hyperspectral images and construct a medical hyperspectral data set $U$: use a hyperspectral sorter to acquire medical hyperspectral images of the drug samples, perform reflectance correction on the acquired images, and take the corrected images as the samples of the medical hyperspectral data set;

Step 1.3. Randomly divide the medical hyperspectral data set $U = \{(x_i, y_i)\}_{i=1}^{d}$ into a training set $T = \{(x_i^{t}, y_i^{t})\}_{i=1}^{s}$ and a test set $S = \{(x_i^{e}, y_i^{e})\}_{i=1}^{m}$, with $U = T \cup S$, where $x_i$ is the image of the $i$-th sample in $U$, $y_i$ is the drug-component label corresponding to the $i$-th sample in $U$, $x_i^{t}$ is the image of the $i$-th sample in the training set $T$, $y_i^{t}$ is the drug-component label corresponding to the $i$-th sample in $T$, $x_i^{e}$ is the image of the $i$-th sample in the test set $S$, $y_i^{e}$ is the drug-component label corresponding to the $i$-th sample in $S$, $d$ is the total number of samples in $U$, $s$ is the total number of samples in $T$, and $m$ is the total number of samples in $S$.

Further, a K-fold cross-validation method is used to divide the medical hyperspectral data set $U$ of step 1.3 into the training set $T$ and the test set $S$.

Further, step 2 is specifically embodied as follows: the SLIC algorithm is used to segment the medical hyperspectral images in the training set; by computing the spatial distance and the spectral distance between pixels, the superpixel cluster centers and boundary ranges are updated iteratively, and the iteration stops when the error between the new and old cluster centers is smaller than a preset threshold, yielding non-overlapping superpixels that constitute the medical hyperspectral superpixel set $V = \{v_1, v_2, \ldots, v_N\}$, where $v_i$ is the $i$-th superpixel and $N$ is the number of non-overlapping superpixels.

Further, step 3 is specifically embodied as follows: for each superpixel $v_i$ obtained in step 2, obtain its pixel mean $\bar{p}_i$, its centroid pixel $c_i$ at position $(x_i^{c}, y_i^{c})$, its perimeter, its area, its region azimuth angle, and the distances from the centroid pixel $c_i$ to the boundary of the superpixel region in the eight directions east, south, west, north, southeast, northeast, southwest and northwest, thereby obtaining the feature matrix $X$, $X \in \mathbb{R}^{N \times M}$, where $N$ is the number of superpixels, $M$ is the feature dimension, and $\mathbb{R}$ denotes the set of real numbers.

Further, the specific implementation of the adjacency weight matrix in step 4 comprises the following steps:

Step 4.1. From the medical hyperspectral superpixel set $V$ obtained in step 2, take each superpixel $v_i$ as a graph node, and use the K-nearest-neighbor algorithm to select the $K$ superpixels closest to $v_i$ to construct edges, thereby forming the region adjacency graph $G$;

Step 4.2. For each superpixel region in the medical hyperspectral superpixel set $V$ obtained in step 2, count its adjacent superpixels to obtain the adjacent-superpixel set $N(v_i)$;

Step 4.3. Using the pixel mean $\bar{p}_i$ of each superpixel $v_i$ obtained in step 3, compute the pixel-mean distance $d_{ij}^{\,p}$ between each pair of superpixels;

Step 4.4. Using the centroid pixel position $(x_i^{c}, y_i^{c})$ of each superpixel obtained in step 3, compute the centroid-coordinate distance $d_{ij}^{\,c}$ between each pair of superpixels;

Step 4.5. From the pixel-mean distances $d_{ij}^{\,p}$ obtained in step 4.3 and the centroid-coordinate distances $d_{ij}^{\,c}$ obtained in step 4.4, compute the adjacency weight matrix $A$, $A \in \mathbb{R}^{N \times N}$.

Further, the specific implementation of step 5 comprises the following steps:

Step 5.1. Initialize the model parameters $\theta$ of the graph convolutional neural network model with the Xavier method;

Step 5.2. From the region adjacency graph $G$ constructed in step 4, compute the degree matrix $D$ of the graph nodes, $D_{ii} = \sum_{j} A_{ij}$;

Step 5.3. The feature $H$ of each graph convolutional network (GCN) layer in the model is computed as

$H^{(l+1)} = \sigma\big( D^{-\frac{1}{2}} \tilde{A} D^{-\frac{1}{2}} H^{(l)} W^{(l)} \big)$   (4)

where $\tilde{A}$ is the adjacency weight matrix with added self-connections, $W$ is the learnable weight parameter matrix, $\sigma(\cdot)$ is the activation function, and when $l = 0$, $H^{(0)} = X$, with $X$ the feature matrix;

Step 5.4. In the training stage, $W$ is adjusted through graph convolution and differentiable pooling operations so as to continuously reduce the error and optimize the output; the loss over the training samples is computed as

$\mathcal{L} = \frac{1}{s}\sum_{i=1}^{s} L\big(\hat{y}_i^{t}, y_i^{t}\big)$   (5)

where $y_i^{t}$ is the true label of the training sample $x_i^{t}$, $\hat{y}_i^{t}$ is the corresponding prediction, $s$ is the number of training samples, and $L$ is the loss function;

Step 5.5. According to the gradient of the loss function $L$, the model parameters $\theta$ of the whole graph convolutional neural network model are adjusted by back-propagation and taken as the network initialization parameters of step 5.1; steps 5.1 to 5.5 are iterated until the drug-component analysis accuracy of the graph convolutional neural network model becomes stable.

Further, the pixel-mean distance $d_{ij}^{\,p}$ between superpixels in step 4.3 is computed as

$d_{ij}^{\,p} = \lVert \bar{p}_i - \bar{p}_j \rVert_2$   (1)

where $\bar{p}_i$ denotes the pixel mean of the $i$-th superpixel and $\bar{p}_j$ denotes the pixel mean of the $j$-th superpixel.

Further, the centroid-coordinate distance $d_{ij}^{\,c}$ between superpixels in step 4.4 is computed as

$d_{ij}^{\,c} = \sqrt{\big(x_i^{c} - x_j^{c}\big)^2 + \big(y_i^{c} - y_j^{c}\big)^2}$   (2)

where $c_i$ denotes the centroid of the $i$-th superpixel, $c_j$ denotes the centroid of the $j$-th superpixel, $x_i^{c}$ and $y_i^{c}$ denote the abscissa and ordinate of the $i$-th superpixel centroid, and $x_j^{c}$ and $y_j^{c}$ denote the abscissa and ordinate of the $j$-th superpixel centroid.

Further, the adjacency weight matrix $A$ in step 4.5 is computed from the pixel-mean distances $d_{ij}^{\,p}$ and the centroid-coordinate distances $d_{ij}^{\,c}$ according to Equation (3).

Therefore, in the method for realizing hyperspectral medical component analysis with a graph convolutional neural network provided by the present invention, medical hyperspectral images are first acquired and a medical hyperspectral data set comprising a training set and a test set is constructed. Second, the medical hyperspectral images in the training set are segmented with a superpixel segmentation algorithm to obtain non-overlapping superpixels. Then, for each superpixel, the pixel mean, centroid pixel position, perimeter, area and region azimuth angle, as well as the distances from the centroid pixel to the boundary of the superpixel region, are computed and used to construct the feature matrix of the graph data. Next, taking each superpixel as a graph node and its nearest-neighbor superpixels as edges, a region adjacency graph is constructed and the adjacency weight matrix of the graph data is obtained. The feature matrix, the adjacency weight matrix and the medical hyperspectral component labels corresponding to the training-set images are then input into a graph convolutional neural network for training to obtain the model parameters of the network. Finally, steps 2 to 4 are repeated for the test-set images to obtain the region adjacency graphs on which drug component analysis is to be performed, together with their feature matrices and adjacency weight matrices, which are input into the graph convolutional neural network model initialized with the model parameters trained in step 5 to obtain the drug component analysis results. Compared with the prior art, the invention, on the one hand, processes the medical hyperspectral image data into graph data, which greatly reduces the number of pixels and effectively reduces the amount of data; on the other hand, it extracts the characteristic information of the drug with a graph convolutional neural network model, effectively learning the spatial relationship between the visual features in the medical hyperspectral image and the drug components, improving the representation ability of the drug-component classification features and the accuracy of the components and attributes of the tested drug, solving the difficulties caused by the diversity of medicine types, the complexity of their composition and their varied physical properties, and achieving non-destructive drug component analysis and rapid quality detection.

Brief Description of the Drawings

The accompanying drawings, which constitute a part of the present invention, are provided for a further understanding of the invention; the exemplary embodiments of the invention and their description are used to explain the invention and do not constitute an improper limitation of the invention. In the drawings:

Fig. 1 is a flowchart of a method for realizing hyperspectral medical component analysis with a graph convolutional neural network provided by Embodiment 1 of the present invention;

Fig. 2 is a flowchart of a method for realizing hyperspectral medical component analysis with a graph convolutional neural network provided by Embodiment 2 of the present invention;

Fig. 3 is a flowchart of the adjacency-weight-matrix acquisition process in an embodiment of the present invention;

Fig. 4 is a schematic diagram of the structural framework of the graph convolutional neural network model in an embodiment of the present invention;

Fig. 5 is a schematic diagram of some samples of the hyperspectral medical component analysis data set in an embodiment of the present invention.

Detailed Description of the Embodiments

It should be noted that, provided there is no conflict, the embodiments of the present invention and the features in the embodiments may be combined with each other. The present invention will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.

Fig. 1 is a flowchart of a method for realizing hyperspectral medical component analysis with a graph convolutional neural network provided by Embodiment 1 of the present invention. As shown in Fig. 1, the method of the present invention is implemented through the following steps:

Step 1. Acquire medical hyperspectral images and construct a medical hyperspectral data set, the data set comprising a training set and a test set;

Step 2. Segment the medical hyperspectral images in the training set with a superpixel segmentation algorithm to obtain non-overlapping superpixels, the non-overlapping superpixels constituting a medical hyperspectral superpixel set;

Step 3. For each superpixel, compute the pixel mean, centroid pixel position, perimeter, area and region azimuth angle, as well as the distances from the centroid pixel to the boundary of the superpixel region, and use these characteristic parameters to construct the feature matrix of the graph data;

Step 4. Taking each superpixel as a graph node and its nearest-neighbor superpixels as edges, construct a region adjacency graph and obtain the adjacency weight matrix of the graph data;

Step 5. Input the feature matrix, the adjacency weight matrix and the medical hyperspectral component labels corresponding to the hyperspectral images in the training set into a graph convolutional neural network for training, obtaining the model parameters of the graph convolutional neural network;

Step 6. Repeat steps 2 to 4 for the medical hyperspectral images in the test set to obtain the region adjacency graphs on which drug component analysis is to be performed, together with their feature matrices and adjacency weight matrices; input these feature matrices and adjacency weight matrices into the graph convolutional neural network model initialized with the model parameters trained in step 5, obtaining the drug component analysis results.

In the present invention, medical hyperspectral images are first acquired and a medical hyperspectral data set comprising a training set and a test set is constructed; second, the medical hyperspectral images in the training set are segmented with a superpixel segmentation algorithm to obtain non-overlapping superpixels; then, for each superpixel, the pixel mean, centroid pixel position, perimeter, area, region azimuth angle and the distances from the centroid pixel to the boundary of the superpixel region are computed and used to construct the feature matrix of the graph data; next, taking each superpixel as a graph node and its nearest-neighbor superpixels as edges, a region adjacency graph is constructed and the adjacency weight matrix of the graph data is obtained; the feature matrix, the adjacency weight matrix and the medical hyperspectral component labels corresponding to the training-set images are then input into a graph convolutional neural network for training to obtain its model parameters; finally, steps 2 to 4 are repeated for the test-set images to obtain the region adjacency graphs to be analyzed, together with their feature matrices and adjacency weight matrices, which are input into the graph convolutional neural network model initialized with the model parameters trained in step 5 to obtain the drug component analysis results. Compared with the prior art, the present invention can accurately analyze the different components of the drug samples in medical hyperspectral images, solves the difficulties caused by the diversity of medicine types, the complexity of their composition and their varied physical properties, and achieves non-destructive drug component analysis and rapid quality detection.

Referring to Figs. 2 to 4, Fig. 2 is a flowchart of a method for realizing hyperspectral medical component analysis with a graph convolutional neural network provided by Embodiment 2 of the present invention; Fig. 3 is a flowchart of the adjacency-weight-matrix acquisition process in an embodiment of the present invention; Fig. 4 is a schematic diagram of the structural framework of the graph convolutional neural network model in an embodiment of the present invention.

A method for realizing hyperspectral medical component analysis with a graph convolutional neural network comprises the following steps:

Step 1.1. Prepare a plurality of different drug samples;

It should be noted that in this embodiment the experiments are carried out with seven drug samples, namely cefprozil tablets, oxytetracycline tablets, chlorpheniramine maleate tablets, furosemide tablets, enteric-coated aspirin tablets, erythromycin ethylsuccinate tablets and Callicarpa nudiflora dispersible tablets, but the number and types of drugs are not limited to these. Fig. 5 shows some samples of the hyperspectral medical component analysis data set for cefprozil tablets, chlorpheniramine maleate tablets and Callicarpa nudiflora dispersible tablets; specifically, in Fig. 5, (a) is a sample image of the Callicarpa nudiflora dispersible tablets, (b) is a sample image of the cefprozil tablets, and (c) is a sample image of the chlorpheniramine maleate tablets.

Step 1.2. Acquire medical hyperspectral images and construct a medical hyperspectral data set $U$: use a hyperspectral sorter to acquire medical hyperspectral images of the drug samples, perform reflectance correction on the acquired images, and take the corrected images as the samples of the medical hyperspectral data set;

It should be noted that in the above process the hyperspectral sorter is preferably a Sichuan Shuangli Hepu hyperspectral sorter (V10E, N25E-SWIR), with spectral ranges of 400-1000 nm and 1000-2500 nm respectively;
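
The reflectance correction mentioned above is commonly performed with white and dark reference frames; since the patent does not give its exact formula, the Python sketch below only illustrates the standard black/white-reference calibration as an assumption.

```python
import numpy as np

def reflectance_correction(raw_cube, white_ref, dark_ref, eps=1e-8):
    """Standard black/white-reference calibration (assumed, not specified
    in the patent): R = (raw - dark) / (white - dark).

    raw_cube, white_ref, dark_ref: arrays of shape (H, W, bands).
    Returns the reflectance cube clipped to [0, 1].
    """
    reflectance = (raw_cube - dark_ref) / (white_ref - dark_ref + eps)
    return np.clip(reflectance, 0.0, 1.0)
```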

Step 1.3. Randomly divide the medical hyperspectral data set $U = \{(x_i, y_i)\}_{i=1}^{d}$ into a training set $T = \{(x_i^{t}, y_i^{t})\}_{i=1}^{s}$ and a test set $S = \{(x_i^{e}, y_i^{e})\}_{i=1}^{m}$, with $U = T \cup S$, where $x_i$ is the image of the $i$-th sample in $U$, $y_i$ is the drug-component label corresponding to the $i$-th sample in $U$, $x_i^{t}$ is the image of the $i$-th sample in the training set $T$, $y_i^{t}$ is the drug-component label corresponding to the $i$-th sample in $T$, $x_i^{e}$ is the image of the $i$-th sample in the test set $S$, $y_i^{e}$ is the drug-component label corresponding to the $i$-th sample in $S$, $d$ is the total number of samples in $U$, $s$ is the total number of samples in $T$, and $m$ is the total number of samples in $S$;

Step 2. Segment the medical hyperspectral images in the training set with a superpixel segmentation algorithm to obtain non-overlapping superpixels, the non-overlapping superpixels constituting a medical hyperspectral superpixel set;

Preferably, this step is specifically embodied as follows: the SLIC (Simple Linear Iterative Clustering) algorithm is used to segment the medical hyperspectral images in the training set; by computing the spatial distance and the spectral distance between pixels, the superpixel cluster centers and boundary ranges are updated iteratively, and the iteration stops when the error between the new and old cluster centers is smaller than a preset threshold, yielding non-overlapping superpixels that constitute the medical hyperspectral superpixel set $V = \{v_1, v_2, \ldots, v_N\}$, where $v_i$ is the $i$-th superpixel and $N$ is the number of non-overlapping superpixels;
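
For illustration, the superpixel segmentation of this step can be sketched with scikit-image as follows; the parameter values n_segments and compactness are assumptions and not values specified by the invention.

```python
import numpy as np
from skimage.segmentation import slic

def segment_superpixels(hsi_cube, n_segments=200, compactness=0.1):
    """Split a hyperspectral cube of shape (H, W, bands) into non-overlapping
    superpixels with SLIC.  Returns an (H, W) label map whose values
    1..N index the superpixels v_1..v_N.
    """
    # Each band is treated as a feature channel (channel_axis=-1), so the
    # clustering distance combines spatial and spectral terms.
    labels = slic(
        hsi_cube.astype(np.float64),
        n_segments=n_segments,
        compactness=compactness,
        channel_axis=-1,
        start_label=1,
    )
    return labels
```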

Step 3. For each superpixel, compute the pixel mean, centroid pixel position, perimeter, area and region azimuth angle, as well as the distances from the centroid pixel to the boundary of the superpixel region, and use these characteristic parameters to construct the feature matrix of the graph data;

Specifically, this step is embodied as follows: for each superpixel $v_i$ obtained in step 2, obtain its pixel mean $\bar{p}_i$, its centroid pixel $c_i$ at position $(x_i^{c}, y_i^{c})$, its perimeter, its area, its region azimuth angle, and the distances from the centroid pixel $c_i$ to the boundary of the superpixel region in the eight directions east, south, west, north, southeast, northeast, southwest and northwest, thereby obtaining the feature matrix $X$, $X \in \mathbb{R}^{N \times M}$, where $N$ is the number of superpixels, $M$ is the feature dimension, and $\mathbb{R}$ denotes the set of real numbers;
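
A minimal sketch of the node-feature construction of step 3, using scikit-image region properties for the geometric attributes; the eight centroid-to-boundary distances are obtained here by stepping outward from the centroid, which is an assumed implementation rather than a procedure prescribed by the patent.

```python
import numpy as np
from skimage.measure import regionprops

def build_feature_matrix(hsi_cube, labels):
    """Build X in R^{N x M}: for each superpixel, concatenate its mean
    spectrum, centroid position, perimeter, area, orientation (azimuth)
    and the centroid-to-boundary distances in 8 directions."""
    directions = [(0, 1), (1, 0), (0, -1), (-1, 0),    # E, S, W, N
                  (1, 1), (-1, 1), (1, -1), (-1, -1)]  # SE, NE, SW, NW
    features = []
    for prop in regionprops(labels):
        mask = labels == prop.label
        mean_spectrum = hsi_cube[mask].mean(axis=0)            # pixel mean
        cy, cx = prop.centroid                                 # centroid (row, col)
        geom = [cx, cy, prop.perimeter, prop.area, prop.orientation]
        dists = []                                             # 8-direction distances
        for dy, dx in directions:
            r, y, x = 0, cy, cx
            while (0 <= int(round(y)) < labels.shape[0]
                   and 0 <= int(round(x)) < labels.shape[1]
                   and labels[int(round(y)), int(round(x))] == prop.label):
                y, x, r = y + dy, x + dx, r + 1
            dists.append(r)
        features.append(np.concatenate([mean_spectrum, geom, dists]))
    return np.asarray(features)   # shape (N, M)
```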

Step 4. Taking each superpixel as a graph node and its nearest-neighbor superpixels as edges, construct a region adjacency graph and obtain the adjacency weight matrix of the graph data; specifically, referring to Fig. 3, this step is decomposed into the following process:

Step 4.1. From the medical hyperspectral superpixel set $V$ obtained in step 2, take each superpixel $v_i$ as a graph node, and use the K-nearest-neighbor algorithm to select the $K$ superpixels closest to $v_i$ to construct edges, thereby forming the region adjacency graph $G$; here, the value of $K$ is 8;

Step 4.2. For each superpixel region in the medical hyperspectral superpixel set $V$ obtained in step 2, count its adjacent superpixels to obtain the adjacent-superpixel set $N(v_i)$;

Step 4.3. Using the pixel mean $\bar{p}_i$ of each superpixel $v_i$ obtained in step 3, compute the pixel-mean distance $d_{ij}^{\,p}$ between each pair of superpixels, where $d_{ij}^{\,p}$ is computed as

$d_{ij}^{\,p} = \lVert \bar{p}_i - \bar{p}_j \rVert_2$   (1)

where $\bar{p}_i$ denotes the pixel mean of the $i$-th superpixel and $\bar{p}_j$ denotes the pixel mean of the $j$-th superpixel;

Step 4.4. Using the centroid pixel position $(x_i^{c}, y_i^{c})$ of each superpixel obtained in step 3, compute the centroid-coordinate distance $d_{ij}^{\,c}$ between each pair of superpixels, where $d_{ij}^{\,c}$ is computed as

$d_{ij}^{\,c} = \sqrt{\big(x_i^{c} - x_j^{c}\big)^2 + \big(y_i^{c} - y_j^{c}\big)^2}$   (2)

where $c_i$ denotes the centroid of the $i$-th superpixel, $c_j$ denotes the centroid of the $j$-th superpixel, $x_i^{c}$ and $y_i^{c}$ denote the abscissa and ordinate of the $i$-th superpixel centroid, and $x_j^{c}$ and $y_j^{c}$ denote the abscissa and ordinate of the $j$-th superpixel centroid;

Step 4.5. From the pixel-mean distances $d_{ij}^{\,p}$ obtained in step 4.3 and the centroid-coordinate distances $d_{ij}^{\,c}$ obtained in step 4.4, compute the adjacency weight matrix $A$, $A \in \mathbb{R}^{N \times N}$, according to Equation (3);
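
The region adjacency graph and weight matrix of step 4 can be sketched as below. The exact form of Equation (3) is not reproduced in the text above, so the Gaussian-kernel combination of the two distances used here (with bandwidths sigma_p and sigma_c) is an assumption; the choice of K = 8 neighbours follows the embodiment.

```python
import numpy as np

def build_adjacency(mean_spectra, centroids, k=8, sigma_p=1.0, sigma_c=10.0):
    """Build the N x N adjacency weight matrix A of the region adjacency
    graph: connect each superpixel to its k nearest neighbours (by centroid
    distance) and weight edges from the pixel-mean distance d_p (Eq. 1) and
    the centroid distance d_c (Eq. 2).  The Gaussian weighting is an assumed
    stand-in for Equation (3); sigma_p and sigma_c are illustrative bandwidths.
    """
    n = len(mean_spectra)
    d_p = np.linalg.norm(mean_spectra[:, None, :] - mean_spectra[None, :, :], axis=-1)
    d_c = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)

    A = np.zeros((n, n))
    for i in range(n):
        nearest = np.argsort(d_c[i])[1:k + 1]       # k nearest neighbours, excluding i
        A[i, nearest] = np.exp(-(d_p[i, nearest] ** 2) / sigma_p ** 2
                               - (d_c[i, nearest] ** 2) / sigma_c ** 2)
    return np.maximum(A, A.T)                       # symmetrise the graph
```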

Step 5. Input the feature matrix, the adjacency weight matrix and the medical hyperspectral component labels corresponding to the hyperspectral images in the training set into a graph convolutional neural network for training, obtaining the model parameters of the graph convolutional neural network; Fig. 4 is a schematic diagram of the structural framework of the graph convolutional neural network model of this embodiment;

Step 6. Repeat steps 2 to 4 for the medical hyperspectral images in the test set to obtain the region adjacency graphs on which drug component analysis is to be performed, together with their feature matrices and adjacency weight matrices; input these feature matrices and adjacency weight matrices into the graph convolutional neural network model initialized with the model parameters trained in step 5, obtaining the drug component analysis results.

As a preferred embodiment of the present invention, a K-fold cross-validation method is used to divide the medical hyperspectral data set $U$ of step 1.3 into the training set $T$ and the test set $S$, with $K$ taken as 10.
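
A minimal sketch of the 10-fold split with scikit-learn; the function and variable names are illustrative and not part of the patent.

```python
from sklearn.model_selection import KFold

def kfold_splits(samples, k=10, seed=0):
    """Yield (train_idx, test_idx) index splits of the data set for
    K-fold cross-validation (K = 10 in this embodiment)."""
    kf = KFold(n_splits=k, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(samples):
        yield train_idx, test_idx
```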

Meanwhile, in a further technical scheme, the specific implementation of step 5, in which the feature matrix, the adjacency weight matrix and the medical hyperspectral component labels corresponding to the training-set hyperspectral images are input into the graph convolutional neural network for training to obtain its model parameters, comprises the following steps:

Step 5.1. Initialize the model parameters $\theta$ of the graph convolutional neural network model with the Xavier method; it should be noted that the Xavier method is an effective neural-network parameter initialization method whose main purpose is to make the output variance of each layer of the network as equal as possible;

Step 5.2. From the region adjacency graph $G$ constructed in step 4, compute the degree matrix $D$ of the graph nodes, $D_{ii} = \sum_{j} A_{ij}$;

Step 5.3. The feature $H$ of each graph convolutional network (GCN) layer in the model is computed as

$H^{(l+1)} = \sigma\big( D^{-\frac{1}{2}} \tilde{A} D^{-\frac{1}{2}} H^{(l)} W^{(l)} \big)$   (4)

where $\tilde{A}$ is the adjacency weight matrix with added self-connections, $W$ is the learnable weight parameter matrix, $\sigma(\cdot)$ is the activation function, and when $l = 0$, $H^{(0)} = X$, with $X$ the feature matrix;
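
The propagation rule of Equation (4) can be written out numerically as below. The symmetric normalization with self-connections is the standard GCN formulation and matches the quantities named above (adjacency $A$, degree matrix $D$, learnable $W$, activation $\sigma$); ReLU is an assumed choice of activation, so the sketch is a reconstruction rather than the patent's exact code.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution layer:
    H_{l+1} = sigma(D^{-1/2} A_hat D^{-1/2} H_l W_l),
    with A_hat = A + I (self-connections) and sigma = ReLU (assumed)."""
    A_hat = A + np.eye(A.shape[0])            # add self-connections
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(H_next, 0.0)            # ReLU activation

# Example usage: with H0 = X (the N x M feature matrix) and an M x F weight
# matrix W0, the first-layer features are H1 = gcn_layer(X, A, W0).
```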

Step 5.4. In the training stage, $W$ is adjusted through graph convolution and differentiable pooling operations so as to continuously reduce the error and optimize the output; the loss over the training samples is computed as

$\mathcal{L} = \frac{1}{s}\sum_{i=1}^{s} L\big(\hat{y}_i^{t}, y_i^{t}\big)$   (5)

where $y_i^{t}$ is the true label of the training sample $x_i^{t}$, $\hat{y}_i^{t}$ is the corresponding prediction, $s$ is the number of training samples, and $L$ is the loss function; here $L$ is the cross-entropy loss, computed as

$L = -\frac{1}{s}\sum_{i=1}^{s} y_i^{t} \log \hat{y}_i^{t}$   (6)

where $y_i^{t}$ is the true component of the training sample $x_i^{t}$, $\hat{y}_i^{t}$ is the predicted component of the training sample $x_i^{t}$, and $s$ is the number of samples.
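
A minimal training sketch for steps 5.4 and 5.5, assuming a two-layer GCN with mean pooling as a stand-in for the graph convolution, pooling and output layers of Fig. 4. The optimizer, learning rate and epoch count are illustrative assumptions; only the use of a cross-entropy objective over the training samples, minimized by back-propagation, comes from Equations (5) and (6). Each graph is given as a node-feature matrix X and a pre-normalized adjacency A_norm (the $D^{-1/2}\tilde{A}D^{-1/2}$ of Equation (4)).

```python
import torch
import torch.nn.functional as F

class SimpleGCN(torch.nn.Module):
    """Two-layer GCN with mean pooling; an illustrative stand-in for the
    graph convolution / pooling / output architecture of Fig. 4."""
    def __init__(self, in_dim, hidden_dim, n_classes):
        super().__init__()
        self.W1 = torch.nn.Linear(in_dim, hidden_dim, bias=False)
        self.W2 = torch.nn.Linear(hidden_dim, n_classes, bias=False)

    def forward(self, X, A_norm):
        H = torch.relu(A_norm @ self.W1(X))   # Equation (4), layer 1
        H = A_norm @ self.W2(H)               # layer 2
        return H.mean(dim=0)                  # graph-level pooling -> logits

def train(model, graphs, labels, epochs=100, lr=1e-3):
    """graphs: list of (X, A_norm) tensors; labels: list of class indices.
    Minimizes the average cross-entropy loss (Equations (5)-(6))."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        logits = torch.stack([model(X, A) for X, A in graphs])
        loss = F.cross_entropy(logits, torch.tensor(labels))   # Eq. (5)-(6)
        loss.backward()                                        # step 5.5
        opt.step()
    return model
```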

The graph convolutional neural network model in Fig. 4 comprises a graph convolution layer, a graph pooling layer and an output layer.

Step 5.5. According to the gradient of the loss function $L$, the model parameters $\theta$ of the whole graph convolutional neural network model are adjusted by back-propagation and taken as the network initialization parameters of step 5.1; steps 5.1 to 5.5 are iterated until the drug-component analysis accuracy of the graph convolutional neural network model becomes stable.

Compared with the prior art, the present invention processes the medical hyperspectral image data into graph data, which greatly reduces the number of pixels and effectively reduces the amount of data; it extracts the characteristic information of the drug with a graph convolutional neural network, effectively learning the spatial relationship between the visual features in the drug hyperspectral image and the drug components, improving the representation ability of the drug-component classification features and the accuracy of the components and attributes of the tested drug, and enabling non-destructive and rapid detection and analysis of drug composition and quality.

The above descriptions are only preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A method for realizing hyperspectral medical component analysis of a graph convolution neural network is characterized by comprising the following steps:
step 1, acquiring a medical hyperspectral image, and constructing a medical hyperspectral data set, wherein the medical hyperspectral data set comprises a training set and a testing set;
step 2, segmenting the medical hyperspectral images in the training set by utilizing a superpixel segmentation algorithm to obtain mutually non-overlapping superpixels, wherein the mutually non-overlapping superpixels form a medical hyperspectral superpixel set;
step 3, counting, for each superpixel, the characteristic parameters of pixel mean value, centroid pixel position, perimeter, area, region azimuth angle, and the distances from the centroid pixel to the boundary of the superpixel region, and constructing a feature matrix of the graph data;
step 4, constructing a region adjacency graph by taking each super pixel as a graph node and the nearest neighbor super pixel as an edge, and obtaining an adjacency weight matrix of graph data;
step 5, inputting the feature matrix, the adjacency weight matrix and the medical hyperspectral component labels corresponding to the medical hyperspectral images in the training set into a graph convolutional neural network for training to obtain model parameters of the graph convolutional neural network;
step 6, repeating steps 2 to 4 on the medical hyperspectral images in the test set to obtain a region adjacency graph requiring drug component analysis, obtaining the feature matrix and adjacency weight matrix of that region adjacency graph, and inputting the feature matrix and adjacency weight matrix obtained on the test set into a graph convolutional neural network model initialized with the model parameters trained in step 5 to obtain the drug component analysis result.
2. The method for implementing hyperspectral medical component analysis with a graph convolutional neural network according to claim 1, wherein step 1 specifically comprises the following steps:
step 1.1, preparing a plurality of different drug samples;
step 1.2, acquiring medical hyperspectral images and constructing a medical hyperspectral dataset T: acquiring a medical hyperspectral image of each drug sample with a hyperspectral sorter, performing reflectivity correction on the acquired medical hyperspectral image, and taking the corrected image as a sample of the medical hyperspectral dataset;
step 1.3, randomly partitioning the medical hyperspectral dataset T = {(x_i, y_i)}, i = 1, …, d, into a training set T_train = {(x_i^tr, y_i^tr)}, i = 1, …, s, and a test set T_test = {(x_i^te, y_i^te)}, i = 1, …, m, where x_i is the image of the i-th sample in T, y_i is the drug component label corresponding to the i-th sample in T, x_i^tr is the image of the i-th sample in the training set T_train, y_i^tr is the drug component label corresponding to the i-th sample in T_train, x_i^te is the image of the i-th sample in the test set T_test, y_i^te is the drug component label corresponding to the i-th sample in T_test, d denotes the total number of samples in the medical hyperspectral dataset T, s denotes the total number of samples in the training set T_train, and m denotes the total number of samples in the test set T_test.
3. The method for implementing hyperspectral medical component analysis with a graph convolutional neural network according to claim 2, wherein a K-fold cross-validation method is adopted in step 1.3 to divide the medical hyperspectral dataset T into the training set T_train and the test set T_test.
4. The method for implementing hyperspectral medical component analysis with a graph convolutional neural network according to claim 3, wherein step 2 is embodied as: the SLIC algorithm is adopted to segment the medical hyperspectral images in the training set, the spatial distance and the spectral distance between pixel points are calculated, the superpixel clustering centers and boundary ranges are updated iteratively, and the iteration stops when the error between the new clustering centers and the old clustering centers is smaller than a preset threshold, thereby obtaining mutually non-overlapping superpixels, which form the medical hyperspectral superpixel set V = {V_1, V_2, …, V_N}, where V_i is the i-th superpixel and N is the number of mutually non-overlapping superpixels.
5. The method for implementing hyperspectral medical component analysis with a graph convolutional neural network according to claim 4, wherein step 3 is embodied as: for each superpixel V_i obtained in step 2, obtaining the pixel mean μ_i of the superpixel V_i, the position (a_i, b_i) of its centroid pixel c_i, its perimeter, area and region azimuth angle, and the distances from the centroid pixel c_i to the boundary of the superpixel region in the eight directions east, south, west, north, southeast, southwest, northeast and northwest, thereby obtaining the feature matrix X ∈ R^(N×M), where N is the number of superpixels, M is the feature dimension, and R represents the set of real numbers.
6. The method for implementing hyperspectral medical component analysis with a graph convolutional neural network according to claim 5, wherein the concrete realization of the adjacency weight matrix in step 4 comprises the following steps:
step 4.1, according to the medical hyperspectral superpixel set V obtained in step 2, taking the superpixels V_i in the medical hyperspectral superpixel set as individual graph nodes, and adopting a K-nearest-neighbour algorithm to select the K superpixel points nearest to each superpixel V_i to construct edges, thereby forming a region adjacency graph G;
step 4.2, according to each superpixel region in the medical hyperspectral superpixel set V obtained in step 2, counting the adjacent superpixels of each superpixel region to obtain an adjacent superpixel set;
step 4.3, according to the pixel mean μ_i of each superpixel V_i obtained in step 3, calculating the pixel mean distance d_m(i, j) between each pair of superpixels;
step 4.4, according to the position (a_i, b_i) of the centroid pixel c_i of each superpixel V_i obtained in step 3, calculating the centroid coordinate distance d_c(i, j) between each pair of superpixels;
step 4.5, calculating the adjacency weight matrix A, A ∈ R^(N×N), from the pixel mean distance d_m(i, j) obtained in step 4.3 and the centroid coordinate distance d_c(i, j) obtained in step 4.4.
7. The method for implementing hyperspectral medical component analysis with a graph convolutional neural network according to claim 6, wherein the concrete implementation of step 5 comprises the following steps:
step 5.1, initializing the model parameters θ of the graph convolutional neural network model with the Xavier method;
step 5.2, calculating the degree matrix D of the graph nodes according to the region adjacency graph G constructed in step 4, where D_ii = Σ_j A(i, j);
step 5.3, calculating the features H of each layer of the graph convolutional neural network GCN in the graph convolutional neural network model according to the following formula:
H^(l+1) = σ(D^(−1/2) A D^(−1/2) H^(l) W^(l))    (4)
where the adjacency weight matrix A and the degree matrix D are as defined above, W is the learnable weight parameter matrix, σ is the activation function, and when l = 0, H^(0) = X, with X the feature matrix;
step 5.4, in the training phase, adjusting W through graph convolution and differentiable pooling operations to continuously reduce the error and thereby optimize the output, the loss function being calculated by:
L = (1/s) Σ_{i=1}^{s} ℓ(y_i, ŷ_i)    (5)
where y_i is the true label of training sample x_i, ŷ_i is the corresponding prediction, ℓ is the per-sample (cross-entropy) loss, s is the number of training samples, and L is the loss function;
step 5.5, adjusting the model parameters θ of the entire graph convolutional neural network model by back-propagation according to the gradient of the loss function L, taking them as the network initialization parameters in step 5.1, and iterating steps 5.1 to 5.5 until the drug component analysis accuracy of the graph convolutional neural network model stabilizes.
8. The method for implementing hyperspectral medical component analysis with a graph convolutional neural network according to claim 6, wherein the pixel mean distance d_m(i, j) between superpixels in step 4.3 is calculated by the following formula:
d_m(i, j) = ‖μ_i − μ_j‖    (1)
where μ_i denotes the pixel mean of the i-th superpixel and μ_j denotes the pixel mean of the j-th superpixel.
9. The method for implementing hyperspectral medical component analysis with a graph convolutional neural network according to claim 8, wherein the centroid coordinate distance d_c(i, j) between superpixels in step 4.4 is calculated by the following formula:
d_c(i, j) = √((a_i − a_j)² + (b_i − b_j)²)    (2)
where c_i denotes the centroid of the i-th superpixel, c_j denotes the centroid of the j-th superpixel, a_i denotes the abscissa of the centroid of the i-th superpixel, b_i denotes the ordinate of the centroid of the i-th superpixel, a_j denotes the abscissa of the centroid of the j-th superpixel, and b_j denotes the ordinate of the centroid of the j-th superpixel.
10. The method for implementing hyperspectral medical component analysis with a graph convolutional neural network according to claim 9, wherein the adjacency weight matrix A in step 4.5 is calculated from the pixel mean distance d_m(i, j) and the centroid coordinate distance d_c(i, j) according to formula (3).
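For readers of claims 6 and 8 to 10, the following NumPy sketch shows one way a weighted region adjacency matrix could be assembled from the two distances. The Gaussian combination of d_m and d_c stands in for formula (3), whose exact form is not recoverable from the extracted text, and the function name adjacency_weights, the parameter sigma and the use of the centroid distance for neighbour selection are all assumptions made for illustration, not the patented formula.

import numpy as np

def adjacency_weights(means, centroids, k=8, sigma=1.0):
    """Weighted region-adjacency matrix for N superpixels.

    means:     (N, B) per-superpixel mean spectra            (claim 8, distance (1))
    centroids: (N, 2) per-superpixel centroid coordinates    (claim 9, distance (2))
    The combination of the two distances into a weight (claim 10, formula (3))
    is not recoverable from the extracted text; a Gaussian of their sum is
    assumed here purely for illustration.
    """
    n = means.shape[0]
    d_mean = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2)
    d_cent = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=2)
    w = np.exp(-(d_mean + d_cent) / (2 * sigma ** 2))        # assumed form of formula (3)
    # keep only each node's k nearest neighbours (claim 6, step 4.1)
    a = np.zeros((n, n))
    for i in range(n):
        nearest = np.argsort(d_cent[i])[1:k + 1]
        a[i, nearest] = w[i, nearest]
    return np.maximum(a, a.T)                                # symmetric adjacency weights

a = adjacency_weights(np.random.rand(50, 60), np.random.rand(50, 2) * 100)
print(a.shape)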
CN202110811547.XA 2021-07-19 2021-07-19 Method for realizing hyperspectral medical component analysis of graph convolution neural network Active CN113269196B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110811547.XA CN113269196B (en) 2021-07-19 2021-07-19 Method for realizing hyperspectral medical component analysis of graph convolution neural network
PCT/CN2022/076023 WO2023000653A1 (en) 2021-07-19 2022-02-11 Method for implementing hyperspectral medical component analysis by using graph convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110811547.XA CN113269196B (en) 2021-07-19 2021-07-19 Method for realizing hyperspectral medical component analysis of graph convolution neural network

Publications (2)

Publication Number Publication Date
CN113269196A true CN113269196A (en) 2021-08-17
CN113269196B CN113269196B (en) 2021-09-28

Family

ID=77236799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110811547.XA Active CN113269196B (en) 2021-07-19 2021-07-19 Method for realizing hyperspectral medical component analysis of graph convolution neural network

Country Status (2)

Country Link
CN (1) CN113269196B (en)
WO (1) WO2023000653A1 (en)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115825316B (en) * 2023-02-15 2023-06-16 武汉宏韧生物医药股份有限公司 Method and device for analyzing active ingredients of medicine based on supercritical chromatography
CN116563711B (en) * 2023-05-17 2024-02-09 大连民族大学 Hyperspectral target detection method based on momentum update binary encoder network
CN116612333B (en) * 2023-07-17 2023-09-29 山东大学 Medical hyperspectral image classification method based on rapid full convolution network
CN116662593B (en) * 2023-07-21 2023-10-27 湖南大学 A fully pipelined neural network classification method for medical hyperspectral images based on FPGA
CN117333486B (en) * 2023-11-30 2024-03-22 清远欧派集成家居有限公司 UV finish paint performance detection data analysis method, device and storage medium
CN118459405B (en) * 2024-04-22 2024-11-15 陕西科弘健康产业有限公司 Process for extracting huperzine A from huperzia serrata
CN118096536B (en) * 2024-04-29 2024-06-21 中国科学院长春光学精密机械与物理研究所 Remote sensing hyperspectral image super-resolution reconstruction method based on hypergraph neural network
CN118469400A (en) * 2024-07-09 2024-08-09 中科信息产业(山东)有限公司 Traditional Chinese medicine talent analysis system based on traditional Chinese medicine identification data


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695636B (en) * 2020-06-15 2023-07-14 北京师范大学 A Hyperspectral Image Classification Method Based on Graph Neural Network
CN113269196B (en) * 2021-07-19 2021-09-28 湖南大学 Method for realizing hyperspectral medical component analysis of graph convolution neural network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106022353A (en) * 2016-05-05 2016-10-12 浙江大学 Image semantic annotation method based on super pixel segmentation
US20190156154A1 (en) * 2017-11-21 2019-05-23 Nvidia Corporation Training a neural network to predict superpixels using segmentation-aware affinity loss
WO2020165913A1 (en) * 2019-02-12 2020-08-20 Tata Consultancy Services Limited Automated unsupervised localization of context sensitive events in crops and computing extent thereof
CN111681249A (en) * 2020-05-14 2020-09-18 中山艾尚智同信息科技有限公司 Grabcut-based sandstone particle improved segmentation algorithm research
CN112446417A (en) * 2020-10-16 2021-03-05 山东大学 Spindle-shaped fruit image segmentation method and system based on multilayer superpixel segmentation
CN112381813A (en) * 2020-11-25 2021-02-19 华南理工大学 Panorama visual saliency detection method based on graph convolution neural network
CN113095305A (en) * 2021-06-08 2021-07-09 湖南大学 Hyperspectral classification detection method for medical foreign matters

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
NA LIU ET AL: "Stack Attention-Pruning Aggregates Multiscale Graph Convolution Networks for Hyperspectral Remote Sensing Image Classification", IEEE *
SHENG WAN ET AL: "Multiscale Dynamic Graph Convolutional Network for Hyperspectral Image Classification", IEEE Transactions on Geoscience and Remote Sensing *
YANG MENG: "Graph Convolutional Neural Network Based on Attention and Feature Reuse Mechanisms", Master's Thesis Database *
CHEN YI: "Hyperspectral Image Classification Based on Graph Models", Master's Thesis Database *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023000653A1 (en) * 2021-07-19 2023-01-26 湖南大学 Method for implementing hyperspectral medical component analysis by using graph convolutional neural network
CN113989525A (en) * 2021-12-24 2022-01-28 湖南大学 Hyperspectral Chinese herbal medicine identification method based on adaptive random block convolution kernel network
CN113989525B (en) * 2021-12-24 2022-03-29 湖南大学 Hyperspectral Chinese herbal medicine identification method based on adaptive random block convolution kernel network
WO2023115682A1 (en) * 2021-12-24 2023-06-29 湖南大学 Hyperspectral traditional chinese medicine identification method based on adaptive random block convolutional kernel network
CN115979973A (en) * 2023-03-20 2023-04-18 湖南大学 A Hyperspectral Chinese Medicinal Material Identification Method Based on Dual-Channel Compressive Attention Network
CN116429710A (en) * 2023-06-15 2023-07-14 武汉大学人民医院(湖北省人民医院) Drug component detection method, device, equipment and readable storage medium
CN116429710B (en) * 2023-06-15 2023-09-26 武汉大学人民医院(湖北省人民医院) A drug component detection method, device, equipment and readable storage medium

Also Published As

Publication number Publication date
CN113269196B (en) 2021-09-28
WO2023000653A1 (en) 2023-01-26

Similar Documents

Publication Publication Date Title
CN113269196B (en) Method for realizing hyperspectral medical component analysis of graph convolution neural network
Wang et al. Intelligent hybrid deep learning model for breast cancer detection
Ma et al. Application of deep learning convolutional neural networks for internal tablet defect detection: high accuracy, throughput, and adaptability
CN107154043B (en) Pulmonary nodule false positive sample inhibition method based on 3DCNN
Luo et al. Grape berry detection and size measurement based on edge image processing and geometric morphology
Kłosowski et al. Using neural networks and deep learning algorithms in electrical impedance tomography
Guo et al. A novel skin lesion detection approach using neutrosophic clustering and adaptive region growing in dermoscopy images
Ali et al. Towards the automatic detection of skin lesion shape asymmetry, color variegation and diameter in dermoscopic images
Hu et al. A 3D point cloud filtering method for leaves based on manifold distance and normal estimation
CN110033015A (en) A kind of plant disease detection method based on residual error network
CN108062744A (en) A kind of mass spectrum image super-resolution rebuilding method based on deep learning
Miao et al. Image recognition of traditional Chinese medicine based on deep learning
Zhang et al. Forest land resource information acquisition with sentinel-2 image utilizing support vector machine, K-nearest neighbor, random forest, decision trees and multi-layer perceptron
Venkatesan et al. Nodule detection with convolutional neural network using apache spark and GPU frameworks
CN112161965B (en) Method, device, computer equipment and storage medium for detecting traditional Chinese medicine property
Shamrat et al. Analysing most efficient deep learning model to detect COVID-19 from computer tomography images
Wang et al. Medical tumor image classification based on Few-shot learning
Lin et al. Parallel regional segmentation method of high-resolution remote sensing image based on minimum spanning tree
Herrera et al. A stereovision matching strategy for images captured with fish-eye lenses in forest environments
Guedri et al. Novel computerized method for measurement of retinal vessel diameters
Vantas et al. Intra-storm pattern recognition through fuzzy clustering
Lin et al. Graph of graphs analysis for multiplexed data with application to imaging mass cytometry
CN116935382A (en) Pathological image cell topological feature extraction method
Kong et al. Multilevel regularization method for building outlines extracted from high-resolution remote sensing images
CN115184300A (en) Origin traceability method of honeysuckle based on near-infrared spectral features and 1D-VD-CNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant