CN112418337B - Multi-feature fusion data classification method based on brain function hyper-network model - Google Patents

Multi-feature fusion data classification method based on brain function hyper-network model

Info

Publication number
CN112418337B
CN112418337B (application CN202011364161.0A)
Authority
CN
China
Prior art keywords
brain
super
feature
hyper
node
Prior art date
Legal status
Active
Application number
CN202011364161.0A
Other languages
Chinese (zh)
Other versions
CN112418337A (en)
Inventor
程忱
李瑶
孙伯廷
荆智文
Current Assignee
Taiyuan University of Technology
Original Assignee
Taiyuan University of Technology
Priority date
Filing date
Publication date
Application filed by Taiyuan University of Technology
Priority claimed from application CN202011364161.0A
Publication of CN112418337A
Application granted
Publication of CN112418337B
Legal status: Active
Anticipated expiration

Classifications

    • G06F 18/2411: Pattern recognition; classification techniques relating to the classification model, based on the proximity to a decision surface, e.g. support vector machines
    • G06F 18/23: Pattern recognition; clustering techniques
    • G06F 18/253: Pattern recognition; fusion techniques of extracted features
    • G06T 7/11: Image analysis; segmentation; region-based segmentation
    • G16H 30/20: Healthcare informatics; ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G06T 2207/10088: Image acquisition modality; magnetic resonance imaging [MRI]
    • G06T 2207/20081: Special algorithmic details; training, learning
    • G06T 2207/20104: Special algorithmic details; interactive definition of region of interest [ROI]
    • G06T 2207/30016: Subject of image; biomedical image processing; brain

Abstract

The invention relates to image processing technology, in particular to a multi-feature fusion method for classifying magnetic resonance image data based on a brain function hyper-network model. The method addresses the low accuracy of traditional magnetic resonance imaging classification methods that rely on single-attribute features: by evaluating the topological structure of the hyper-network from multiple angles, it captures the hyper-network's topological information more completely and strengthens the characterization of between-group differences, making it suitable for research on functional magnetic resonance image classification. The method first constructs the hyper-network model with the composite MCP method, then extracts 11 different features from the hyper-network and fuses them for classification, compensating for the limited information carried by any single attribute feature. A feature set rich in information can represent the topology of the brain hyper-network comprehensively and from multiple angles, so that the subsequently constructed classifier can extract discriminative information effectively, raising the upper limit of its classification accuracy.

Description

Multi-feature fusion data classification method based on brain function hyper-network model
Technical Field
The invention relates to an image processing technology, in particular to a multi-feature fusion data classification method based on a brain function hyper-network model.
Background
In recent years, neuroimaging techniques have been increasingly used to study interactions between brain regions. The blood-oxygenation-level-dependent (BOLD) signal exhibits significant low-frequency correlations and can serve as a neurophysiological index for detecting spontaneous brain activity in the resting state, so a functional brain network can be constructed from it. Functional brain networks have been widely applied in the field of neuropsychiatric disease, helping to clarify pathological mechanisms and potentially providing imaging markers, thereby offering a new perspective for the diagnosis and evaluation of clinical brain diseases.
Based on image data obtained by functional magnetic resonance imaging (fMRI), researchers have proposed various network construction methods for machine-learning studies of brain disease. However, conventional network models mostly capture information between pairs of brain regions and cannot effectively express interactions among multiple brain regions. Recent neuroscience analyses have shown that higher-order interactions necessarily exist in neuronal spikes, local field potentials, and cortical activity, and some studies indicate that a single brain region interacts directly with several other brain regions. To make up for this deficiency of conventional networks, the hyper-network has been applied to research in this field. The hyper-network is based on hypergraph theory; unlike a traditional network, one of its hyper-edges can connect two or more nodes. In neuroimaging, nodes represent brain regions, and a hyper-edge includes several nodes to represent the multi-way interactions between brain regions. In recent years, hypergraphs have been successfully applied to a variety of medical imaging tasks, including image segmentation and classification.
However, most past research has classified on a single topological attribute. The information contained in a single attribute is too one-sided to represent the topology of the hyper-network comprehensively, so important information is lost, classification accuracy is low, and the application value of the hyper-network is seriously diminished. A fused attribute feature set containing rich discriminative information is therefore needed to solve these problems of conventional magnetic resonance image data classification methods.
The invention provides a multi-feature fusion magnetic resonance image data classification method based on a brain function hyper-network model: a brain function hyper-network is constructed with the composite MCP method, features of the brain regions are then extracted with several different indices to quantify the topology of the hyper-network comprehensively, and the result is finally used for diagnosing brain disease. By better simulating the complex, multi-way interactions of the human brain, the method overcomes the one-sidedness of single-attribute features and the low classification accuracy of traditional methods; the topological structure of the hyper-network is evaluated from multiple angles to enhance the characterization of between-group differences, improving classification accuracy and enabling more accurate diagnosis of mental disease.
Disclosure of Invention
The invention provides a functional magnetic resonance image data classification method based on a super-network fusion characteristic, which aims to solve the problems of information one-sided and low classification precision in a single attribute characteristic quantization brain function super-network model.
The technical scheme adopted by the invention, a functional magnetic resonance image data classification method based on hyper-network fused attribute features, proceeds as follows:
Step S1: preprocess the resting-state functional magnetic resonance image data and extract time series according to a brain atlas template.
Step S2: solve a sparse linear regression model with the composite MCP method, yielding the hyper-network;
Step S3: compute 11 different topological attributes of the hyper-network: 3 different single-node-based hyper-network clustering coefficients (hereinafter HCC), 5 different hyper-network mutual clustering coefficients (hereinafter ComHCC), the average shortest path (hereinafter dist), the node degree (hereinafter n), and the betweenness centrality (hereinafter B);
Step S4: adopt the Kolmogorov-Smirnov (KS) test as the feature selection method on the training set, with p < 0.09 as the feature selection threshold;
Step S5: use a support vector machine (SVM) as the classifier and the fused attributes as features (the discriminative features obtained after statistical analysis are selected as classification features); build the classification model with a given regularization parameter C and the optimal feature subset, then test the constructed classifier by cross-validation;
Step S6: compute the effectiveness and the redundancy of the features by mutual-information analysis, then screen out the features with higher effectiveness and lower redundancy according to the results, obtaining the optimal fused feature set.
Further, in step S1, the resting-state functional magnetic resonance image data are preprocessed; the preprocessing at least includes slice timing correction, head motion correction, co-registration, spatial normalization, and low-frequency filtering. The fMRI data used by the invention are affected during acquisition by noise from the scanner model, head motion of the subject, and so on, so the data must first be preprocessed to improve the image signal-to-noise ratio. The images are then normalized to the selected standard space by methods such as local nonlinear transformation.
The average time series of each segmented brain region is extracted as follows: the activation signals of all voxels in the region are extracted at each time point, and their arithmetic mean at each time point gives the region's mean time series.
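As a concrete illustration, the averaging step above can be sketched in a few lines of NumPy; the array shapes, function name, and toy atlas are assumptions of this example, not part of the patent:

```python
import numpy as np

def roi_mean_time_series(fmri, atlas, n_rois):
    """Average the voxel time series within each atlas-labelled brain region.

    fmri  : 4-D array (x, y, z, t) of preprocessed BOLD signals
    atlas : 3-D array (x, y, z) of integer ROI labels, 0 = background
    Returns an (n_rois, t) array; row k is the mean series of label k+1.
    """
    t = fmri.shape[-1]
    series = np.zeros((n_rois, t))
    for k in range(1, n_rois + 1):
        mask = atlas == k
        # arithmetic mean over all voxels of the region, per time point
        series[k - 1] = fmri[mask].mean(axis=0)
    return series

# toy volume: 4x4x4 voxels, 10 time points, 2 ROIs
rng = np.random.default_rng(0)
vol = rng.normal(size=(4, 4, 4, 10))
labels = np.zeros((4, 4, 4), dtype=int)
labels[:2] = 1
labels[2:] = 2
ts = roi_mean_time_series(vol, labels, 2)
print(ts.shape)  # (2, 10)
```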
Further, in step S2, solving the sparse linear regression model specifically includes:
the sparse linear regression model is as follows:
xk=Akαkk (1)
response vector xk∈RNRepresents the average time series of the kth region of interest (ROI). In particular, Ak=[x1,...,xk-1,0,xk+1,…,xK]∈RN×PA matrix representing the average time series of other ROIs except the k-th ROI whose response vector is set to zero. Alpha is alphak∈RpIs a weight vector representing the degree of influence of other ROIs on the kth ROI. Alpha is alphakThe ROIs corresponding to the zero elements represent no interaction with the selected k-th ROI, the brain regions and the selected brain regions are considered to be mutually independent, and the brain regions with the interaction are effectively represented by the mutually independent connection zero setting method. Tau iskIs a noise term.
There are several approaches to solving the sparse regression model: different penalty terms lead to different solvers and hence to different hyper-network constructions. Traditionally, brain function hyper-networks have been constructed with the Lasso method, and some studies have extended Lasso to account for the group effects between brain regions. These methods share a common problem, however: over-compression of the coefficients by the penalty function biases the estimates of the regression coefficients of the target variables, so the hyper-networks constructed with these methods may lose important connections. The invention therefore uses the composite MCP method to create the hyper-network, further improving the construction of the brain function hyper-network and better simulating the complex multi-way interactions of the human brain. Composite MCP is an extension of MCP that applies the MCP simultaneously as the inner and the outer penalty, realizing bi-level variable selection: variables can be selected between groups as well as within groups. Before hyper-edges are created between groups, all brain regions must first be grouped with a clustering algorithm; the composite MCP is then used to construct the brain function hyper-network with the following optimization objective:
min_{α_k}  (1/2) ||x_k - A_k α_k||_2^2 + Σ_{p=1}^{P} ρ_{λ,γ2}( Σ_{j=1}^{m_p} ρ_{λ,γ1}(|α_k^(p,j)|) )        (2)

where ρ_{λ,γ}(·) is the MCP penalty:

ρ_{λ,γ}(t) = λ ∫_0^{|t|} max(0, 1 - x/(γλ)) dx        (3)

x_k, A_k and α_k are as in equation (1). γ1 and γ2 are the tuning parameters of the intra-group and inter-group penalties, respectively. In the MCP penalty, γ > 1; as γ → ∞ the sparsity of the solved regression coefficient vector gradually decreases and the penalty approaches the Lasso penalty term, compressing the coefficients more strictly, while as γ → 1 the sparsity grows larger and larger and the retained coefficients are gradually left unpenalized, i.e. uncompressed. In previous studies γ was mostly set to a default value. λ ≥ 0 is another tuning parameter: the larger λ is, the sparser the model, and vice versa. α_k^(p,j) denotes the regression coefficient of the j-th variable of the p-th group (of size m_p); a non-zero regression coefficient means the corresponding brain region interacts with the k-th ROI, a zero coefficient means the region is independent of the k-th ROI with no interaction, and all the regression coefficients together form the weight vector α_k.
A brain function hyper-network model is built on this basis: each node represents a region of interest and each hyper-edge represents an interaction among several regions of interest; a hyper-edge is formed by the non-zero elements of the computed α_k. Specifically, for each subject and each selected ROI, γ1 and γ2 are fixed and a value of λ is chosen; solving for the weight vector α_k then generates one hyper-edge. To capture the multiple levels of information shared between brain regions, γ1 and γ2 are kept fixed for each ROI while λ is varied over a range of values, producing a family of hyper-edges centred on that brain region. Finally, the weight vectors of all brain regions are computed by the composite MCP method, the hyper-edges generated by all brain regions are collected, and their union forms the subject's brain function hyper-network model.
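The per-ROI hyper-edge generation described above (fix γ1 and γ2, sweep λ, collect the supports of α_k) can be sketched as follows. To keep the example runnable, scikit-learn's Lasso is used as a stand-in solver; the patent's actual method replaces it with a composite MCP solver, for which no off-the-shelf Python routine is assumed here:

```python
import numpy as np
from sklearn.linear_model import Lasso

def hyperedges_for_roi(ts, k, lambdas):
    """Generate the hyper-edge family of ROI k.

    ts      : (K, N) array, each row the mean time series of one ROI
    lambdas : iterable of sparsity levels to sweep
    For each lambda we regress x_k on the other ROIs and take the indices
    with non-zero coefficients (plus k itself) as one hyper-edge.
    """
    K, N = ts.shape
    x_k = ts[k]
    A_k = ts.copy().T            # (N, K) design matrix
    A_k[:, k] = 0.0              # zero out the response ROI, as in Eq. (1)
    edges = set()
    for lam in lambdas:
        # stand-in for the composite MCP solve of Eq. (2)
        coef = Lasso(alpha=lam, max_iter=5000).fit(A_k, x_k).coef_
        support = np.flatnonzero(coef)
        if support.size:
            edges.add(tuple(sorted(set(support) | {k})))
    return [list(e) for e in edges]

rng = np.random.default_rng(1)
ts = rng.normal(size=(10, 120))
ts[3] += 0.8 * ts[5]             # make ROI 3 depend on ROI 5
edges = hyperedges_for_roi(ts, 3, lambdas=[0.05, 0.1, 0.2])
print(edges)
```

The union of the edge families over all K ROIs would then form one subject's hyper-network.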
Further, in step S3, the method includes:
each single attribute feature needs to be computed separately. First is the node importance, including degree and betweenness centrality. There are also many definitions of degrees in a super network, including node degrees and super-edge degrees. The node degree refers to the number of nodes directly connected with each node, and the excess edge degree refers to the number of excess edges connected with each node. For the diagnosis of brain diseases, the hyper-network is generally characterized by calculating local attributes of nodes, and therefore, in this document, the node degree is introduced as an index for the quantification of the hyper-network. The formula is as follows:
Figure GDA0003243878630000051
in formula 4, H (v, e) represents the adjacency matrix of the hypergraph, and the calculation is obtained from formula (5); v represents a specific certain node, and e represents a specific certain super edge.
Figure GDA0003243878630000052
Where v E represents a node and E represents a superedge. Each column in the correlation matrix represents a super edge and each row represents a node. If v ∈ c, then H (v, e) ═ 1, if
Figure GDA0003243878630000053
Then H (v, e) is 0.
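A minimal sketch of the incidence matrix H(v, e) and the node degree of formula (4); the tiny hypergraph is a made-up example:

```python
import numpy as np

def incidence_matrix(hyperedges, n_nodes):
    """Build H(v, e): rows = nodes (brain regions), columns = hyper-edges.
    H[v, e] = 1 iff node v belongs to hyper-edge e, as in Eq. (5)."""
    H = np.zeros((n_nodes, len(hyperedges)), dtype=int)
    for e, nodes in enumerate(hyperedges):
        H[list(nodes), e] = 1
    return H

def node_degree(H):
    """Degree of each node = number of hyper-edges containing it (Eq. (4))."""
    return H.sum(axis=1)

edges = [{0, 1, 2}, {1, 2, 3}, {0, 3}]
H = incidence_matrix(edges, 4)
print(node_degree(H))  # [2 2 2 2]
```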
The betweenness centrality is the number of shortest paths passing through a node; in earlier studies it was used mainly for social networks. It is defined as:

B(i) = (2 / ((n-1)(n-2))) Σ_{j≠i≠k} g_jk(i) / g_jk        (6)

where g_jk is the number of all shortest paths from vertex v_j to v_k, g_jk(i) is the number of those shortest paths that pass through node v_i, and n is the number of nodes.
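Betweenness can be computed by counting shortest paths from both endpoints of each pair. The sketch below works on an ordinary unweighted graph (how the patent derives pairwise paths from the hyper-network is not spelled out here, so the graph representation is an assumption) and omits the 2/((n-1)(n-2)) normalisation:

```python
from collections import deque
import itertools

def _bfs_counts(adj, s):
    """BFS from s, returning geodesic distances and shortest-path counts."""
    dist, cnt, q = {s: 0}, {s: 1}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                cnt[w] = 0
                q.append(w)
            if dist[w] == dist[u] + 1:
                cnt[w] += cnt[u]
    return dist, cnt

def betweenness(adj):
    """Unnormalised betweenness: for every pair (j, k), node i earns
    g_jk(i) / g_jk, where g_jk counts shortest j-k paths and g_jk(i)
    counts those passing through i, as in Eq. (6) without the prefactor."""
    n = len(adj)
    B = [0.0] * n
    for j, k in itertools.combinations(range(n), 2):
        dj, cj = _bfs_counts(adj, j)
        if k not in dj:
            continue  # j and k are disconnected
        dk, ck = _bfs_counts(adj, k)
        for i in range(n):
            if i in (j, k) or i not in dj or i not in dk:
                continue
            if dj[i] + dk[i] == dj[k]:   # i lies on a j-k geodesic
                B[i] += cj[i] * ck[i] / cj[k]
    return B

adj = [{1}, {0, 2}, {1, 3}, {2}]  # path graph 0-1-2-3 as adjacency sets
print(betweenness(adj))           # [0.0, 2.0, 2.0, 0.0]
```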
Next come the different clustering coefficients. The clustering coefficients computed from pairs of nodes are called mutual clustering coefficients; a mutual clustering coefficient takes a pair of nodes as its arguments. Write M(u) = {e_i ∈ E : u ∈ e_i} for the set of all hyper-edges containing node u, and total for the number of all hyper-edges. The five mutual clustering coefficients (ComHCC) are defined as follows:

ComHCC_1(u, v) = |M(u) ∩ M(v)| / |M(u) ∪ M(v)|        (7)

Definitions with the same meaning are obtained by transforming the denominator as well:

ComHCC_2(u, v) = |M(u) ∩ M(v)| / min(|M(u)|, |M(v)|)        (8)

ComHCC_3(u, v) = |M(u) ∩ M(v)| / max(|M(u)|, |M(v)|)        (9)

Besides the conventional definitions above, there are also a geometry-based and a hypergeometric definition, ComHCC_4 and ComHCC_5:

ComHCC_4(u, v) = |M(u) ∩ M(v)|^2 / (|M(u)| · |M(v)|)        (10)

ComHCC_5(u, v) = -log Σ_{i=|M(u)∩M(v)|}^{min(|M(u)|,|M(v)|)} C(|M(u)|, i) · C(total - |M(u)|, |M(v)| - i) / C(total, |M(v)|)        (11)

where u and v are both nodes; e_i denotes a hyper-edge; M(u) refers to all hyper-edges containing node u; total refers to all hyper-edges; and C(·, ·) denotes the binomial coefficient, so that ComHCC_5 scores how unlikely it is, under the hypergeometric distribution, to observe at least |M(u) ∩ M(v)| shared hyper-edges by chance.
Then, using these two-node clustering coefficient definitions, the clustering coefficient of a single vertex is defined as the average of the mutual clustering coefficients between that vertex and its neighbouring nodes:

HCC(u) = (1 / |N(u)|) Σ_{v ∈ N(u)} ComHCC(u, v)        (12)

where ComHCC(u, v) refers to any of the five hyper-network mutual clustering coefficients above; u and v are nodes; V denotes the node set, E the hyper-edge set, and e a hyper-edge; and N(u) denotes the set of the other nodes contained in the hyper-edges in which node u appears.
Next, the hyper-network is quantified by clustering coefficients on single nodes. This clustering coefficient is defined closer to the one on an ordinary graph: it measures how tightly the neighbours of a node are connected, extending the ordinary-graph clustering coefficient with the additional information carried by the hypergraph. We inspect all vertices in an edge and count a connection between two neighbours only when the edge satisfies a certain condition. Three different definitions are used, as follows:

HCC_1(u) = 2 |{(v, w) : v, w ∈ N(u), v ≠ w, I_E(v, w, u) = 1}| / (|N(u)| (|N(u)| - 1))        (13)

where u, v and w are nodes and N(u) has the same meaning as in formula (12); I_E(v, w, u) = 1 when there exists a hyper-edge e_i with v, w ∈ e_i and u ∉ e_i, and I_E(v, w, u) = 0 otherwise. HCC_1(u) looks for connections between the neighbours of u that do not contain u. The advantage is that any interaction found in this set is likely to represent a true connection between the neighbours; the drawback is that it may focus too much on minor shared connections that have little to do with u's own interactions.

HCC_2(u) = 2 |{(v, w) : v, w ∈ N(u), v ≠ w, I_F(v, w, u) = 1}| / (|N(u)| (|N(u)| - 1))        (14)

where u, v and w are nodes and N(u) has the same meaning as in formula (12); I_F(v, w, u) = 1 when there exists a hyper-edge e_i with v, w ∈ e_i and u ∈ e_i, and I_F(v, w, u) = 0 otherwise. HCC_2(u) finds the connections that contain the neighbours of u together with u itself; the edges found this way truly reflect the aggregation between u and its neighbours. It should be noted, however, that such a connection may simply be an artefact of the hyper-edges shared with u.

HCC_3(u) = (Σ_{e ∈ M(u)} |e \ {u}| - |N(u)|) / Σ_{e ∈ M(u)} |e \ {u}|        (15)

where u is a node, e is a hyper-edge, N(u) has the same meaning as in formula (12), and M(u) has the same meaning as in formula (7). HCC_3(u) measures the density of the neighbourhood through the amount of overlap among the hyper-edges of the neighbourhood; unlike the two definitions above, it defines the amount of overlap from the perspective of the node, thereby avoiding the problems mentioned above.
Further, in step S4, the Kolmogorov-Smirnov test (KS test) is used as the feature selection method on the training set. Specifically, the KS test compares one frequency distribution f(x) with a theoretical distribution g(x), or compares two observed distributions. Its null hypothesis H0 is that the two samples follow the same distribution (or that the data fit the theoretical distribution). With the statistic D = max |f(x) - g(x)|, H0 is rejected when the observed value satisfies D > D(n, α), and accepted otherwise. With this method we compute, for each feature, its association with the label, recorded as a p-value; p < 0.05 is then used as the threshold for selecting the better features.
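The KS-based screening can be sketched with SciPy's two-sample test; the matrix layout, function name, and synthetic data are assumptions of this example:

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_select(X, y, p_threshold=0.05):
    """Keep features whose patient/control distributions differ under a
    two-sample Kolmogorov-Smirnov test (p-value below the threshold).

    X : (n_subjects, n_features) fused feature matrix
    y : binary group labels
    Returns the indices of the selected feature columns."""
    g0, g1 = X[y == 0], X[y == 1]
    pvals = np.array([ks_2samp(g0[:, j], g1[:, j]).pvalue
                      for j in range(X.shape[1])])
    return np.flatnonzero(pvals < p_threshold)

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 5))
X[:30, 0] += 2.0                 # feature 0 separates the two groups
y = np.array([0] * 30 + [1] * 30)
sel = ks_select(X, y)
print(sel)
```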
further, in step S5, the construction of the classifier specifically includes: adopting an SVM classifier with a Gaussian kernel function, selecting an optimal feature subset as a classification feature, and selecting an optimal regularization parameter C, thereby constructing the classifier;
the constructed classifier is inspected by adopting a cross validation method, and the steps are as follows: randomly selecting 90% of samples from the optimal feature subset as a training set, and using the rest 10% of the optimal feature subset as a test set, thereby performing classification test and obtaining classification accuracy; and performing arithmetic mean on the classification accuracy obtained after 100 times of repeated classification tests, and taking the arithmetic mean as the classification accuracy of the classifier.
Further, in step S6, the quantities are computed as follows:

D = (1 / |S|) Σ_{x_i ∈ S} F(x_i, c)        (16)

D represents the importance of the features in the classifier; S represents the fused feature set; |S| represents the total number of features in S; x_i represents the selected feature; c represents the class label of the samples; F(x_i, c) represents the mutual information between the selected feature and the class label c;

R = (1 / |S|^2) Σ_{x_i, x_j ∈ S} F(x_i, x_j)        (17)

R represents the redundancy of the selected features in the classifier; x_i represents the selected feature; x_j represents another feature of the fused feature set; F(x_i, x_j) represents the mutual information between the selected feature and the other feature;
the secondary screening step is as follows: and ranking the selected features according to the importance and the redundancy respectively, and then screening out the features with higher importance and lower redundancy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a functional magnetic resonance image data classification method according to an embodiment of the present invention.
FIG. 2 is a schematic diagram comparing the proposed method with the conventional magnetic resonance classification method.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The functional magnetic resonance image data classification method based on hyper-network fused features proceeds specifically as follows:
Step S1: preprocess the resting-state functional magnetic resonance image data, segment the image into regions according to the selected standardized brain atlas, and extract the average time series of each segmented brain region;
Step S2: solve a sparse linear regression model using the composite MCP as the penalty term, yielding the hyper-network;
Step S3: compute 11 different topological attributes of the hyper-network: 3 different single-node-based hyper-network clustering coefficients (hereinafter HCC), 5 different hyper-network mutual clustering coefficients (hereinafter ComHCC), the average shortest path (hereinafter dist), the node degree (hereinafter n), and the betweenness centrality (hereinafter B);
Step S4: adopt the Kolmogorov-Smirnov (KS) test as the feature selection method on the training set, with p < 0.09 as the feature selection threshold;
Step S5: use a support vector machine (SVM) as the classifier and the fused attributes as features (the discriminative features obtained after statistical analysis are selected as classification features); build the classification model with a given regularization parameter C and the optimal feature subset, then test the constructed classifier by cross-validation;
Step S6: because the fused attribute features are too numerous, mutual-information analysis can be adopted to quantify the importance and redundancy of the selected features in the classifier; the selected features are then screened a second time according to the quantified results, obtaining the optimal fused feature set.
Further, in step S1, the resting-state functional magnetic resonance image data are preprocessed. The fMRI data used by the invention are affected during acquisition by noise from the scanner model, head motion of the subject, and so on, so the data must first be preprocessed to improve the image signal-to-noise ratio. The images are then normalized to the selected standard space by methods such as local nonlinear transformation.
The average time series of each segmented brain region is extracted as follows: the activation signals of all voxels in the region are extracted at each time point, and their arithmetic mean at each time point gives the region's mean time series.
Further, the preprocessing step at least includes slice timing correction, head motion correction, co-registration, spatial normalization, and low-frequency filtering.
Further, in step S2, solving the sparse linear regression model specifically includes:
the sparse linear regression model is as follows:
xk=Akαkk (1)
response vector xk∈RNRepresents the average time series of the kth region of interest (ROI). In particular, Ak=[x1,...,xk-1,0,xk+1,...,xK]∈RN×PA matrix representing the average time series of other ROIs except the k-th ROI whose response vector is set to zero. Alpha is alphak∈RpIs a weight vector representing the degree of influence of other ROIs on the kth ROI. Alpha is alphakThe ROIs corresponding to the zero elements represent no interaction with the selected k-th ROI, the brain regions and the selected brain regions are considered to be mutually independent, and the brain regions with the interaction are effectively represented by the mutually independent connection zero setting method. Tau iskIs a noise term.
There are several approaches to solving the sparse regression model: different penalty terms lead to different solvers and hence to different hyper-network constructions. Traditionally, brain function hyper-networks have been constructed with the Lasso method, and some studies have extended Lasso to account for the group effects between brain regions. These methods share a common problem, however: over-compression of the coefficients by the penalty function biases the estimates of the regression coefficients of the target variables, so the hyper-networks constructed with these methods may lose important connections. The invention therefore uses the composite MCP method to create the hyper-network, further improving the construction of the brain function hyper-network and better simulating the complex multi-way interactions of the human brain. Composite MCP is an extension of MCP that applies the MCP simultaneously as the inner and the outer penalty, realizing bi-level variable selection: variables can be selected between groups as well as within groups. Before hyper-edges are created between groups, all brain regions must first be grouped with a clustering algorithm; the composite MCP is then used to construct the brain function hyper-network with the following optimization objective:
α̂_k = argmin_{α_k} { (1/2)‖x_k − A_k α_k‖₂² + Σ_{p=1}^{P} ρ( Σ_{j=1}^{K_p} ρ(|α_k^{(p,j)}|; λ, γ₁); λ, γ₂ ) }    (2)

where K_p denotes the number of variables in the p-th group.
where ρ(t; λ, γ) denotes the MCP penalty

ρ(t; λ, γ) = λ|t| − t²/(2γ), if |t| ≤ γλ;  ρ(t; λ, γ) = γλ²/2, if |t| > γλ    (3)
x_k, A_k and α_k are as in equation (1). γ₁ and γ₂ are the tuning parameters of the intra-group and inter-group penalties, respectively,
and ρ(·; λ, γ) denotes the MCP penalty, with γ > 1. As γ → ∞, the sparsity of the estimated regression coefficient vector gradually decreases and the penalty approaches the Lasso penalty term, performing stricter compression; as γ → 1, the sparsity of the regression coefficient vector grows and the penalty gradually vanishes, i.e., no compression is performed. In previous studies γ was mostly set to its default value. λ ≥ 0 is another tuning parameter: the larger λ, the sparser the model, and vice versa.
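The behavior described above can be checked numerically. The sketch below implements the MCP penalty in its commonly used piecewise form (assumed here from the text; Zhang's minimax concave penalty) and verifies that for very large γ it approaches the Lasso penalty λ|t|:

```python
import numpy as np

def mcp_penalty(t, lam, gamma):
    """MCP penalty rho(t; lam, gamma), applied elementwise (assumed piecewise form)."""
    t = np.abs(np.asarray(t, dtype=float))
    quad = lam * t - t**2 / (2.0 * gamma)   # region |t| <= gamma * lam
    flat = 0.5 * gamma * lam**2             # constant beyond gamma * lam
    return np.where(t <= gamma * lam, quad, flat)

t = np.linspace(-3, 3, 7)
# As gamma grows, MCP approaches the Lasso penalty lam*|t| on a fixed range:
close_to_lasso = np.allclose(mcp_penalty(t, lam=1.0, gamma=1e6), np.abs(t), atol=1e-5)
```

The flat region beyond γλ is what leaves large coefficients unpenalized, avoiding the over-compression attributed to Lasso above.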
α_k^{(p,j)} denotes the regression coefficient of the j-th variable of the p-th group. A non-zero regression coefficient indicates that the corresponding brain region interacts with the k-th ROI, while a zero coefficient indicates that the corresponding brain region is independent of the k-th ROI, with no interaction. All the regression coefficients together form the weight vector α_k.
A brain function hyper-network model is constructed on this basis: a node represents a region of interest, a hyper-edge represents the interaction among several regions of interest, and once α_k is computed, its non-zero elements form a hyper-edge. Specifically, for each subject and each selected ROI, γ₁ and γ₂ are fixed and a value of λ is chosen to obtain the weight vector α_k, i.e., to generate one hyper-edge. Considering the multi-level nature of the information between brain regions, γ₁ and γ₂ are fixed for each ROI while λ is varied over a range, so that a set of hyper-edges is generated for that particular brain region. Finally, the weight vectors corresponding to all brain regions are computed with the composite MCP method to obtain the hyper-edges generated by every brain region, and these hyper-edges are combined into the subject's brain function hyper-network model.
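The λ sweep that generates a set of hyper-edges per brain region can be illustrated with a toy orthonormal design, where the Lasso solution has a closed form (soft-thresholding of Aᵀx). Lasso again stands in for composite MCP purely to show that larger λ yields smaller hyper-edges:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 10
A = np.linalg.qr(rng.standard_normal((N, P)))[0]   # orthonormal columns
x = A @ (rng.standard_normal(P) * 3)

def alpha_for(lam):
    """Closed-form Lasso solution for an orthonormal design: soft-threshold A^T x."""
    z = A.T @ x
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Each lambda produces one hyper-edge (the support of alpha); sweeping lambda
# produces the set of hyper-edges for this "brain region".
edge_sizes = [int(np.count_nonzero(alpha_for(lam))) for lam in (0.1, 1.0, 3.0)]
```

Since soft-thresholding is monotone in λ, `edge_sizes` is non-increasing: the sweep trades off dense versus sparse interactions for the same ROI.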
Further, in step S3, the method includes:
Each single-attribute feature is computed separately. First is node importance, comprising degree and betweenness centrality. A hyper-network admits several definitions of degree, including the node degree and the hyper-edge degree: the node degree is the number of nodes directly connected to a node, while the hyper-edge degree is the number of hyper-edges containing a node. For the diagnosis of brain disease, a hyper-network is generally characterized by the local attributes of its nodes, so here the node degree is introduced as a quantification index of the hyper-network. The formula is:
d(v) = Σ_{e∈E} H(v, e)    (4)
In formula (4), H(v, e) denotes the incidence matrix of the hypergraph, computed from formula (5); v denotes a specific node and e a specific hyper-edge.
H(v, e) = 1, if v ∈ e;  H(v, e) = 0, if v ∉ e    (5)

where v ∈ V denotes a node and e ∈ E a hyper-edge. Each column of the incidence matrix represents a hyper-edge and each row a node: H(v, e) = 1 if v belongs to e, and H(v, e) = 0 otherwise.
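Formulas (4) and (5) amount to row sums of the incidence matrix. A minimal sketch with a made-up hypergraph:

```python
import numpy as np

# Toy incidence matrix H: rows = nodes v, columns = hyper-edges e,
# with H[v, e] = 1 iff node v belongs to hyper-edge e (formula (5)).
H = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 0, 0],
    [0, 0, 1],
])

node_degree = H.sum(axis=1)   # formula (4): d(v) = sum over e of H(v, e)
edge_sizes = H.sum(axis=0)    # column sums give the size of each hyper-edge
```

Row sums count the hyper-edges each node participates in; column sums count the nodes inside each hyper-edge.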
Betweenness centrality refers to the number of shortest paths that pass through a node; in previous studies it has mainly been used in social networks. It is defined as:
BC(v_i) = (2/((n−1)(n−2))) Σ_{j≠i≠k} g_{jk}(i) / g_{jk}    (6)
g_{jk}: the number of all shortest paths from vertex v_j to v_k. g_{jk}(i): the number of those shortest paths that pass through node v_i. n denotes the number of nodes.
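A brute-force sketch of the sum in formula (6) on a tiny toy graph (the normalization factor is omitted; real networks would use Brandes' algorithm instead of path enumeration):

```python
import itertools
from collections import deque

# Toy undirected graph as adjacency sets: a path 0-1-2 plus the edge 2-3.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}

def shortest_paths(s, t, adj):
    """Enumerate all shortest paths from s to t by breadth-first path search."""
    best, out = None, []
    q = deque([(s, (s,))])
    while q:
        v, path = q.popleft()
        if best is not None and len(path) > best:
            continue                      # longer than a known shortest path
        if v == t:
            best = len(path)
            out.append(path)
            continue
        for w in adj[v]:
            if w not in path:
                q.append((w, path + (w,)))
    return [p for p in out if len(p) == best]

def betweenness(i, adj):
    """Unnormalized sum over pairs j < k (both != i) of g_jk(i) / g_jk."""
    bc = 0.0
    for j, k in itertools.combinations([v for v in adj if v != i], 2):
        paths = shortest_paths(j, k, adj)
        bc += sum(i in p for p in paths) / len(paths)
    return bc
```

On this graph, node 1 lies on the only shortest paths for the pairs (0, 2) and (0, 3), so its raw betweenness is 2.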
Next come the different clustering coefficients. Clustering coefficients computed over pairs of nodes are called mutual clustering coefficients; a mutual clustering coefficient takes a pair of nodes as its parameters. The five mutual clustering coefficients (ComHCC) are defined as follows:
ComHCC₁(u, v) = |M(u) ∩ M(v)| / |M(u) ∪ M(v)|    (7)
Definitions with the same meaning can also be obtained by changing the denominator:
ComHCC₂(u, v) = |M(u) ∩ M(v)| / min(|M(u)|, |M(v)|)    (8)

ComHCC₃(u, v) = |M(u) ∩ M(v)| / max(|M(u)|, |M(v)|)    (9)
In addition to the conventional definitions above, there are also geometry-based and hypergeometric definitions, ComHCC₄ and ComHCC₅:
ComHCC₄(u, v) = |M(u) ∩ M(v)|² / (|M(u)| · |M(v)|)    (10)

ComHCC₅(u, v) = −log Σ_{i=|M(u)∩M(v)|}^{min(|M(u)|,|M(v)|)} [ C(|M(u)|, i) · C(total − |M(u)|, |M(v)| − i) / C(total, |M(v)|) ]    (11)
where u and v are both nodes; M(u) = {e_i ∈ E : u ∈ e_i}, in which e_i denotes a hyper-edge, so M(u) is the set of all hyper-edges containing node u; total denotes the number of all hyper-edges.
Then, we use these two-node clustering coefficient definitions to define the clustering coefficient of a single vertex as the average of the clustering coefficients of the vertex and its neighboring points:
HCC_m(u) = (1/|N(u)|) Σ_{v∈N(u)} ComHCC_m(u, v)    (12)
ComHCC_m(u, v) (m = 1, ..., 5) refers to any of the five mutual clustering coefficients above; u and v refer to nodes;
N(u) = {v ∈ V∖{u} : ∃ e ∈ E, u ∈ e ∧ v ∈ e}

where V denotes the node set, E the hyper-edge set, e a hyper-edge, and N(u) the set of all other nodes in the hyper-edges containing node u.
Next, the hyper-network is quantified with clustering coefficients based on single nodes. This clustering coefficient is defined more closely to that of an ordinary graph: it measures the closeness of a node's neighborhood, is an extension of the ordinary-graph clustering coefficient, and requires additional information drawn from the hypergraph. We observe all vertices in an edge and count the connections between nodes only when the edge satisfies a certain condition. There are three different definitions, as follows:
HCC₁(u) = Σ_{v,w∈N(u), v≠w} I_E(v, w, ū) / (|N(u)| · (|N(u)| − 1))    (13)

where u, v and w refer to nodes, and N(u) has the same meaning as in formula (12) above; when there exists e_i ∈ E such that v, w ∈ e_i and u ∉ e_i, then I_E(v, w, ū) = 1, and 0 otherwise. HCC₁(u) finds connections between the neighbors of u that do not contain u; the advantage is that any interaction found in this set may represent a true connection between the neighbors. Its drawback is that it may focus too heavily on minor shared connections that have little relationship to u's interactions.
HCC₂(u) = Σ_{v,w∈N(u), v≠w} I_E(v, w, u) / (|N(u)| · (|N(u)| − 1))    (14)

where u, v and w refer to nodes, and N(u) has the same meaning as in formula (12) above; when there exists e_i ∈ E such that v, w ∈ e_i and u ∈ e_i, then I_E(v, w, u) = 1, and 0 otherwise. HCC₂(u) finds those connections among the neighbors of u that contain u. The edges found in this way genuinely reflect the aggregation between u and its neighbors, but note that such a connection may simply be an artifact of the data sharing the connection with u.
HCC₃(u) = Σ_{e_i,e_j∈M(u), i≠j} |e_i ∩ e_j ∖ {u}| / Σ_{e_i,e_j∈M(u), i≠j} |e_i ∪ e_j ∖ {u}|    (15)

where u refers to a node and e to a hyper-edge; N(u) has the same meaning as in formula (12) above, and M(u) the same meaning as in formula (7). HCC₃ measures the density of the neighborhood through the amount of overlap among the hyper-edges of the neighborhood. Unlike the two preceding definitions, it defines the amount of overlap from the perspective of the node, thereby avoiding the problems noted above.
Further, in step S4, the Kolmogorov-Smirnov test (KS test) is used as the feature selection method on the training set. Specifically: the Kolmogorov-Smirnov test compares a frequency distribution f(x) with a theoretical distribution g(x), or compares two observed distributions. Its null hypothesis is H0: the two samples follow the same distribution (or the data fit the theoretical distribution). The statistic is D = max|f(x) − g(x)|; H0 is rejected if the observed D exceeds the critical value D(n, α), and accepted otherwise. With this method the correlation between each feature and the label can be computed and expressed as a p-value; p < 0.05 is then used as the threshold to select the more discriminative features;
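A sketch of this selection step using SciPy's two-sample KS test (toy data; feature 0 is made artificially discriminative between the two groups):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Toy feature matrices: rows = subjects, columns = hyper-network features.
patients = rng.normal(0.0, 1.0, size=(40, 5))
controls = rng.normal(0.0, 1.0, size=(40, 5))
patients[:, 0] += 3.0            # shift feature 0 so it separates the groups

# Two-sample KS test per feature; keep features with p < 0.05 (the threshold
# used in the text).
p_values = np.array([ks_2samp(patients[:, j], controls[:, j]).pvalue
                     for j in range(5)])
selected = np.flatnonzero(p_values < 0.05)
```

The shifted feature 0 gets a vanishingly small p-value and survives the threshold; the purely noisy features usually do not.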
Further, in step S5, the classifier is constructed as follows: an SVM classifier with a Gaussian kernel function is adopted, the optimal feature subset is selected as the classification features, and the optimal regularization parameter C is selected, thereby constructing the classifier;
the constructed classifier is inspected by adopting a cross validation method, and the steps are as follows: randomly selecting 90% of samples from the optimal feature subset as a training set, and using the rest 10% of the optimal feature subset as a test set, thereby performing classification test and obtaining classification accuracy; and performing arithmetic mean on the classification accuracy obtained after 100 times of repeated classification tests, and taking the arithmetic mean as the classification accuracy of the classifier.
Further, in step S6, the quantization formula is specifically expressed as follows:
D = (1/|S|) Σ_{x_i∈S} f(x_i; c)

where D represents the importance of a feature in the classifier; S the fusion feature set; |S| the total number of features in S; x_i the selected feature; c the class label of a sample; f(x_i; c) the mutual information between the selected feature and the class label c;
R = (1/|S|²) Σ_{x_i,x_j∈S} f(x_i; x_j)

where R represents the redundancy of the selected features in the classifier; x_i the selected feature; x_j the other features of the fusion feature set; f(x_i; x_j) the mutual information between the selected feature and the other features;
the secondary screening step is as follows: and ranking the selected features according to the importance and the redundancy respectively, and then screening out the features with higher importance and lower redundancy.
The beneficial effects of the invention are as follows: compared with traditional magnetic resonance image data classification methods, the method first constructs the hyper-network model with the composite MCP method and then extracts 11 different features from the hyper-network as fusion features for classification, making up for the limitation that a single-attribute feature carries only a single kind of information. A feature set rich in information can represent the topology of the brain hyper-network comprehensively and from multiple angles, presenting the completeness of the hyper-network topological structure, so that the subsequently constructed classifier can effectively extract discriminative information, raising the upper limit of its classification accuracy. The invention overcomes the limitation of previous research that used a single attribute as the feature, and can be used to classify magnetic resonance image data.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (6)

1. The multi-feature fusion data classification method based on the brain function hyper-network model is characterized by comprising the following steps of:
step S1: acquiring resting state functional magnetic resonance image data, and preprocessing the image data; extracting average time sequence of each brain area in the preprocessed image data according to the selected brain atlas template;
step S2: constructing a sparse linear regression model based on the average time series of each brain region; solving the sparse linear regression model with a composite MCP method to obtain a brain function hyper-network model;
the solving of the sparse linear regression model specifically comprises the following steps:
the sparse linear regression model is as follows:
x_k = A_k α_k + τ_k    (1)
where x_k ∈ R^N represents the average time series of the k-th region of interest (ROI); A_k = [x_1, ..., x_{k−1}, 0, x_{k+1}, ..., x_K] ∈ R^{N×P} is a matrix of the average time series of the ROIs other than the k-th; α_k ∈ R^P is the weight vector representing the degree of influence of the remaining ROIs on the k-th ROI, whose non-zero elements indicate that the corresponding ROI interacts with the k-th ROI and whose zero elements indicate that the corresponding ROI and the k-th ROI are mutually independent, without any interaction; τ_k is a noise term;
clustering brain areas by using a clustering algorithm, and then solving a sparse linear regression model by using composite MCP to construct a brain function hyper-network, wherein an optimization objective function is as follows:
α̂_k = argmin_{α_k} { (1/2)‖x_k − A_k α_k‖₂² + Σ_{p=1}^{P} ρ( Σ_{j=1}^{K_p} ρ(|α_k^{(p,j)}|; λ, γ₁); λ, γ₂ ) }    (2)

where K_p denotes the number of variables in the p-th group;
where ρ(t; λ, γ) denotes the MCP penalty,

ρ(t; λ, γ) = λ|t| − t²/(2γ), if |t| ≤ γλ;  ρ(t; λ, γ) = γλ²/2, if |t| > γλ    (3)

γ₁ and γ₂ are the tuning parameters of the intra-group and inter-group penalties, respectively, with γ > 1; as γ → ∞, the sparsity of the estimated regression coefficient vector gradually decreases, the penalty approaches the Lasso penalty term, and stricter compression is performed; as γ → 1, the sparsity of the regression coefficient vector grows and the penalty gradually vanishes, i.e., no compression is performed; λ ≥ 0 is another tuning parameter: the larger λ, the sparser the model, and vice versa;
α_k^{(p,j)} denotes the regression coefficient of the j-th variable of the p-th group; a non-zero regression coefficient indicates that the corresponding brain region interacts with the k-th ROI, while a zero coefficient indicates that the corresponding brain region is independent of the k-th ROI, with no interaction; all the regression coefficients together form the weight vector α_k;
in the constructed brain function hyper-network model, a node represents a region of interest and a hyper-edge represents the interaction among several regions of interest; once α_k is computed, its non-zero elements form a hyper-edge;
for each selected ROI, γ₁ and γ₂ are fixed and a value of λ is selected to obtain the weight vector α_k, generating one hyper-edge; for each ROI, γ₁ and γ₂ are fixed and the range of λ values is varied so that a set of hyper-edges is generated for that particular brain region; finally, the weight vectors corresponding to all brain regions are computed with the composite MCP method to obtain the hyper-edges generated by all brain regions, and these hyper-edges are combined into the brain function hyper-network model;
step S3: calculating 11 topological attributes of the brain function hyper-network model, comprising: 3 different super-network clustering coefficients based on a single node, 5 different super-network mutual clustering coefficients, an average shortest path, node degree and betweenness centrality;
step S4: performing feature selection on the topological attributes with the non-parametric Kolmogorov-Smirnov (KS) test;
step S5: adopting a support vector machine as the classifier, selecting the features showing significant differences after statistical analysis as the classification features, constructing a classification model according to the given regularization parameter C and the optimal feature subset, and checking the classification model by cross-validation;
step S6: respectively calculating the effectiveness and the redundancy of the features by adopting a mutual information analysis method according to the features selected in the step S4 and the classification model constructed in the step S5; and screening out the features according to the calculation result to obtain an optimal fusion feature set.
2. The method for classifying multi-feature fusion data based on brain function hyper-network model according to claim 1, wherein in step S1:
the preprocessing at least comprises slice-timing correction, head motion correction, co-registration, spatial normalization and low-frequency filtering;
extracting the average time sequence of each brain region, and the specific steps comprise: and extracting activation signals of all voxels in each brain region at different time points, and performing arithmetic mean on the activation signals of all voxels at different time points to obtain a mean time sequence of each brain region.
3. The method for classifying multi-feature fusion data based on brain function hyper-network model according to claim 1, wherein in the step S3,
the node degree is as follows:
d(v) = Σ_{e∈E} H(v, e)    (4)

in formula (4), H(v, e) denotes the incidence matrix of the hypergraph, computed from formula (5); v denotes a specific node and e a specific hyper-edge;
H(v, e) = 1, if v ∈ e;  H(v, e) = 0, if v ∉ e    (5)

where v denotes a node, e a hyper-edge, and E the hyper-edge set; each column of the incidence matrix represents a hyper-edge and each row a node;
the betweenness centrality is:

BC(v_i) = (2/((n−1)(n−2))) Σ_{j≠i≠k} g_{jk}(i) / g_{jk}    (6)

g_{jk} denotes the number of all shortest paths from vertex v_j to v_k, g_{jk}(i) the number of those shortest paths passing through node v_i, and n the number of nodes;
the five different mutual clustering coefficients are respectively:
ComHCC₁(u, v) = |M(u) ∩ M(v)| / |M(u) ∪ M(v)|    (7)

ComHCC₂(u, v) = |M(u) ∩ M(v)| / min(|M(u)|, |M(v)|)    (8)

ComHCC₃(u, v) = |M(u) ∩ M(v)| / max(|M(u)|, |M(v)|)    (9)

ComHCC₄(u, v) = |M(u) ∩ M(v)|² / (|M(u)| · |M(v)|)    (10)

ComHCC₅(u, v) = −log Σ_{i=|M(u)∩M(v)|}^{min(|M(u)|,|M(v)|)} [ C(|M(u)|, i) · C(total − |M(u)|, |M(v)| − i) / C(total, |M(v)|) ]    (11)

where u and v are both nodes; M(u) = {e_i ∈ E : u ∈ e_i}, in which e_i represents a hyper-edge, so M(u) represents the set of all hyper-edges containing node u; total represents the number of all hyper-edges;
defining the clustering coefficient of a single node as the average value of the clustering coefficients of the node and each adjacent node:
HCC_m(u) = (1/|N(u)|) Σ_{v∈N(u)} ComHCC_m(u, v)    (12)

ComHCC_m(u, v) represents the five mutual clustering coefficients, m = 1, 2, 3, 4, 5; u and v are both nodes;
N(u) = {v ∈ V∖{u} : ∃ e ∈ E, u ∈ e ∧ v ∈ e}

where V represents the node set, E the hyper-edge set, e a hyper-edge, and N(u) the set of the other nodes in the hyper-edges containing node u;
three different single node-based clustering coefficients are defined as:
HCC₁(u) = Σ_{v,w∈N(u), v≠w} I_E(v, w, ū) / (|N(u)| · (|N(u)| − 1))    (13)

HCC₂(u) = Σ_{v,w∈N(u), v≠w} I_E(v, w, u) / (|N(u)| · (|N(u)| − 1))    (14)

HCC₃(u) = Σ_{e_i,e_j∈M(u), i≠j} |e_i ∩ e_j ∖ {u}| / Σ_{e_i,e_j∈M(u), i≠j} |e_i ∪ e_j ∖ {u}|    (15)

where u, v and w are all nodes, and e represents a hyper-edge;

in formula (13), when there exists e_i ∈ E such that v, w ∈ e_i and u ∉ e_i, then I_E(v, w, ū) = 1, and 0 otherwise;

in formula (14), when there exists e_i ∈ E such that v, w ∈ e_i and u ∈ e_i, then I_E(v, w, u) = 1, and 0 otherwise.
4. The method for classifying multi-feature fusion data based on the brain function hyper-network model according to claim 1, wherein in step S4 the non-parametric KS test is used as the feature selection method to perform feature selection on the topological attributes, with a feature selection threshold of p < 0.05.
5. The method for classifying multi-feature fusion data based on brain function hyper-network model according to claim 1, wherein the step of verifying the classification model by using the cross validation method in step S5 is specifically as follows: randomly selecting 90% of samples from the optimal feature subset as a training set, and using the rest 10% of the optimal feature subset as a test set, and performing classification test to obtain classification accuracy; and performing arithmetic mean on the classification accuracy obtained after the 100-time classification test is repeated, and then taking the arithmetic mean as the classification accuracy of the classification model.
6. The method for classifying multi-feature fusion data based on brain function hyper-network model according to claim 1, wherein in step S6, the quantization formula is specifically expressed as follows:
D = (1/|S|) Σ_{x_i∈S} f(x_i; c)

where D represents the importance of a feature in the classification model; S the fusion feature set; |S| the total number of features in S; x_i the selected feature; c the class label of a sample; f(x_i; c) the mutual information between the selected feature and the class label c;
R = (1/|S|²) Σ_{x_i,x_j∈S} f(x_i; x_j)

where R represents the redundancy of the selected features in the classification model; x_i the selected feature; x_j the other features of the fusion feature set; f(x_i; x_j) the mutual information between the selected feature and the other features;
the screening steps are as follows: the selected features are ranked by importance and by redundancy, and the features with higher importance and lower redundancy are retained.
CN202011364161.0A 2020-11-27 2020-11-27 Multi-feature fusion data classification method based on brain function hyper-network model Active CN112418337B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011364161.0A CN112418337B (en) 2020-11-27 2020-11-27 Multi-feature fusion data classification method based on brain function hyper-network model


Publications (2)

Publication Number Publication Date
CN112418337A CN112418337A (en) 2021-02-26
CN112418337B true CN112418337B (en) 2021-11-02

Family

ID=74842872


Country Status (1)

Country Link
CN (1) CN112418337B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160974B (en) * 2021-04-16 2022-07-19 山西大学 Mental disease biological type mining method based on hypergraph clustering
CN114418982B (en) * 2022-01-14 2022-11-01 太原理工大学 Method for constructing DTI multi-parameter fusion brain network
CN114862834B (en) * 2022-05-31 2023-04-25 太原理工大学 Resting state functional magnetic resonance image data classification method
CN115356672B (en) * 2022-10-21 2023-01-24 北京邮电大学 Multi-dimensional magnetic resonance imaging method, system and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133651B (en) * 2017-05-12 2018-03-16 太原理工大学 The functional magnetic resonance imaging data classification method of subgraph is differentiated based on super-network
CN107909117A (en) * 2017-09-26 2018-04-13 电子科技大学 A kind of sorting technique and device based on brain function network characterization to early late period mild cognitive impairment
CN108229066A (en) * 2018-02-07 2018-06-29 北京航空航天大学 A kind of Parkinson's automatic identifying method based on multi-modal hyper linking brain network modelling
CN111754395A (en) * 2020-07-01 2020-10-09 太原理工大学 Robustness assessment method for brain function hyper-network model

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104700120B (en) * 2015-03-23 2018-11-13 南京工业大学 A kind of fMRI feature extractions and sorting technique based on adaptive entropy projected clustering algorithm
CN106295709A (en) * 2016-08-18 2017-01-04 太原理工大学 Functional magnetic resonance imaging data classification method based on multiple dimensioned brain network characterization
US10783639B2 (en) * 2016-10-19 2020-09-22 University Of Iowa Research Foundation System and method for N-dimensional image segmentation using convolutional neural networks
CN111553464B (en) * 2020-04-26 2023-09-29 北京小米松果电子有限公司 Image processing method and device based on super network and intelligent equipment
CN111680599B (en) * 2020-05-29 2023-08-08 北京百度网讯科技有限公司 Face recognition model processing method, device, equipment and storage medium


Also Published As

Publication number Publication date
CN112418337A (en) 2021-02-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210325

Address after: 030000 Yingze West Street, Taiyuan, Taiyuan, Shanxi

Applicant after: Taiyuan University of Technology

Applicant after: Xueyi Technology (Chengdu) Co.,Ltd.

Address before: 030000 Yingze West Street, Taiyuan, Taiyuan, Shanxi

Applicant before: Taiyuan University of Technology

TA01 Transfer of patent application right

Effective date of registration: 20210928

Address after: 030000 Yingze West Street, Taiyuan, Taiyuan, Shanxi

Applicant after: Taiyuan University of Technology

Address before: 030000 Yingze West Street, Taiyuan, Taiyuan, Shanxi

Applicant before: Taiyuan University of Technology

Applicant before: Xueyi Technology (Chengdu) Co.,Ltd.

GR01 Patent grant