CN113887397B - Classification method and classification system of electrophysiological signals based on ocean predator algorithm - Google Patents


Info

Publication number
CN113887397B
Authority
CN
China
Prior art keywords
matrix
feature
prey
training
classification model
Prior art date
Legal status
Active
Application number
CN202111153139.6A
Other languages
Chinese (zh)
Other versions
CN113887397A (en)
Inventor
袁进
马可
肖鹏
Current Assignee
Zhongshan Ophthalmic Center
Original Assignee
Zhongshan Ophthalmic Center
Priority date
Filing date
Publication date
Application filed by Zhongshan Ophthalmic Center
Priority to CN202111153139.6A
Publication of CN113887397A
Application granted
Publication of CN113887397B


Classifications

    • G06F 2218/08 Feature extraction (aspects of pattern recognition specially adapted for signal processing)
    • G06F 18/2411 Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06F 18/24147 Distances to closest patterns, e.g. nearest neighbour classification
    • G06N 20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06F 2218/12 Classification; Matching


Abstract

The present disclosure describes a method for classifying electrophysiological signals based on the marine predator algorithm. The method fuses the marine predator algorithm with a machine-learning-based classification model so that the classification model is optimized while features are selected. It comprises: acquiring an electrophysiological signal; extracting target feature data of the electrophysiological signal based on a target feature set; and inputting the target feature data into a trained classification model to obtain a classification result. In the marine predator algorithm, a prey matrix and an elite matrix are initialized based on hyper-parameter items and training feature items; the fitness of each individual in the prey matrix is calculated based on the classification model; and the prey matrix and the elite matrix are updated iteratively based on the fitness until a stopping condition is met, yielding the optimized prey matrix and elite matrix together with the trained classification model. The optimal feature subset, serving as the target feature set, and the hyper-parameters corresponding to the trained classification model are then obtained from the optimized prey matrix.

Description

Ocean predator algorithm-based electrophysiological signal classification method and classification system
Technical Field
The present disclosure relates generally to the field of machine learning, and more particularly to a method and system for classifying electrophysiological signals based on the marine predator algorithm.
Background
The biological characteristics of electrophysiological signals carry a large amount of information, and such signals are important for studying physiological conditions. Researchers in a growing number of fields (e.g., the clinical or health-care field) have therefore begun to analyze electrophysiological signals. With the development of artificial intelligence, machine learning methods are increasingly used to extract and select features of electrophysiological signals and to classify, identify, or predict from them, so that, combined with other indexes, they can assist in monitoring physiological conditions. Currently, feature selection methods are largely divided into filtering, wrapping, and embedded approaches. The filtering approach ranks the importance of features according to indexes such as correlation, mutual information, variance, or statistical differences between features, and selects the features of high importance. The wrapping approach selects feature subsets through a search algorithm, inputs each subset into a classification model based on artificial intelligence technology (such as machine learning), and selects the optimal subset using classification performance as the objective function. The embedded approach integrates feature selection with the training of the classification model, typically selecting features based on penalty terms or tree models.
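The three families of feature-selection methods can be contrasted concretely. Below is a minimal numpy sketch of the filtering approach, using variance as the ranking index; the data and the choice of index are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy feature matrix: 6 samples x 4 candidate features with very different scales.
X = rng.normal(size=(6, 4)) * np.array([0.1, 1.0, 5.0, 0.5])

# Filter-style selection: rank features by an importance index (here variance)
# and keep the top k; no classifier is consulted at this stage.
variances = X.var(axis=0)
k = 2
top_k = np.argsort(variances)[::-1][:k]  # indices of the k highest-variance features
X_selected = X[:, top_k]
```

The wrapping and embedded approaches would instead score candidate subsets with a classifier, as the later sections of this disclosure do.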
However, the combination of features selected by the filtering approach is not necessarily optimal, and its classification performance may be low. The wrapping approach achieves better classification performance than the filtering approach, but its computation is time-consuming. The embedded approach runs faster than the wrapping approach, but for electrophysiological signals with large individual differences and many feature types and quantities, the feature set has a high dimension, which easily causes the curse of dimensionality, overfitting, and low classification accuracy. Therefore, efficient feature selection and classification remains a research hotspot and difficulty in the field of electrophysiological signal pattern recognition.
Disclosure of Invention
In view of the above, the present disclosure provides a method and system for classifying electrophysiological signals based on the marine predator algorithm that can efficiently perform feature selection and classification on electrophysiological signals.
To this end, the present disclosure provides, in a first aspect, a method for classifying electrophysiological signals based on the marine predator algorithm. The classification method fuses the marine predator algorithm with a machine-learning-based classification model so that the classification model is optimized while features are selected, and includes: acquiring an electrophysiological signal; extracting, based on a target feature set including a plurality of feature items, a plurality of feature data of the electrophysiological signal corresponding to the feature items in the target feature set as target feature data of the electrophysiological signal; and inputting the target feature data into a trained classification model to obtain a classification result. The target feature set and the trained classification model are obtained by a training method including: constructing training samples whose input data include a plurality of signals to be trained; extracting, for each signal to be trained, a plurality of feature data corresponding to training feature items (the training feature items including a plurality of feature items) as training feature data; and obtaining the target feature set and the trained classification model using the marine predator algorithm, based on hyper-parameter items for the hyper-parameters of the classification model, the training feature items, and the training feature data. In the marine predator algorithm, a prey matrix and an elite matrix are initialized based on the hyper-parameter items and the training feature items; the fitness of each individual in the prey matrix is calculated based on the classification model; and the prey matrix and the elite matrix are updated iteratively based on the fitness until a stopping condition is met, so as to obtain the optimized prey matrix and elite matrix and, at the same time, the trained classification model. The target feature set and the hyper-parameters corresponding to the trained classification model are obtained from the optimized prey matrix. In this case, the complementary advantages of the marine predator algorithm and the machine-learning-based classification model are combined, so that feature selection, hyper-parameter optimization, and classification can be realized simultaneously. For electrophysiological signals with large individual differences and many feature types and quantities, the classification performance is high and the generalization ability is strong, so that feature selection and classification can be performed effectively.
In addition, in the classification method according to the first aspect of the present disclosure, optionally, initializing the prey matrix and the elite matrix based on the hyper-parameter items and the training feature items includes: making the hyper-parameter items and the training feature items correspond to columns in the prey matrix; initializing the columns of the prey matrix corresponding to the hyper-parameter items based on a first upper limit and a first lower limit; initializing the columns corresponding to the training feature items based on a second upper limit and a second lower limit; and initializing the elite matrix based on the initialized prey matrix. In this way, the columns corresponding to the hyper-parameter items and those corresponding to the training feature items can be initialized with different upper and lower limits.
In the classification method according to the first aspect of the present disclosure, optionally, in calculating the fitness of the individuals of the prey matrix based on the classification model, the hyper-parameters of the classification model are set based on the element values of an individual corresponding to the hyper-parameter items; a feature subset is selected based on the binarized element values of the individual corresponding to the training feature items; the classification model is trained on the data set corresponding to the feature subset selected from the training feature data, to obtain an individual classification model; the error rate of the individual classification model is then obtained; and the fitness of the individual is calculated based on the error rate. In the binarization, each element value of the individual corresponding to a training feature item is binarized against a preset threshold: if the binarized value is 1, the corresponding feature item is selected; if it is 0, the corresponding feature item is discarded. Thus, the fitness of the individual can be calculated based on the error rate of its classification model, and feature items can be selected from the training feature items as a feature subset on a per-individual basis.
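The binarization step above can be sketched as follows; the individual's element values, the number of hyper-parameter elements, and the threshold of 0.5 are all assumed illustrative values:

```python
import numpy as np

# One individual from the prey matrix: the first 2 elements encode hyper-parameters,
# the remaining elements correspond to candidate feature items (values in [0, 1]).
individual = np.array([1.7, 0.03, 0.82, 0.41, 0.66, 0.12, 0.55])
n_hyper = 2
threshold = 0.5  # preset binarization threshold (assumed value)

feature_genes = individual[n_hyper:]
mask = (feature_genes > threshold).astype(int)  # 1 = keep the feature item, 0 = discard it
selected = np.flatnonzero(mask)                 # indices of the selected feature items
```

The selected indices would then pick the corresponding columns of the training feature data before the individual classification model is trained.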
In addition, in the classification method according to the first aspect of the present disclosure, optionally, the fitness of the individual satisfies the formula:

    fitness_num = α · ε_num + (1 − α) · (dim_num / S),

where fitness_num is the fitness of the num-th individual, S is the number of training feature items, dim_num is the dimension of the feature subset selected by the num-th individual, ε_num represents the error rate of the classification model trained on the feature subset selected by the num-th individual, and α is a weight factor. The error rate is an average error rate, namely the mean of the classification error rates of the multiple validations in cross validation. Thus, the error rate and the number of selected features can be combined into the fitness.
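A fitness of this kind, weighting the classification error rate against the fraction of features kept, can be sketched as follows; the weight α = 0.99 is an assumed illustrative value, not taken from the disclosure:

```python
def fitness(error_rate, dim_selected, n_total, alpha=0.99):
    # alpha weights the classification error against the fraction of
    # training feature items retained; lower fitness is better.
    return alpha * error_rate + (1 - alpha) * dim_selected / n_total

# Two individuals with the same error rate but different subset sizes:
f_small = fitness(0.10, 5, 50)    # 5 of 50 feature items selected
f_large = fitness(0.10, 40, 50)   # 40 of 50 feature items selected
```

With equal error rates, the individual that selects fewer feature items gets the better (lower) fitness, which is the pressure toward compact feature subsets.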
In the classification method according to the first aspect of the present disclosure, optionally, in updating the prey matrix, the first upper limit and the first lower limit are used to constrain the boundaries of the element values in the prey matrix corresponding to the hyper-parameter items, and the second upper limit and the second lower limit are used to constrain the boundaries of the element values corresponding to the training feature items. The boundary constraint on the prey matrix satisfies the formula:

    X_num,dim = min(max(X_num,dim, P_low1), P_up1),  if dim ∈ A,
    X_num,dim = min(max(X_num,dim, P_low2), P_up2),  if dim ∈ B,

where P_up1 represents the first upper limit, P_low1 the first lower limit, P_up2 the second upper limit, P_low2 the second lower limit, X_num,dim the dim-th element of the num-th individual of the prey matrix, A the set of columns of the prey matrix corresponding to the hyper-parameter items, and B the set of columns corresponding to the training feature items. Thus, after each update, the boundaries of the prey matrix corresponding to the hyper-parameter items and to the training feature items can be constrained by the first and second upper and lower limits respectively.
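The boundary constraint can be realized with per-column-group clipping; a minimal sketch with assumed bounds and matrix values (all illustrative):

```python
import numpy as np

# Prey matrix: columns 0-1 are hyper-parameter items, columns 2-4 are feature items.
prey = np.array([[120.0, -3.0, 1.4, 0.5, -0.2],
                 [  0.5, 50.0, 0.1, 0.9,  1.8]])
n_hyper = 2
P_low1, P_up1 = 0.01, 100.0   # first lower/upper limits (hyper-parameter columns)
P_low2, P_up2 = 0.0, 1.0      # second lower/upper limits (feature-item columns)

# Clip each column group against its own bounds after an update.
prey[:, :n_hyper] = np.clip(prey[:, :n_hyper], P_low1, P_up1)
prey[:, n_hyper:] = np.clip(prey[:, n_hyper:], P_low2, P_up2)
```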
In addition, in the classification method according to the first aspect of the present disclosure, optionally, before the training feature data are extracted, the signals to be trained are preprocessed, and the training feature data are acquired from the preprocessed signals. The preprocessing includes at least one of re-referencing, down-sampling, band-pass filtering, average ambient normal channel processing, and windowing; in the windowing, each signal to be trained is divided into a plurality of segments, training feature data are extracted segment by segment, and each segment has the same duration or number of sampling points. Likewise, before the target feature data are extracted, the same preprocessing is applied to the electrophysiological signal: in the windowing, a plurality of segments of the electrophysiological signal are obtained; the target feature data of each segment are extracted based on the target feature set and input into the trained classification model to obtain a plurality of sub-classification results; and the classification result is obtained from the sub-classification results. Thus, signals to be trained of high quality can be obtained, and the classification result of the electrophysiological signal can be derived from the sub-classification results of its segments.
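The windowing step amounts to splitting a signal into equal-length segments; the signal and the window length below are illustrative stand-ins:

```python
import numpy as np

# Synthetic stand-in for one channel of a signal to be trained: 1000 sampling points.
signal = np.arange(1000.0)
win = 250   # segment length in sampling points (assumed value)

# Drop any trailing samples that do not fill a whole window, then reshape
# so that each row is one segment of equal length.
n_seg = len(signal) // win
segments = signal[:n_seg * win].reshape(n_seg, win)
```

Feature extraction would then run on each row, and at inference time each row's prediction becomes one sub-classification result.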
In addition, in the classification method according to the first aspect of the present disclosure, optionally, the electrophysiological signal is a multichannel electrophysiological signal, the signal to be trained is a multichannel electrophysiological signal, and the electrophysiological signal is one of an electroencephalogram signal, an electrooculogram signal, an electrocardiograph signal, and an electromyogram signal.
In addition, in the classification method according to the first aspect of the present disclosure, optionally, the training feature items include at least one type of feature among time-domain, frequency-domain, time-frequency-domain, and non-linear features: the time-domain features include at least one of the mean, standard deviation, coefficient of variation, peak-to-peak value, root mean square, crest factor, kurtosis factor, margin factor, and 4th-order AR model coefficients; the frequency-domain features include at least one of the peak frequency, 95% spectral edge frequency, mean power frequency, and median frequency; the time-frequency-domain features include at least one of the 8 wavelet packet energies, the wavelet packet energy entropy, the 8 wavelet packet Shannon entropies, and the wavelet packet singular entropy; and the non-linear feature includes the sample entropy. This can improve the classification performance of the classification model.
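As an illustration, a few of the time-domain features listed above can be computed as follows; the formulas are the standard definitions and the input signal is a toy example:

```python
import numpy as np

def time_domain_features(x):
    # A subset of the time-domain features named in the disclosure,
    # computed with their standard definitions.
    mean = x.mean()
    std = x.std(ddof=1)                      # sample standard deviation
    rms = np.sqrt(np.mean(x ** 2))           # root mean square
    peak_to_peak = x.max() - x.min()
    crest_factor = np.max(np.abs(x)) / rms   # peak (crest) factor
    return np.array([mean, std, rms, peak_to_peak, crest_factor])

feats = time_domain_features(np.array([1.0, -1.0, 1.0, -1.0]))
```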
In addition, in the classification method according to the first aspect of the present disclosure, optionally, the classification model is one of a support vector machine (SVM) model and a K-nearest neighbor (KNN) classification model based on the K-nearest neighbor algorithm; the hyper-parameter items of the SVM model are the penalty factor and the parameter of the kernel function, the kernel function being one of a linear kernel, a polynomial kernel, and a radial basis function. Thus, the marine predator algorithm can be fused with a variety of classification models.
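One way an individual's hyper-parameter elements might be decoded into SVM hyper-parameters is sketched below; the log-scale ranges are assumptions for illustration, not values from the disclosure:

```python
# Hypothetical decoding of an individual's two hyper-parameter elements into an
# SVM penalty factor C and an RBF kernel parameter gamma. Searching on a log
# scale is a common convention, but the mapping here is an assumed one.
element_C, element_gamma = 0.75, 0.25   # element values in [0, 1] from the prey matrix

C = 10 ** (-2 + 4 * element_C)              # maps [0, 1] onto [1e-2, 1e2]
gamma_rbf = 10 ** (-4 + 4 * element_gamma)  # maps [0, 1] onto [1e-4, 1e0]
```

The decoded (C, gamma_rbf) pair would parameterize the individual classification model whose cross-validated error rate feeds the fitness.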
In a second aspect, the present disclosure provides a classification system for electrophysiological signals based on the marine predator algorithm. The classification system fuses the marine predator algorithm with a machine-learning-based classification model so that the classification model is optimized while features are selected, and comprises an acquisition module, an extraction module, and a classification module. The acquisition module is configured to acquire an electrophysiological signal. The extraction module is configured to extract, based on a target feature set including a plurality of feature items, a plurality of feature data of the electrophysiological signal corresponding to the feature items in the target feature set as target feature data of the electrophysiological signal. The classification module is configured to input the target feature data into a trained classification model to obtain a classification result. The target feature set and the trained classification model are obtained by a training method including: constructing training samples whose input data include a plurality of signals to be trained; extracting, for each signal to be trained, a plurality of feature data corresponding to training feature items (the training feature items including a plurality of feature items) as training feature data; and obtaining the target feature set and the trained classification model using the marine predator algorithm, based on hyper-parameter items for the hyper-parameters of the classification model, the training feature items, and the training feature data. In the marine predator algorithm, a prey matrix and an elite matrix are initialized based on the hyper-parameter items and the training feature items; the fitness of each individual in the prey matrix is calculated based on the classification model; the prey matrix and the elite matrix are updated iteratively based on the fitness until a stopping condition is met, yielding the optimized prey matrix and elite matrix together with the trained classification model; and the target feature set and the hyper-parameters corresponding to the trained classification model are obtained from the optimized prey matrix. In this case, the complementary advantages of the marine predator algorithm and the machine-learning-based classification model are combined, so that feature selection, hyper-parameter optimization, and classification can be realized simultaneously, with high classification performance and strong generalization ability for electrophysiological signals with large individual differences and many feature types and quantities.
According to the present disclosure, it is possible to provide a classification method and a classification system of electrophysiological signals based on the marine predator algorithm that efficiently perform feature selection and classification on electrophysiological signals.
Drawings
The disclosure will now be explained in further detail by way of example only with reference to the accompanying drawings, in which:
fig. 1 is a schematic diagram illustrating an application scenario of a classification method of electrophysiological signals based on marine predator algorithm according to an example of the present disclosure.
FIG. 2 is a flow chart illustrating an example of a marine predator algorithm to which examples of the present disclosure relate.
FIG. 3 is a flow chart illustrating another example of a marine predator algorithm to which examples of the present disclosure relate.
Fig. 4 is a schematic diagram illustrating a training method according to an example of the present disclosure.
FIG. 5 is a flow chart illustrating a fused ocean predator algorithm and classification model search for optimal feature subsets and optimal hyper-parameters in accordance with examples of the present disclosure.
Fig. 6 is a flow chart illustrating a method of classification of electrophysiological signals based on a marine predator algorithm in accordance with examples of the present disclosure.
Fig. 7 is a schematic diagram illustrating 64-channel electrophysiological signals in accordance with examples of the present disclosure.
FIG. 8 is a block diagram illustrating a classification system for electrophysiological signals based on a marine predator algorithm in accordance with examples of the present disclosure.
FIG. 9 is a comparative schematic diagram illustrating the average fitness of various algorithms to which examples of the present disclosure relate.
Fig. 10 is a comparative schematic diagram showing the number of feature items of the optimal feature subset of the various algorithms to which examples of the present disclosure relate.
FIG. 11 is a comparative diagram illustrating classification performance of various algorithms in accordance with examples of the present disclosure.
Detailed Description
Hereinafter, preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings. In the following description, the same components are denoted by the same reference numerals, and redundant description is omitted. The drawings are schematic, and the proportions and shapes of components may differ from the actual ones. Note that the terms "comprises," "comprising," and "having," and any variations thereof, are intended to be non-exclusive in this disclosure: for example, a process, method, system, article, or apparatus that comprises or has a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to it. All methods described in this disclosure can be performed in any suitable order unless otherwise indicated herein or clearly contradicted by context.
The classification method and the classification system of the electrophysiological signals based on the marine predator algorithm can effectively perform feature selection and classification on the electrophysiological signals. The classification method according to the present disclosure may also be referred to as a processing method, a detection method, an identification method, or a discrimination method. The present disclosure is described in detail below with reference to the attached drawing figures. In addition, the application scenarios described in the examples of the present disclosure are for more clearly illustrating the technical solutions of the present disclosure, and do not constitute a limitation on the technical solutions provided by the present disclosure.
Fig. 1 is a schematic diagram illustrating an application scenario of a classification method of electrophysiological signals based on marine predator algorithm according to an example of the present disclosure. As shown in fig. 1, the acquiring device 10 may acquire the electrophysiological signal 20 of the user and transmit the electrophysiological signal to the server 30, and the server 30 may perform a classification method stored in the server 30 in the form of computer program instructions, and then classify the electrophysiological signal 20 by using the classification method to obtain a classification result (which may also be referred to as a prediction result). In some examples, the client 40 may obtain the classification result from the server 30, and then analyze the electrophysiological signal based on the classification result. In some examples, the server 30 may be a local server, a cloud server, a virtual server, or the like. In some examples, the client 40 may be a smartphone, a laptop, a Personal Computer (PC), or other types of electronic devices.
The electrophysiological signals involved in the present disclosure can be recorded or measured using electrodes, patch clamps, or other devices. In some examples, the electrophysiological signal may be one of a brain electrical signal, an eye electrical signal, a heart electrical signal, and a muscle electrical signal. Thereby, a variety of electrophysiological signals can be classified. However, the examples of the present disclosure are not limited thereto, and the classification method may be equally applicable to classification of other similar electrophysiological signals, and also to classification of other objects besides electrophysiological signals with large differences and large feature dimensions.
The classification method to which the present disclosure relates is based on the marine predator algorithm. Specifically, the classification method fuses the marine predator algorithm with a machine-learning-based classification model so that the classification model is optimized while features are selected. The Marine Predator Algorithm (MPA) is a recent meta-heuristic optimization algorithm with strong global search capability. The classification model is used to classify the electrophysiological signal to obtain a classification result. In this case, performing embedded feature extraction on the electrophysiological signals with a meta-heuristic algorithm and an effective classification model can effectively improve classification accuracy. In the marine predator algorithm, iterative updates are performed on a prey matrix and an elite matrix to obtain an optimal solution. The prey matrix is composed of a plurality of individuals, among which the individual with the best fitness is taken as the optimal solution, and the elite matrix is replicated from this optimal solution. After the iteration stops, the individual retained through memory saving is the optimal solution.
An exemplary flow of a marine predator algorithm is described below in conjunction with the figures. The exemplary process is for more clearly illustrating the technical solution of the present disclosure, and does not limit the technical solution of the present disclosure. FIG. 2 is a flow chart illustrating an example of a marine predator algorithm to which examples of the present disclosure relate.
In some examples, as shown in fig. 2, the marine predator algorithm may include initializing the prey and elite matrices (step S110), iteratively updating the prey matrix (step S130), accounting for the eddy formation and fish aggregating devices (FADs) effect (step S150), and performing memory saving and updating the elite matrix (step S170).
In some examples, in step S110, a prey matrix and an elite matrix may be initialized. In some examples, a prey matrix (also referred to as a prey location) may be initialized within the search space, which may satisfy the formula:
    X_num,dim = X_low + rand · (X_up − X_low),    (1)

where rand is a random number in [0, 1], X_up and X_low are the upper and lower limits of the search space, and X_num,dim is the dim-th element (also referred to as the dim-th dimensional spatial position) of the num-th individual (also referred to as prey) of the prey matrix.
In some examples, the prey matrix Pr and the elite matrix El initialized based on equation (1) may be represented as:

    Pr = [ X_1,1    X_1,2    …  X_1,Dim
           X_2,1    X_2,2    …  X_2,Dim
           …
           X_Num,1  X_Num,2  …  X_Num,Dim ],

    El = [ X'_1  X'_2  …  X'_Dim
           X'_1  X'_2  …  X'_Dim
           …
           X'_1  X'_2  …  X'_Dim ],

where each individual (row) of the elite matrix El is the top predator vector X' = (X'_1, …, X'_Dim), that is, the top predator vector replicated Num times; Num is the number of individuals (also referred to as the population size) and Dim is the search space dimension. The top predator vector is obtained by calculating the fitness of each individual of the prey matrix Pr and taking the individual with the best fitness. In addition, the prey matrix may be copied to serve as an optimal prey matrix; the updated prey matrix (which may also be referred to as the current prey matrix) can then be compared with the optimal prey matrix to update the optimal prey matrix.
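Equation (1) and the construction of the elite matrix can be sketched as follows; the population size, dimension, bounds, and the placeholder fitness values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
Num, Dim = 5, 8            # toy population size and search-space dimension
X_low, X_up = 0.0, 1.0     # search-space bounds

# Equation (1): uniform initialization of the prey matrix inside the search space.
prey = X_low + rng.random((Num, Dim)) * (X_up - X_low)

# Elite matrix: the fittest individual (top predator) replicated Num times.
fit = rng.random(Num)                  # placeholder fitness values for illustration
top_predator = prey[np.argmin(fit)]    # assume lower fitness is better (error-based)
elite = np.tile(top_predator, (Num, 1))
```

In the full algorithm, the placeholder fitness values would come from training the classification model on each individual's decoded feature subset and hyper-parameters.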
In some examples, in step S130, the prey matrix may be iteratively updated. In some examples, iteratively updating the prey matrix may be divided into three phases: a high speed ratio phase, a unit speed ratio phase, and a low speed ratio phase. For convenience in describing the three phases, the total number of iterations is denoted Ite_max, the current iteration number is Ite, the symbol ⊗ represents entry-wise multiplication, El_num denotes the num-th individual of the elite matrix, Pr_num denotes the num-th individual of the prey matrix, Num denotes the number of individuals, and step_num denotes the movement step of the num-th individual (which may also be referred to as the predator motion step size).
In some examples, the high speed ratio phase may occur in the early stage of the iterative update. That is, the current iteration number satisfies:

Ite < (1/3) · Ite_max.

In this case, the predator is immobile (i.e., the elite matrix is not updated) and the prey explores the solution space using the Brownian walk strategy. Therefore, the update of the prey matrix Pr can satisfy the formula:
Step_num = R_B ⊗ (El_num − R_B ⊗ Pr_num),
Pr_num = Pr_num + P · R ⊗ Step_num,
wherein R_B may represent a random number vector based on a normal distribution (representing the Brownian walk), R may represent a random number vector based on a uniform distribution, ⊗ may represent term-by-term multiplication, and P may be a constant. In some examples, P may be 0.5. In some examples, the dimension of R_B may be the search space dimension. In some examples, the dimension of R may be the search space dimension.
In some examples, the unit speed ratio phase may occur in the middle of the iterative update. That is, the current iteration number satisfies:
(1/3) · Ite_max ≤ Ite < (2/3) · Ite_max.

In this case, the predator and the prey move at the same speed; the prey explores using the Lévy walk strategy while the predator exploits using the Brownian walk strategy, so the population is divided into two halves for exploration and exploitation respectively. Therefore, the update of the first half of the individuals of the prey matrix Pr can satisfy the formula:
Step_num = R_L ⊗ (El_num − R_L ⊗ Pr_num),
Pr_num = Pr_num + P · R ⊗ Step_num,
wherein num = 1, 2, 3, …, Num/2, R_L may represent a random number vector based on a Lévy distribution, R may represent a random number vector based on a uniform distribution, ⊗ may represent term-by-term multiplication, and P may be a constant. In some examples, P may be 0.5. In some examples, the dimension of R_L may be the search space dimension. In some examples, the dimension of R may be the search space dimension.
In addition, the updating of the second half of the prey matrix Pr can satisfy the formula:
Step_num = R_B ⊗ (R_B ⊗ El_num − Pr_num),
Pr_num = El_num + P · CS ⊗ Step_num,
wherein num = Num/2, Num/2 + 1, …, Num, R_B may represent a random number vector based on a normal distribution, ⊗ may represent term-by-term multiplication, P may be a constant, and CS may represent a variable controlling the movement step size Step_num (which may also be referred to as an adaptive parameter). In some examples, P may be 0.5. In some examples, the dimension of R_B may be the search space dimension. In some examples, CS may satisfy the formula:
CS = (1 − Ite/Ite_max)^(2 · Ite/Ite_max).
In some examples, the low speed ratio phase may occur at the end of the iterative update. That is, the current iteration number satisfies:

Ite ≥ (2/3) · Ite_max.

In this case, the predator is faster than the prey, and the predator moves using the Lévy walk strategy. Therefore, the update of the prey matrix Pr can satisfy the formula:
Step_num = R_L ⊗ (R_L ⊗ El_num − Pr_num),
Pr_num = El_num + P · CS ⊗ Step_num,
wherein R_L may represent a random number vector based on a Lévy distribution, ⊗ may represent term-by-term multiplication, P may be a constant, and CS may represent the variable controlling the movement step size Step_num. In some examples, P may be 0.5. In some examples, the dimension of R_L may be the search space dimension.
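The three update phases above can be sketched together as one function (a hypothetical NumPy sketch; the Mantegna generator used for the Lévy vector R_L is an assumed implementation detail, since the text does not specify how R_L is generated):

```python
import numpy as np
from math import gamma, sin, pi

def levy(rng, shape, beta=1.5):
    # Lévy-distributed random numbers via the Mantegna algorithm
    # (an assumption: the text does not fix the generator)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0.0, sigma, shape) / np.abs(rng.normal(0.0, 1.0, shape)) ** (1 / beta)

def update_prey(prey, elite, ite, ite_max, P=0.5, seed=None):
    """One iteration of the three-phase prey update sketched from the formulas above."""
    rng = np.random.default_rng(seed)
    num, dim = prey.shape
    CS = (1 - ite / ite_max) ** (2 * ite / ite_max)   # adaptive parameter CS
    out = prey.copy()
    if ite < ite_max / 3:                             # high speed ratio: Brownian exploration
        RB = rng.normal(size=(num, dim))
        R = rng.random((num, dim))
        out = prey + P * R * (RB * (elite - RB * prey))
    elif ite < 2 * ite_max / 3:                       # unit speed ratio: split population
        half = num // 2
        RL = levy(rng, (half, dim))
        R = rng.random((half, dim))
        out[:half] = prey[:half] + P * R * (RL * (elite[:half] - RL * prey[:half]))
        RB = rng.normal(size=(num - half, dim))
        out[half:] = elite[half:] + P * CS * (RB * (RB * elite[half:] - prey[half:]))
    else:                                             # low speed ratio: Lévy walk
        RL = levy(rng, (num, dim))
        out = elite + P * CS * (RL * (RL * elite - prey))
    return out
```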
In some examples, memory savings may be made and the elite matrix updated (step S170, described later) before iteratively updating the prey matrix (step S130). In some examples, after iteratively updating the prey matrix (step S130), memory savings may be made and the elite matrix updated (step S170), and then eddy currents and fish-gathering effects resolved (step S150, described later).
As described above, the marine predator algorithm may include step S150. In some examples, in step S150, the vortex and fish aggregation (i.e., FADs) effects may be addressed. Some marine environmental factors (such as vortices and the FADs effect) change the foraging behavior of predators and can thus cause the search to fall into a locally optimal solution; to avoid falling into a locally optimal solution, a longer-jump strategy in different dimensions may be employed. Specifically, the update of the prey matrix Pr may satisfy the formula:
Pr_num = Pr_num + CS · [Xa_low + R ⊗ (Xa_up − Xa_low)] ⊗ U,    if r ≤ FADs,
Pr_num = Pr_num + [FADs · (1 − r) + r] · (Pr_r1 − Pr_r2),    if r > FADs,
where R may represent a random number vector based on a uniform distribution, CS may represent the variable controlling the movement step size Step_num, Xa_up and Xa_low may respectively represent vectors consisting of the upper and lower limits of all dimensions of the prey matrix Pr, U may represent a binary vector, r may represent a random number in [0,1], FADs may represent the probability (also referred to as a constant) of affecting the iterative (also referred to as optimization) process, and r1 and r2 may be random indices of individuals of the prey matrix Pr. In some examples, the dimension of R may be the search space dimension. In some examples, FADs may be 0.2. In some examples, each element of the binary vector U may be initialized to a random number in [0,1]; if the element value is less than or equal to FADs, the element value is updated to 0, otherwise it is updated to 1. Thereby, the binary vector can be obtained.
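The FADs update above can be sketched as follows (a hypothetical NumPy sketch; drawing r1 and r2 as random permutations of the individual indices is an assumption):

```python
import numpy as np

def fads_effect(prey, xa_low, xa_up, CS, FADs=0.2, seed=None):
    """Long-jump update against locally optimal solutions (sketch of the FADs formula)."""
    rng = np.random.default_rng(seed)
    num, dim = prey.shape
    r = rng.random()
    if r <= FADs:
        # binary vector U: element <= FADs -> 0, else 1
        U = (rng.random((num, dim)) > FADs).astype(float)
        R = rng.random((num, dim))
        return prey + CS * (xa_low + R * (xa_up - xa_low)) * U
    r1 = rng.permutation(num)   # random indices (assumed permutations)
    r2 = rng.permutation(num)
    return prey + (FADs * (1.0 - r) + r) * (prey[r1] - prey[r2])
```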
As described above, the marine predator algorithm may include step S170. In some examples, in step S170, memory saving may be performed and the elite matrix may be updated (this step may also be referred to as ocean memory or evaluating individual fitness). In step S170, memory saving may be achieved by saving the old optimal solution. In some examples, when performing memory saving, the fitness of each individual of the prey matrix may be calculated in each iteration and the elite matrix updated. Specifically, if the fitness of an individual in the prey matrix (i.e., the current prey matrix) is better than the fitness of the individual at the corresponding position in the optimal prey matrix, the individual at the corresponding position in the optimal prey matrix is replaced by that individual to update the optimal prey matrix; the optimal individual (i.e., the individual with the best fitness) in the updated optimal prey matrix is then copied as many times as there are individuals in the prey matrix to form the individuals of the elite matrix, thereby updating the elite matrix; and the current prey matrix is replaced by the updated optimal prey matrix so that the iterative update proceeds based on the optimal prey matrix. That is, each time the optimal prey matrix is updated, the updated optimal prey matrix may be used as the current prey matrix for the next iteration. In this case, the optimal prey matrix can be used to save the old optimal individual at each position of the prey matrix and is continually updated based on the current prey matrix.
In some examples, after resolving eddy currents and fish-gathering effects (step S150) or iteratively updating the prey matrix (step S130), memory savings may be made and the elite matrix updated.
As described above, the marine predator algorithm may include steps S110 through S170. To more clearly illustrate the marine predator algorithm, an exemplary flow is described below in connection with steps S110 to S170. As shown in FIG. 3, after initializing the prey matrix and the elite matrix (step S110), memory saving and updating of the elite matrix can be performed (step S170); then the prey matrix can be iteratively updated (step S130); after iteratively updating the prey matrix, memory saving and updating of the elite matrix can be performed again (step S170); then the vortex and fish aggregation effects can be addressed (step S150) to obtain the latest solution. If the condition for stopping iteration is satisfied, the process ends; if not, memory saving and updating of the elite matrix are performed (step S170), then the prey matrix is iteratively updated (step S130), and the above steps are repeated until the condition for stopping iteration is met. In some examples, the condition for stopping iteration may be that the number of iterations is reached (i.e., Ite ≥ Ite_max) or that the fitness of the individuals in the prey matrix meets a preset requirement.
The training method for fusing the marine predator algorithm and the classification model is described below with reference to the accompanying drawings. Fig. 4 is a schematic diagram illustrating a training method according to an example of the present disclosure.
In some examples, by fusing the marine predator algorithm and the classification model, the classification model may be trained to obtain a trained classification model while obtaining the optimal feature subset of the electrophysiological signal and the hyper-parameters of the classification model (i.e., fusing the marine predator algorithm and the classification model makes it possible to obtain the optimal feature subset, the trained classification model, and the hyper-parameters of the trained classification model). Specifically, the classification model classifies the electrophysiological signals and constructs the fitness function (i.e., provides the fitness of an individual) for the marine predator algorithm, which in turn searches for the optimal feature subset for the classification model and optimizes the hyper-parameters of the classification model.
In some examples, the training feature items (i.e., the original feature items extracted from the signal to be trained) corresponding to the signal to be trained and the hyper-parameter items of the classification model may be associated with dimensions (also referred to as columns) of the prey matrix. The classification result of the classification model is obtained based on each individual in the prey matrix, and the fitness of the individual is obtained based on the classification result, so as to iteratively update the prey matrix and the elite matrix. Thereby, the optimized prey matrix and elite matrix are obtained and the trained classification model is obtained at the same time. Therefore, the optimal feature subset and the hyper-parameters corresponding to the trained classification model can be obtained based on the optimized prey matrix.
In some examples, as shown in fig. 4, the training method may include constructing training samples (step S210), extracting training feature data corresponding to the training samples (step S230), and obtaining a target feature set and a trained classification model using the marine predator algorithm based on the hyper-parameter items of the classification model, the training feature items, and the training feature data (step S250).
In some examples, in step S210, training samples may be constructed. In some examples, the input data of the training samples may include a plurality of signals to be trained. In some examples, the training samples may have label data corresponding to the input data. In some examples, the label data of the training samples may include the true class (i.e., gold standard of the class) to which each signal to be trained belongs. Thereby, the error rate of the classification model can subsequently be obtained based on the label data.
In some examples, the signal to be trained may be an electrophysiological signal. The signal to be processed may comprise data of a plurality of sampling points. In some examples, the electrophysiological signal may be a multichannel electrophysiological signal. Therefore, more features can be extracted for optimization by the marine predator algorithm and for training the classification model. Taking the signal to be trained as an electroencephalogram signal as an example, 64-channel electroencephalogram signals of 21 subjects, with subject information removed, can be collected to construct training samples, where the 21 subjects can include 11 abnormally awake subjects and 10 normally awake subjects.
In some examples, before extracting the training feature data, the signal to be trained may be preprocessed, and the training feature data may then be obtained based on the preprocessed signal to be trained.
In some examples, the pre-processing may include at least one of re-referencing processing, down-sampling processing, band-pass filtering processing, averaging ambient normal channel processing, and windowing processing. Thus, a signal to be trained with high quality can be obtained. It should be noted that the various pretreatments may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context.
In some examples, in the re-referencing process, re-referencing may be based on binaural mastoid electrodes. In some examples, the number of sample points may be reduced by lowering the sampling frequency in the down-sampling process. For example, the sampling frequency may be reduced to 1 kHz (kilohertz). In some examples, in the band-pass filtering process, interference in the signal to be trained may be removed by band-pass filtering over a preset frequency range. For example, the preset frequency range may be 0.5 to 45 Hz (hertz). In some examples, in averaging the surrounding normal channels, the surrounding normal channels may be averaged to replace the signal of an abnormal channel. In some examples, in the windowing process, each signal to be trained may be divided into a plurality of segments. In some examples, the duration or number of sample points of each segment may be the same. This can improve the training effect. In some examples, windowing may be based on a preset window length and overlap length. For example, the training signal may be windowed into a plurality of segments based on a preset window length of 10 s and an overlap length of 5 s. For example, 2489 segments may be acquired for the above-mentioned signals to be trained of the 21 subjects.
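The windowing step above (10 s windows with 5 s overlap) can be sketched as follows (a hypothetical NumPy sketch; the function name and the (channels, samples) layout are illustrative):

```python
import numpy as np

def window_segments(signal, fs, win_s=10.0, overlap_s=5.0):
    """Split a (channels, samples) signal into overlapping windows
    (window length 10 s, overlap 5 s, as in the example above)."""
    win = int(win_s * fs)                    # samples per window
    step = int((win_s - overlap_s) * fs)     # hop between window starts
    n = signal.shape[-1]
    return [signal[..., s:s + win] for s in range(0, n - win + 1, step)]

# e.g. 60 s of a 2-channel signal at 100 Hz -> 11 segments of 10 s each
segs = window_segments(np.zeros((2, 6000)), fs=100)
```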
In some examples, the plurality of segments obtained via the windowing process may be used for training feature data extraction (i.e., the plurality of segments may be used to extract training feature data corresponding to each segment by segment). In this case, the training samples can be expanded, and the training time can be reduced and the training effect can be improved. It should be noted that, if the windowing processing is performed, the training feature data may be extracted based on the signal to be trained (i.e., each segment) after the windowing processing.
In some examples, in step S230, training feature data corresponding to the training samples may be extracted. In some examples, a plurality of feature data of each signal to be trained corresponding to a training feature item may be extracted as training feature data. In some examples, the training feature items may include a plurality of feature items. That is, the training feature data may include feature items and feature data corresponding to the feature items. It should be noted that, if the preprocessing is performed, training feature data may be acquired based on the signal to be trained after the preprocessing. In addition, the following description regarding the signal to be trained is equally applicable to each segment after the windowing process.
In some examples, if the signal to be trained is a multi-channel electrophysiological signal, feature data of multiple channels may be extracted as training feature data of the signal to be trained, and the training feature items may be feature items corresponding to the multiple channels. For example, taking the number of channels as 64 as an example, if the number of feature items corresponding to one channel is 36, and the feature items corresponding to the channels are the same, the training feature item may be the number of channels multiplied by the feature item corresponding to one channel, that is, the training feature item may be 2304. Specifically, if the electrophysiological signal is taken as the electroencephalogram signal as an example, the number of the feature items corresponding to one channel of the electrophysiological signal may be 42, and if the number of the channels is 64, the training feature item may be 2688. However, examples of the present disclosure are not limited thereto, and if each channel has not exactly the same feature term, the number of training feature terms may be the sum of the numbers of feature terms of the plurality of channels.
In some examples, the training feature term may include at least one type of feature among a time-domain feature, a frequency-domain feature, a time-frequency-domain feature, and a nonlinear feature. In this case, training can be performed based on a plurality of types of features. This can improve the classification performance of the classification model. Feature items corresponding to each type of feature are described in detail below. In addition, for convenience of describing the training feature items, let the data of the k-th sampling point of one channel of the signal to be processed be represented as x(k), and the total number of sampling points of one channel of the signal to be processed be M, where k = 1, 2, 3, …, M. In addition, the following feature items are for the data of the sampling points of one channel unless otherwise specified.
In some examples, the temporal feature may include at least one feature of a mean, a standard deviation, a coefficient of variation, a peak-to-peak, a root-mean-square, a peak factor, a kurtosis factor, a margin factor, and an AR model coefficient of order 4.
In addition, the average value may be the average of the data of the plurality of sampling points of one channel of the signal to be processed. In addition, the standard deviation may be the standard deviation of the data of the plurality of sampling points of one channel of the signal to be processed. In addition, the coefficient of variation may be the ratio of the standard deviation to the average value of one channel of the signal to be processed. In addition, the peak-to-peak value may be the difference between the highest value and the lowest value of one channel of the signal to be processed. In addition, the root mean square may be obtained by summing the squares of all sampled data of one channel of the signal to be processed, averaging, and then taking the square root. In addition, the peak factor may be the peak-to-peak value of one channel of the signal to be processed divided by the root mean square.
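The time-domain definitions above can be sketched as follows (a hypothetical NumPy sketch for one channel; the function and key names are illustrative):

```python
import numpy as np

def time_domain_features(x):
    """Basic time-domain features of one channel, per the definitions above."""
    mean = x.mean()                           # average value
    sd = x.std()                              # standard deviation
    cv = sd / mean                            # coefficient of variation
    pk = x.max() - x.min()                    # peak-to-peak value
    rms = np.sqrt(np.mean(x ** 2))            # root mean square
    crest = pk / rms                          # peak factor
    ku = np.mean((x - mean) ** 4) / sd ** 4   # kurtosis (4th central moment / SD^4)
    return {"mean": mean, "sd": sd, "cv": cv, "pk": pk,
            "rms": rms, "crest": crest, "kurtosis": ku}
```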
In addition, the kurtosis may be a ratio of a fourth-order central moment of a channel of the signal to be processed to the fourth power of the standard deviation. The kurtosis Ku may satisfy the formula:
Ku = [ (1/M) · Σ_{k=1}^{M} (x(k) − Mean)^4 ] / SD^4,
where Mean may be the average value and SD may be the standard deviation. In addition, the kurtosis factor may represent the smoothness of the waveform of one channel of the signal to be processed. The kurtosis factor KF may satisfy the formula:
KF = [ (1/M) · Σ_{k=1}^{M} x(k)^4 ] / RMS^4,

where RMS may be the root mean square.
in addition, the margin factor may be a ratio of a peak-to-peak value to a square root amplitude of one channel of the signal to be processed. The margin factor may satisfy the formula:
margin factor = Pk / [ (1/M) · Σ_{k=1}^{M} √|x(k)| ]^2,
where Pk may be the peak-to-peak value.
In addition, the 4th-order AR model coefficients may be obtained from the data of the sampling points of one channel of the signal to be processed, which may satisfy the formula:
x(k) = Σ_{o=1}^{q} ARMC_o · x(k − o) + e(k),    (2)
where q may be the order of the AR model, ARMC_o may represent the o-th order AR model coefficient, and e(k) may be the white-noise residual of the data of the k-th sampling point. In some examples, if q is 4, 4th-order AR model coefficients may be obtained (i.e., 4 time-domain features can be obtained). Specifically, the data of the sampling points of one channel of the signal to be processed may be substituted into formula (2), and the coefficients of the 4th-order AR model may be solved using the Levinson-Durbin recursive algorithm. That is, ARMC_1, ARMC_2, ARMC_3, and ARMC_4 are the 4 AR model coefficients.
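The Levinson-Durbin solution of formula (2) can be sketched as follows (a hypothetical NumPy sketch using the biased autocorrelation estimate; details not fixed by the text, such as the estimator, are assumptions):

```python
import numpy as np

def ar_coeffs(x, q=4):
    """Estimate order-q AR model coefficients via the Levinson-Durbin recursion,
    with the convention x(k) ~ sum_o a[o] * x(k - o) + e(k) as in formula (2)."""
    M = len(x)
    # biased autocorrelation estimates r[0..q]
    r = np.array([np.dot(x[:M - lag], x[lag:]) / M for lag in range(q + 1)])
    a = np.zeros(q + 1)
    e = r[0]                                            # prediction error power
    for o in range(1, q + 1):
        k = (r[o] - np.dot(a[1:o], r[o - 1:0:-1])) / e  # reflection coefficient
        a_new = a.copy()
        a_new[o] = k
        a_new[1:o] = a[1:o] - k * a[o - 1:0:-1]
        a, e = a_new, e * (1.0 - k * k)
    return a[1:]                                        # ARMC_1 ... ARMC_q
```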
In some examples, the frequency-domain features may include at least one of a peak frequency, a 95% spectral edge frequency, an average power frequency, and a median frequency. In some examples, the time-domain information of the signal to be processed may be converted into frequency-domain information by a Fourier transform. The frequency-domain information may include the power spectral density. For convenience of describing the frequency-domain features, let the power spectral density corresponding to one channel of the signal to be processed be P(f), where f is the frequency and df is the differential at frequency f (i.e., the differential of the power spectral density at f), and let f_z be half the sampling frequency.
In addition, the peak frequency may be the frequency corresponding to the peak of the power spectral density. The peak frequency PF may satisfy the formula: P(PF) = max{P(f)}. In addition, the 95% spectral edge frequency SEF may satisfy the formula:

∫_0^SEF P(f) df = 0.95 · ∫_0^{f_z} P(f) df.
in addition, the average power frequency MPF may satisfy the formula:
MPF = ∫_0^{f_z} f · P(f) df / ∫_0^{f_z} P(f) df.
in addition, the median frequency MF may satisfy the formula:
∫_0^MF P(f) df = 0.5 · ∫_0^{f_z} P(f) df.
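The four frequency-domain features above can be computed from a sampled power spectral density as follows (a hypothetical NumPy sketch; discrete sums stand in for the integrals, and the function name is illustrative):

```python
import numpy as np

def spectral_features(P, f):
    """Peak frequency, 95% spectral edge frequency, average power frequency,
    and median frequency from a PSD P sampled on a frequency grid f."""
    pf = f[np.argmax(P)]                      # peak frequency: P(PF) = max P(f)
    c = np.cumsum(P) / np.sum(P)              # normalized cumulative power
    sef95 = f[np.searchsorted(c, 0.95)]       # 95% spectral edge frequency
    mpf = np.sum(f * P) / np.sum(P)           # average (mean) power frequency
    mf = f[np.searchsorted(c, 0.5)]           # median frequency
    return pf, sef95, mpf, mf
```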
In some examples, if the signal to be processed is an electroencephalogram signal, the frequency-domain features may further include the relative power spectrum of each of a plurality of frequency bands and a speed ratio corresponding to the plurality of frequency bands. In some examples, the relative power spectrum RP_band of the band-th frequency band and the speed ratio SF may respectively satisfy the formulas:
RP_band = Power_band / Σ_{band=1}^{N} Power_band,
SF = Σ_{band=1}^{q} Power_band / Σ_{band=q+1}^{N} Power_band,
where band may represent the index of a frequency band, N may represent the number of frequency bands, q may be half the number of frequency bands rounded down, and Power_band may represent the power of the band-th frequency band. In some examples, the frequency bands may include five frequency bands of 0.5 to 4 Hz, 4 to 8 Hz, 8 to 13 Hz, 13 to 30 Hz, and 30 to 45 Hz, in that order. In this case, the number of frequency bands may be 5, and half the number of frequency bands rounded down may be 2.
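The relative power spectrum and speed ratio can be sketched as follows (a hypothetical NumPy sketch; the direction of the speed ratio, slower bands over faster bands, is an assumption not fixed by the text):

```python
import numpy as np

BANDS = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 45)]   # the five bands above, in Hz

def band_features(P, f, bands=BANDS):
    """Relative power per band and the speed ratio from a PSD on grid f."""
    powers = np.array([P[(f >= lo) & (f < hi)].sum() for lo, hi in bands])
    rp = powers / powers.sum()                 # relative power spectrum RP_band
    q = len(bands) // 2                        # half the number of bands, rounded down
    sf = powers[:q].sum() / powers[q:].sum()   # speed ratio (direction assumed)
    return rp, sf
```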
In some examples, the time-frequency domain features may include at least one of 8 wavelet packet energies, wavelet packet energy entropies, 8 wavelet packet shannon entropies, and wavelet packet singular entropies that are solved based on a 3-layer wavelet packet decomposition. In addition, the wavelet packet energy entropy and the wavelet packet shannon entropy can reflect the complexity and the degree of order of signal energy in subspace distribution.
In addition, the energy of 8 wavelet packets may be found based on 3-layer wavelet packet decomposition. In some examples, the wavelet packet energy may be obtained based on the wavelet packet decomposition coefficients. In some examples, a binary tree structure may be obtained by performing wavelet packet decomposition on a signal to be trained, and then obtaining a wavelet packet decomposition coefficient of a node of a j-th layer based on the wavelet packet decomposition coefficient of the node of the j-1-th layer, where even-numbered nodes and odd-numbered nodes may correspond to different conjugate filter coefficients, and the number of decomposition layers may be 3. In some examples, the wavelet packet energy WPE (n) of the nth node of the jth tier may satisfy the formula:
WPE(n) = ‖d_j^n‖_2^2 = Σ_i (d_{j,i}^n)^2,

wherein ‖·‖_2 may represent the L2 norm, and d_{j,i}^n may represent the i-th wavelet packet decomposition coefficient of the n-th node of the j-th layer. In addition, for j = 3 (i.e., layer 3), there are 8 nodes of wavelet packet energy. Thereby, 8 wavelet packet energies can be obtained. In some examples, the wavelet packet decomposition coefficients may be reconstructed wavelet packet decomposition coefficients, such that the wavelet packet decomposition coefficients remain consistent with the length of the signal to be trained (which may also be referred to as the original signal).
In addition, the wavelet packet energy entropy can satisfy the formula:
WPEEn = − Σ_{n=1}^{2^j} p(n) · ln p(n),  where p(n) = WPE(n) / Σ_{n=1}^{2^j} WPE(n),

WPE(n) may be the wavelet packet energy of the n-th node, and j may be 3.
In addition, the wavelet packet shannon entropy wpshenn (n) of the nth node may satisfy the formula:
WPShEn(n) = − Σ_{i=1}^{M} (d_{j,i}^n)^2 · ln (d_{j,i}^n)^2,
where M may be the total number of sample points for one channel of the signal to be processed. In addition, for j =3 (i.e., layer 3), there are 8 nodes of wavelet packet shannon entropy. Thus, 8 wavelet packet shannon entropy can be obtained.
In addition, the wavelet packet singular entropy can be obtained by performing Singular Value Decomposition (SVD) on the matrix composed of the 8 wavelet packet coefficient vectors to obtain a singular value matrix with only diagonal values, reducing it to a one-dimensional vector, and then calculating based on the one-dimensional vector. In some examples, the wavelet packet singular entropy WPSVDEn may satisfy the formula:
WPSVDEn = − Σ_{n=1}^{2^j} Λ(n) · ln Λ(n),

where Λ(n) may represent the value of the n-th element of the one-dimensional vector, and j may be 3.
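The 3-layer wavelet packet energies and the energy entropy above can be sketched as follows (a hypothetical NumPy sketch; the Haar wavelet is an illustrative choice, since the text does not fix the wavelet or filter pair):

```python
import numpy as np

H = np.array([1.0, 1.0]) / np.sqrt(2.0)    # Haar low-pass filter (illustrative)
G = np.array([1.0, -1.0]) / np.sqrt(2.0)   # Haar high-pass (conjugate) filter

def wavelet_packet_nodes(x, levels=3):
    """Full wavelet packet decomposition: each node splits into a low-pass and a
    high-pass child, giving 2^levels terminal nodes (8 for levels=3)."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nxt = []
        for d in nodes:
            lo = d[0::2] * H[0] + d[1::2] * H[1]   # convolve-and-downsample
            hi = d[0::2] * G[0] + d[1::2] * G[1]
            nxt += [lo, hi]
        nodes = nxt
    return nodes

def wavelet_packet_energy(x, levels=3):
    """WPE(n): squared L2 norm of each terminal node's coefficients."""
    return np.array([np.sum(d ** 2) for d in wavelet_packet_nodes(x, levels)])

def wavelet_packet_energy_entropy(x, levels=3):
    """WPEEn over the normalized node energies p(n)."""
    wpe = wavelet_packet_energy(x, levels)
    p = wpe / wpe.sum()
    return -np.sum(p * np.log(p))
```

Because the Haar filter pair is orthogonal, the total node energy equals the signal energy, which gives a quick sanity check on the decomposition.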
In some examples, the non-linear characteristic may include sample entropy. In some examples, the sample entropy may be obtained based on data (which may also be referred to as a time series) of a plurality of sample points of one channel of the signal to be processed.
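Sample entropy can be sketched as follows (a hypothetical NumPy sketch of the standard definition; the embedding dimension m = 2 and tolerance r = 0.2 × SD are conventional choices not fixed by the text):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D time series: -ln(A/B), where B and A count
    template pairs of lengths m and m+1 within tolerance r (Chebyshev distance)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def pair_count(mm):
        # embed the series into mm-dimensional template vectors
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        # Chebyshev distance between all pairs of templates
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        n = len(emb)
        return (np.count_nonzero(d <= r) - n) / 2.0   # pairs, excluding self-matches

    B = pair_count(m)
    A = pair_count(m + 1)
    return -np.log(A / B)
```

A highly regular (e.g. periodic) series yields a value near zero, while an irregular series yields a larger value, which matches the intended use of sample entropy as a complexity feature.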
As described above, the training method may include step S250. In some examples, in step S250, the target feature set and the trained classification model may be obtained using a marine predator algorithm based on the hyper-parametric terms of the hyper-parameters of the classification model, the training feature terms, and the training feature data obtained in step S230.
In some examples, the classification model may be one of a support vector machine (SVM) model and a K-nearest neighbor (KNN) classification model based on the K-nearest neighbor classification algorithm. Thus, the marine predator algorithm can be fused with a variety of classification models. In addition, the target feature set can be the optimal feature subset obtained by searching with the fused marine predator algorithm and classification model. In addition, the hyper-parameters may be parameters set before the classification model begins training. By optimizing the hyper-parameters, a set of optimal hyper-parameters can be selected for the classification model to improve classification performance. A hyper-parameter may include a hyper-parameter item and a hyper-parameter value corresponding to the hyper-parameter item.
In some examples, the hyper-parameter items of the support vector machine model may be the penalty factor and the parameters of the kernel function. In some examples, the kernel function of the support vector machine model may be one of a linear kernel function, a polynomial kernel function, and a radial basis function. Therefore, the marine predator algorithm and the support vector machine model can be fused to obtain the hyper-parameters of the support vector machine model. In addition, for a linear kernel function, the parameters of the kernel function may include a linear coefficient and a constant term. In addition, for a polynomial kernel function, the parameters of the kernel function may include a power coefficient and a constant term. In addition, for a radial basis function, the parameters of the kernel function may include the bandwidth of the radial basis function. Preferably, the hyper-parameter items of the support vector machine model may include the penalty factor and the bandwidth of the radial basis function.
In some examples, the hyper-parametric terms of the K-nearest neighbor classification model may include K values (which may be positive integers), distance metrics (e.g., distance metrics may include, but are not limited to, euclidean distance, minkowski distance, manhattan distance, and chebyshev distance, etc.), and distance weights (e.g., distance weights may include, but are not limited to, 1/distance, and 1/distance squared, etc.). Preferably, only the K value and the distance measure can be optimized.
In some examples, the hyperparameters may be set before the training of the classification models begins, and then the hyperparameters-set classification models are trained to obtain trained classification models. That is, the trained classification model has corresponding hyperparameters.
In some examples, in step S250, a sea predator algorithm and a classification model may be fused to search for an optimal feature subset and an optimal hyperparameter. The marine predator algorithm is described with reference to steps S110 through S170 above. FIG. 5 is a flow chart illustrating a fused ocean predator algorithm and classification model search for optimal feature subsets and optimal hyper-parameters in accordance with examples of the present disclosure.
In some examples, as shown in fig. 5, step S250 may include initializing a prey matrix and an elite matrix based on the hyper-parameter terms and the training feature terms (step S251). In some examples, the hyper-parameter terms and the training feature terms may be mapped to columns in the prey matrix prior to initialization. In some examples, the hyper-parameter terms and the training feature terms may be in a one-to-one correspondence with columns in the prey matrix. That is, the dimensions of the prey matrix may be the sum of the number of hyper-parameter terms and the number of training feature terms.
In some examples, the first columns in the prey matrix may be associated with the hyper-parameter items, and the remaining columns may be associated with the training feature items. For example, if the classification model is a support vector machine model with two hyper-parameter items, the penalty factor and the bandwidth of the radial basis function, and the number of training feature items is s, the first two columns of the prey matrix may correspond to the hyper-parameter items and the last s columns may correspond to the training feature items. Examples of the present disclosure are not limited thereto: by recording the correspondence between the hyper-parameter items and training feature items and the columns of the prey matrix, the hyper-parameter items and the training feature items may correspond to arbitrary columns of the prey matrix. For example, the hyper-parameter items may correspond to middle columns and the training feature items to the other columns. As another example, the hyper-parameter items may correspond to later columns and the training feature items to earlier columns.
In some examples, after the correspondence of the hyper-parameter items and the training feature items to the columns in the prey matrix is set, the prey matrix may be initialized based on the correspondence, and then the elite matrix may be initialized based on the initialized prey matrix.
In some examples, when initializing the prey matrix, the columns corresponding to the hyper-parameter items and the columns corresponding to the training feature items may be initialized with different upper and lower limits. In some examples, when initializing the prey matrix, the columns in the prey matrix corresponding to the hyper-parameter items may be initialized based on a first upper limit and a first lower limit, and the columns in the prey matrix corresponding to the training feature items may be initialized based on a second upper limit and a second lower limit. Therefore, the columns corresponding to the hyper-parameter items and the columns corresponding to the training feature items in the prey matrix can be initialized with different upper and lower limits. In some examples, for the two hyper-parameter items of the penalty factor and the bandwidth of the radial basis function of the support vector machine model, the first upper limit may be 1000 and the first lower limit may be 0.001. In some examples, the second upper limit may be 1 and the second lower limit may be 0. The process of initializing the prey matrix and the elite matrix may refer to the process of initializing the prey matrix and the elite matrix in the marine predator algorithm (step S110).
In some examples, as shown in fig. 5, step S250 may further include calculating a fitness of each individual in the prey matrix based on the classification model, and iteratively updating the prey matrix and the elite matrix based on the fitness until a condition for stopping the iteration is satisfied to obtain an optimized prey matrix and elite matrix and simultaneously obtain a trained classification model (step S252).
In some examples, in calculating the fitness of an individual of a prey matrix based on a classification model, a hyper-parameter value corresponding to a hyper-parameter term and a subset of features used to train the classification model may be obtained based on the individual. In some examples, the hyper-parameters of the classification model may be set using the hyper-parameter values and the classification model may be trained using the data sets corresponding to the feature subsets to obtain trained classification models corresponding to individuals (the trained classification models corresponding to the individuals may be referred to as individual classification models).
In some examples, the hyper-parameter values corresponding to the hyper-parameter items may be the individual's element values corresponding to those items. In some examples, the feature subset used to train the classification model may be selected based on the element values of the individual corresponding to the training feature items, via a binarization process. Specifically, in the binarization processing, the element values of the individual corresponding to the training feature items may be binarized based on a preset binarization threshold; a binarized value of 1 may indicate that the corresponding feature item among the training feature items is selected, and a binarized value of 0 may indicate that the corresponding feature item is discarded. In this case, feature items can be selected from the training feature items on a per-individual basis as a feature subset. Thereby, a feature subset can be obtained. In some examples, the preset binarization threshold may be 0.5.
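A minimal sketch of this decoding step, assuming NumPy (the function name is illustrative, and whether an element exactly equal to the threshold is kept is an assumption here, not stated by the disclosure):

```python
import numpy as np

def select_feature_subset(individual, n_hyper, threshold=0.5):
    """Split an individual into its hyper-parameter values and a binary
    feature mask: feature-column elements at or above the threshold
    binarize to 1 (feature kept), the rest to 0 (feature discarded)."""
    hyper_values = individual[:n_hyper]
    mask = individual[n_hyper:] >= threshold
    selected = np.flatnonzero(mask)  # indices of kept training feature items
    return hyper_values, selected
```

For an individual `[10.0, 0.5, 0.7, 0.2, 0.9]` with two hyper-parameter columns, the feature columns `[0.7, 0.2, 0.9]` binarize to `[1, 0, 1]`, so feature items 0 and 2 are kept.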
In some examples, a data set corresponding to the feature subset may be selected from the training feature data based on the feature subset. In some examples, feature data corresponding to each feature item in the feature subset may be filtered out from the training feature data, and the feature subset and a plurality of feature data corresponding to the feature subset may be taken as the data set. In some examples, the data set may also include label data for the training samples.
In some examples, the fitness of an individual may be calculated based on an individual classification model. Specifically, each individual of the prey matrix may correspond to an individual classification model, and the fitness of an individual may be calculated based on the individual classification model. In some examples, the fitness may be calculated based on an indicator of the classification performance of the individual classification models.
In some examples, the indicators of classification performance may include at least one of sensitivity, specificity, error rate, and accuracy rate. In some examples, fitness of an individual may be calculated based on the error rate corresponding to the individual classification model. Specifically, the error rate corresponding to the individual classification model may be obtained based on the individual classification model, and the fitness of the individual may be calculated based on the error rate. Thus, the fitness of the individual can be calculated based on the error rate of the individual classification model. In some examples, the classification results predicted by the individual classification models may be compared to the label data of the training samples to obtain corresponding error rates for the individual classification models.
In some examples, the fitness of an individual is calculated based on the error rate corresponding to the individual classification model, and may satisfy the formula:

$\mathrm{fitness}_{num} = \alpha \cdot E_{num} + (1 - \alpha) \cdot \dfrac{dim_{num}}{S}$

wherein $\mathrm{fitness}_{num}$ may be the fitness of the num-th individual, $S$ may be the number of training feature items, $dim_{num}$ may be the dimension of the feature subset selected by the num-th individual, $E_{num}$ may represent the error rate corresponding to the classification model trained based on the feature subset selected by the num-th individual (i.e., the error rate corresponding to the individual classification model), and $\alpha$ may be a weight factor integrating the error rate and the number of features. Thus, the fitness can be obtained by integrating the error rate and the feature quantity. In some examples, $\alpha$ may be 0.98.
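The fitness calculation combining the error rate and the selected-feature fraction can be sketched as follows (illustrative function name; lower fitness is better):

```python
def fitness(error_rate, n_selected, n_total, alpha=0.98):
    """Fitness weighting the classification error rate against the
    fraction of selected features; alpha = 0.98 emphasizes the error."""
    return alpha * error_rate + (1.0 - alpha) * (n_selected / n_total)
```

For example, an individual whose model has a 20% error rate using 10 of 100 features scores 0.98 x 0.2 + 0.02 x 0.1 = 0.198.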
In some examples, the error rate may be an average error rate. The average error rate may be an average of classification error rates corresponding to multiple verifications in the cross-validation. In some examples, the cross-validation may be K-fold cross-validation. In some examples, K may be 5 or 10.
The following describes the process of obtaining the error rate of a support vector machine model, taking K-fold cross-validation and the support vector machine model as an example. As described above, the hyper-parameter items of the support vector machine model may be two hyper-parameter items, a penalty factor and the bandwidth of a radial basis function; with the number of training feature items denoted s, the two hyper-parameter items may correspond to the first two columns of the prey matrix, and the training feature items may correspond to the last s columns. It should be noted that this description of the process of obtaining the error rate does not limit the present disclosure, and the process may be applied analogously to other classification-performance indicators.
In the present embodiment, first, the first two element values of each individual may be input into the support vector machine model (i.e., used as the hyper-parameters of the support vector machine model). Then, a feature subset is selected based on the last s element values of each individual after binarization processing (that is, the feature subset is selected based on the element values corresponding to the training feature items and subjected to binarization processing), and a data set corresponding to the feature subset is constructed from the feature subset and the training feature data. Next, the data set can be divided into K folds using K-fold cross-validation: in each round, K − 1 folds are used to train the support vector machine model and the remaining fold is used to test the trained model; repeating this K times yields K classification error rates. Finally, the K classification error rates may be averaged to obtain an average error rate, which is used as the error rate for calculating the fitness of the individual.
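The K-fold averaging described above can be sketched as follows. This is a self-contained illustration, not the disclosure's implementation: a minimal 1-nearest-neighbour classifier stands in for the support vector machine (a library SVM such as scikit-learn's `SVC` could be substituted), and all names are hypothetical:

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=1):
    """Minimal K-nearest-neighbour classifier (stand-in for the SVM)."""
    preds = []
    for x in test_X:
        d = np.linalg.norm(train_X - x, axis=1)
        nearest = train_y[np.argsort(d)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

def kfold_error_rate(X, y, k_folds=5, seed=0):
    """Average classification error rate over K-fold cross-validation:
    each fold serves once as the test set, the remaining K-1 folds as
    training data; the K per-fold error rates are averaged."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k_folds)
    errors = []
    for i in range(k_folds):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k_folds) if j != i])
        pred = knn_predict(X[train], y[train], X[test])
        errors.append(np.mean(pred != y[test]))
    return float(np.mean(errors))
```

On perfectly separable data the average error rate is 0; in the training method this value would feed directly into the fitness formula.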
As described above, in step S252, the prey matrix and the elite matrix may be iteratively updated based on the fitness until a condition to stop the iteration is satisfied. As described above, in some examples, the condition for stopping the iteration may be that the number of iterations is reached, or that the fitness of the individuals in the prey matrix meets a preset requirement. In some examples, the fitness of the individual meeting the preset requirement may be that the fitness of the individual is less than the preset fitness.
In some examples, in updating the prey matrix, the values of the elements in the prey matrix may be constrained using boundary values. Since the elite matrix is updated based on the prey matrix, constraining the element values in the prey matrix simultaneously constrains the element values in the elite matrix. In some examples, the element values in the prey matrix corresponding to the hyper-parameter items and to the training feature items may be constrained with different boundary values, respectively. In some examples, in the marine predator algorithm, the prey matrix may be constrained by boundary values after steps such as iteratively updating the prey matrix (step S130) and updating the prey matrix to account for the eddy-current and fish-aggregation effect (step S150).
In some examples, the boundaries of the element values in the prey matrix corresponding to the hyper-parameter items may be constrained using a first upper limit and a first lower limit, and the boundaries of the element values corresponding to the training feature items may be constrained using a second upper limit and a second lower limit. In some examples, constraining the boundaries of the prey matrix may satisfy the formula:

$X_{num,dim} = \min(\max(X_{num,dim}, P_{low1}), P_{up1})$, for $dim \in A$;
$X_{num,dim} = \min(\max(X_{num,dim}, P_{low2}), P_{up2})$, for $dim \in B$,

wherein $P_{up1}$ may represent the first upper limit, $P_{low1}$ may represent the first lower limit, $P_{up2}$ may represent the second upper limit, $P_{low2}$ may represent the second lower limit, $X_{num,dim}$ may represent the dim-th element of the num-th individual in the prey matrix, $A$ may represent the set of elements in the prey matrix corresponding to the hyper-parameter items, and $B$ may represent the set of elements corresponding to the training feature items. Therefore, the boundaries of the updated prey matrix corresponding to the hyper-parameter items and the training feature items can be respectively constrained based on the first upper and lower limits and the second upper and lower limits.
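The per-group boundary constraint can be sketched as a clipping step, assuming NumPy (function name and default bounds are illustrative, following the example limits given earlier):

```python
import numpy as np

def constrain_prey(prey, n_hyper,
                   hyper_low=0.001, hyper_high=1000.0,
                   feat_low=0.0, feat_high=1.0):
    """Clip the hyper-parameter columns and the feature columns of the
    prey matrix to their respective bounds after an update step."""
    prey = prey.copy()
    prey[:, :n_hyper] = np.clip(prey[:, :n_hyper], hyper_low, hyper_high)
    prey[:, n_hyper:] = np.clip(prey[:, n_hyper:], feat_low, feat_high)
    return prey
```

An out-of-range individual such as `[2000.0, -5.0, 1.5, -0.2]` (two hyper-parameter columns) is pulled back to `[1000.0, 0.001, 1.0, 0.0]`.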
An example of step S252 is described in detail below in connection with the marine predator algorithm. In this embodiment, after the prey matrix and the elite matrix are initialized in step S251, memory saving and elite-matrix updating (step S170) are performed. The prey matrix is then iteratively updated (step S130); after the update, the boundary values of the prey matrix are constrained, and memory saving and elite-matrix updating (step S170) are performed again. The eddy-current and fish-aggregation effect is then resolved (step S150) to obtain the latest solution, and the boundary values of the prey matrix are constrained once more. If the condition for stopping iteration is satisfied, the iteration stops and the optimal solution is obtained; otherwise, memory saving and elite-matrix updating (step S170), iterative updating of the prey matrix (step S130), and boundary constraining are repeated until the condition for stopping iteration is satisfied. In some examples, the optimal solution may be the individual with the smallest fitness in the prey matrix.
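A highly simplified skeleton of this optimization loop is sketched below. It is not the marine predator algorithm itself: the Levy/Brownian update phases and the FADs (eddy-current and fish-aggregation) step are replaced by a single random perturbation, and all names are hypothetical. Only the loop structure — evaluate, remember the best (memory saving), perturb, constrain, repeat until the stop condition — mirrors the description above:

```python
import numpy as np

def mpa_optimize(fitness_fn, n_individuals, n_dims, bounds_fn,
                 max_iter=50, seed=0):
    """Skeleton loop: evaluate the prey matrix, keep the best individual
    seen so far, perturb, constrain with bounds_fn, and stop after a
    fixed number of iterations (a simple stop-iteration condition)."""
    rng = np.random.default_rng(seed)
    prey = rng.uniform(0.0, 1.0, (n_individuals, n_dims))
    fit = np.array([fitness_fn(ind) for ind in prey])
    best, best_fit = prey[np.argmin(fit)].copy(), fit.min()
    for _ in range(max_iter):
        # stand-in for steps S130/S150: perturb, then constrain boundaries
        prey = bounds_fn(prey + rng.normal(0.0, 0.1, prey.shape))
        fit = np.array([fitness_fn(ind) for ind in prey])
        if fit.min() < best_fit:  # memory saving (step S170)
            best_fit = fit.min()
            best = prey[np.argmin(fit)].copy()
    return best, best_fit
```

The returned `best` corresponds to the least-fitness individual, from which the hyper-parameters and feature subset would be decoded.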
In some examples, as shown in fig. 5, step S250 may further include obtaining a target feature set and hyper-parameters corresponding to the trained classification model based on the optimized prey matrix (step S253).
As described above, in some examples, the hyper-parameters of the classification model may be set using the hyper-parameter values and the classification model may be trained using the data sets corresponding to the feature subsets to obtain trained classification models corresponding to individuals. That is, each individual has a corresponding trained classification model, hyper-parameter values, and feature subsets.
In some examples, the least fit individual may be selected from the optimized prey matrix. In some examples, a trained classification model (i.e., an individual classification model) corresponding to the individual with the smallest fitness may be used as the trained classification model for predicting the classification result of the electrophysiological signal. In some examples, the hyper-parameter value corresponding to the individual with the smallest fitness and the hyper-parameter item corresponding to the hyper-parameter value may be used as the hyper-parameter corresponding to the trained classification model. In some examples, the feature subset corresponding to the individual with the smallest fitness (i.e., the optimal feature subset) may be used as the target feature set.
Hereinafter, the classification method according to the present disclosure will be described in detail with reference to the drawings. The trained classification model and the target feature set involved in the classification method can be obtained by training with the above-mentioned training method. Fig. 6 is a flow chart illustrating a method of classification of electrophysiological signals based on a marine predator algorithm in accordance with examples of the present disclosure.
In some examples, as shown in fig. 6, the classification method may include acquiring an electrophysiological signal (step S310), extracting target feature data of the electrophysiological signal (step S330), and inputting the target feature data into a trained classification model to acquire a classification result (step S350).
Fig. 7 is a schematic diagram illustrating a 64-channel brain electrical signal in accordance with an example of the present disclosure.
In some examples, in step S310, an electrophysiological signal may be acquired. The electrophysiological signal may be sampled data acquired at a preset frequency over time. In some examples, the electrophysiological signals may be multi-channel. The multiple channels may be signals of multiple locations of an acquired object, such as a brain. In some examples, the sampling frequency of each channel may be the same. As an example, fig. 7 is a schematic diagram showing 64 channels of brain electrical signals.
In some examples, in step S330, target characteristic data of the electrophysiological signal may be extracted. In some examples, a plurality of feature data of the electrophysiological signal corresponding to the feature items in the target feature set may be extracted as target feature data of the electrophysiological signal based on the target feature set. The target feature set may be obtained by the training method described above. The set of target features may include a plurality of feature items. The plurality of feature items in the target feature set may be derived from the training feature items described above. In some examples, the number of feature items of the target feature set may be less than the number of feature items of the training feature items. For details, refer to the related description in the training method. In some examples, if the electrophysiological signal is multi-channel, the target characteristic data may comprise characteristic data of the multiple channels.
In some examples, the electrophysiological signals may be pre-processed before extracting the target characteristic data. In some examples, the electrophysiological signals may be pre-processed in accordance with a training method (which may also be referred to as a training process). For example, if the training method performs windowing on the signal to be trained, the electrophysiological signal may be subjected to windowing in accordance with the training method.
In some examples, in step S350, the target feature data may be input into a trained classification model to obtain a classification result. Thereby, a classification result of the electrophysiological signal can be obtained. The trained classification model may be obtained by the training method described above. For details, refer to the related description in the training method.
In some examples, if in the pre-processing, the electrophysiological signals are windowed to obtain a plurality of segments corresponding to the electrophysiological signals, the classification results may be obtained based on the classification results (which may be referred to as sub-classification results for short) corresponding to the plurality of segments. Specifically, in the windowing process, a plurality of segments corresponding to the electrophysiological signal may be acquired, target feature data corresponding to each of the plurality of segments is extracted based on a target feature set, the target feature data is input into a trained classification model to acquire a plurality of sub-classification results, and the classification results are acquired based on the plurality of sub-classification results. For example, the result with the largest number of occurrences among the plurality of sub-classification results may be used as the classification result. Thereby, a classification result of the electrophysiological signal can be obtained based on a plurality of sub-classification results corresponding to the plurality of segments.
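The "result with the largest number of occurrences" rule for combining sub-classification results is a majority vote, which can be sketched with the standard library (function name is illustrative):

```python
from collections import Counter

def vote(sub_results):
    """Majority vote: the label occurring most often among the
    per-segment sub-classification results is the final result."""
    return Counter(sub_results).most_common(1)[0][0]
```

For example, if the segments of one signal yield sub-results `[1, 0, 1, 1, 0]`, the classification result is 1.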
The classification system of electrophysiological signals based on marine predator algorithm according to the present disclosure is described in detail below with reference to the drawings. The classification system 100 may also be referred to as a processing system, a detection system, a recognition system, a discrimination system, or the like. The present disclosure relates to a classification system 100 for implementing the above-described classification method. Fig. 8 is a block diagram illustrating a classification system 100 for electrophysiological signals based on a marine predator algorithm in accordance with examples of the present disclosure. As shown in fig. 8, in some examples, classification system 100 may include an acquisition module 110, an extraction module 120, and a classification module 130.
In some examples, the acquisition module 110 may be configured to acquire electrophysiological signals. The electrophysiological signal may be sampled data acquired at a predetermined frequency over time. In some examples, the electrophysiological signals may be multi-channel. For details, refer to the related description in step S310.
In some examples, the extraction module 120 may be configured to extract target characteristic data of the electrophysiological signal. In some examples, a plurality of feature data of the electrophysiological signal corresponding to the feature items in the target feature set may be extracted as target feature data of the electrophysiological signal based on the target feature set. The target feature set may be obtained by the training method described above. The set of target features may include a plurality of feature items. The plurality of feature items in the target feature set may be derived from the training feature items described above. In some examples, the number of feature items of the target feature set may be less than the number of feature items of the training feature items. In some examples, if the electrophysiological signal is multi-channel, the target characteristic data may comprise characteristic data of the multiple channels. In some examples, the electrophysiological signals may be pre-processed before extracting the target characteristic data. In some examples, the electrophysiological signals may be pre-processed in accordance with a training method (which may also be referred to as a training process). For example, if the training method performs windowing on the signal to be trained, the electrophysiological signal may be subjected to windowing in accordance with the training method. For details, refer to the related description in step S330.
In some examples, the classification module 130 may be configured to input the target feature data into a trained classification model to obtain a classification result. Thereby, a classification result of the electrophysiological signal can be obtained. For details, refer to the related description in step S350.
FIG. 9 is a comparative schematic diagram illustrating the average fitness of various algorithms to which examples of the present disclosure relate. Fig. 10 is a comparative schematic diagram showing the number of feature items of the optimal feature subset of the various algorithms to which the disclosed examples relate. FIG. 11 is a comparative diagram illustrating classification performance of various algorithms in accordance with examples of the present disclosure.
In addition, in order to verify the effectiveness of the classification method according to the present disclosure, the method was compared with feature selection and classification methods based on search algorithms such as Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Simulated Annealing (SA), Gravitational Search Algorithm (GSA), and Grey Wolf Optimizer (GWO). Each method was run 10 times and the results averaged to evaluate these methods reliably. The parameter settings are shown in Table 1 below, where the general parameters are those set identically for every method.
Table 1 (parameter settings of the compared methods)
As shown in fig. 9, the classification method of the present disclosure achieved the best mean fitness value, 0.1996 ± 0.0171, which is significantly lower than those of the 5 compared methods (one-way ANOVA, p < 0.05).
As shown in fig. 10, the number of feature items in the optimal feature subset obtained by the present disclosure, 74.9 ± 38.5, was second only to that of GWO, which achieved 49.9 ± 10.3. Although this number is higher than that of GWO, one-way ANOVA showed p = 0.49; that is, there is no statistically significant difference between the two. The number of feature items in the optimal feature subset of the present disclosure is, however, significantly lower than those of the other 4 methods (one-way ANOVA, p < 0.05).
As shown in fig. 11, the sensitivity of the classification method of the present disclosure was the highest at 74.73 ± 3.31%; next was GA, with a sensitivity of 65.68 ± 2.04%; GWO was the lowest, with a sensitivity of 0%. In terms of specificity, GWO was the highest at 100%; next was PSO, with a specificity of 96.89 ± 9.83%; the present disclosure reached a specificity of 84.66 ± 2.42%. In terms of accuracy, the present disclosure was optimal at 79.69 ± 1.74%, significantly higher than the 5 compared methods (one-way ANOVA, p < 0.05); next was GA, with an accuracy of 69.00 ± 1.46%. In addition, GWO predicted all training samples as the negative class and did not correctly identify any positive-class sample. Therefore, the classification method of the present disclosure can effectively perform feature selection and classification on electrophysiological signals, which helps to address the problem of large inter-individual differences in the field of electrophysiological-signal pattern recognition.
The classification method and system 100 according to the present disclosure fuse a marine predator algorithm with a machine learning-based classification model so as to optimize the classification model while performing feature selection, and classify electrophysiological signals using the classification model and the target feature set obtained by the feature selection to obtain a classification result. In the marine predator algorithm, the prey matrix and the elite matrix are initialized using the hyper-parameter items of the hyper-parameters of the classification model and the training feature items; the fitness of each individual in the prey matrix is calculated based on the classification model; and the prey matrix and the elite matrix are iteratively updated based on the fitness to obtain the optimized prey matrix and elite matrix and, at the same time, the trained classification model, wherein the target feature set and the hyper-parameters corresponding to the trained classification model are obtained based on the optimized prey matrix. In this case, the unique advantages of the marine predator algorithm and of the machine learning-based classification model are combined, so that feature selection, hyper-parameter optimization, and classification by the classification model can be realized simultaneously; the classification performance for electrophysiological signals with large individual differences and numerous feature types and quantities is high, the generalization capability is strong, and feature selection and classification can be carried out effectively on electrophysiological signals.
While the present disclosure has been described in detail above with reference to the drawings and examples, it should be understood that the above description is not intended to limit the disclosure in any way. Variations and changes may be made as necessary by those skilled in the art without departing from the true spirit and scope of the disclosure, which fall within the scope of the disclosure.

Claims (10)

1. A method for classifying electrophysiological signals based on a marine predator algorithm, characterized in that it is a method for fusing a marine predator algorithm and a machine learning based classification model to optimize the hyper-parameters of the classification model while feature selection, comprising: acquiring an electrophysiological signal; extracting a plurality of feature data of the electrophysiological signal corresponding to feature items in the target feature set based on a target feature set comprising a plurality of feature items as target feature data of the electrophysiological signal; and inputting the target feature data into a trained classification model to obtain a classification result,
wherein the classification model is one of a support vector machine model based on a support vector machine and a K nearest neighbor classification model based on a K nearest neighbor classification algorithm, and the target feature set and the trained classification model are obtained by the following training method:
constructing a training sample, wherein input data of the training sample comprises a plurality of signals to be trained; extracting a plurality of feature data corresponding to each signal to be trained and a training feature item as training feature data, wherein the training feature item comprises a plurality of feature items; acquiring the target feature set and the trained classification model based on hyper-parametric terms of hyper-parameters of the classification model, the training feature terms and the training feature data and by using a marine predator algorithm,
wherein, in a marine predator algorithm, in initializing a prey matrix and an elite matrix based on the hyper-parameter items and the training feature items, the hyper-parameter items and the training feature items are in one-to-one correspondence with columns in the prey matrix, initializing columns in the prey matrix corresponding to the hyper-parameter items and columns corresponding to the training feature items with different upper and lower limits, initializing the elite matrix based on the initialized prey matrix, and in calculating fitness of each individual in the prey matrix based on the classification model, setting a hyper-parameter of the classification model based on an element value of the individual corresponding to the hyper-parameter item, selecting a feature subset based on an element value of the individual corresponding to the training feature items and subjected to binarization processing, selecting a data set corresponding to the feature subset from the training feature data based on the feature subset, training the classification model to obtain an individual classification model, and then obtaining an error rate corresponding to the individual classification model based on the individual classification model, and calculating fitness of the individual based on the error rate,
iteratively updating the prey matrix and the elite matrix based on the fitness until the condition of stopping iteration is met to obtain the optimized prey matrix and the elite matrix and simultaneously obtain the trained classification model, wherein in the process of updating the prey matrix, different boundary values are utilized to respectively constrain the element values corresponding to the hyperparameter item and the training characteristic item in the prey matrix,
and acquiring the target characteristic set and the hyper-parameters corresponding to the trained classification model based on the optimized prey matrix.
2. The classification method according to claim 1, characterized in that:
initializing columns in the prey matrix corresponding to the hyper-parameter items based on a first upper limit and a first lower limit, and initializing columns in the prey matrix corresponding to the training feature items based on a second upper limit and a second lower limit.
3. The classification method according to claim 1, characterized in that:
in the binarization processing, the element values of the individuals corresponding to the training feature items are binarized based on a preset binarization threshold value, if the element value after binarization is 1, the corresponding feature items in the training feature items are selected, and if the element value after binarization is 0, the corresponding feature items in the training feature items are discarded.
4. The classification method according to claim 1, characterized in that:
the fitness of the individual satisfies the formula:
$\mathrm{fitness}_{num} = \alpha \cdot E_{num} + (1 - \alpha) \cdot \dfrac{dim_{num}}{S}$

wherein $\mathrm{fitness}_{num}$ is the fitness of the num-th individual, $S$ is the number of the training feature items, $dim_{num}$ is the dimension of the feature subset selected by the num-th individual, $E_{num}$ represents the error rate corresponding to the classification model trained on the feature subset selected by the num-th individual, and $\alpha$ is a weight factor, wherein the error rate is an average error rate, and the average error rate is the average value of the classification error rates corresponding to multiple verifications in the cross-validation.
5. The classification method according to claim 2, characterized in that:
in updating the prey matrix, constraining boundaries of element values in the prey matrix corresponding to the hyper-parameter terms using the first upper limit and the first lower limit, and constraining boundaries of element values in the prey matrix corresponding to the training feature terms using the second upper limit and the second lower limit, wherein the boundaries constraining the prey matrix satisfy the formula:
$X_{num,dim} = \min(\max(X_{num,dim}, P_{low1}), P_{up1})$, for $dim \in A$;
$X_{num,dim} = \min(\max(X_{num,dim}, P_{low2}), P_{up2})$, for $dim \in B$,

wherein $P_{up1}$ represents the first upper limit, $P_{low1}$ represents the first lower limit, $P_{up2}$ represents the second upper limit, $P_{low2}$ represents the second lower limit, $X_{num,dim}$ represents the dim-th element of the num-th individual in the prey matrix, $A$ represents the set of elements corresponding to the hyper-parameter items in the prey matrix, and $B$ represents the set of elements corresponding to the training feature items in the prey matrix.
6. The classification method according to claim 1, characterized in that:
before the training feature data are extracted, preprocessing the signals to be trained and acquiring the training feature data based on the preprocessed signals to be trained, wherein the preprocessing comprises at least one of re-reference processing, down-sampling processing, band-pass filtering processing, average surrounding normal channel processing and windowing processing, in the windowing processing, each signal to be trained is divided into a plurality of segments, the plurality of segments are used for extracting the training feature data corresponding to each segment according to the segments, and the time or the number of sampling points of each segment is the same; before extracting the target feature data, performing the preprocessing on the electrophysiological signal, wherein in windowing processing, a plurality of segments corresponding to the electrophysiological signal are obtained, the target feature data corresponding to each of the plurality of segments is extracted based on the target feature set, the target feature data is respectively input into the trained classification model to obtain a plurality of sub-classification results, and the classification result is obtained based on the plurality of sub-classification results.
7. The classification method according to claim 1, characterized in that:
the electrophysiological signal is a multichannel electrophysiological signal, the signals to be trained are multichannel electrophysiological signals, and the electrophysiological signal is one of an electroencephalogram signal, an electrooculogram signal, an electrocardiogram signal, and an electromyogram signal.
8. The classification method according to claim 1, characterized in that:
the training feature items comprise a time-domain feature, a frequency-domain feature, a time-frequency-domain feature, and a non-linear feature, wherein the time-domain feature comprises at least one of a mean, a standard deviation, a coefficient of variation, a peak-to-peak value, a root mean square, a crest factor, a kurtosis factor, a margin factor, and the coefficients of a 4th-order AR model; the frequency-domain feature comprises at least one of a peak frequency, a 95% spectral edge frequency, a mean power frequency, and a median frequency; the time-frequency-domain feature comprises at least one of 8 wavelet packet energies, 8 wavelet packet Shannon entropies, and a wavelet packet singular entropy; and the non-linear feature comprises a sample entropy.
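A few of the time-domain features listed above, sketched for a single windowed segment (definitions of the crest factor and kurtosis factor vary across the literature; the formulas below are common choices, not necessarily the patent's exact ones):

```python
import numpy as np

def time_domain_features(x):
    """Compute a handful of the time-domain features named in claim 8
    for one windowed segment x."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "mean": np.mean(x),
        "std": np.std(x),
        "peak_to_peak": np.ptp(x),                                    # max - min
        "rms": rms,
        "crest_factor": np.max(np.abs(x)) / rms,                      # peak factor
        "kurtosis_factor": np.mean((x - np.mean(x)) ** 4) / np.std(x) ** 4,
    }

feats = time_domain_features([1.0, -1.0, 1.0, -1.0])
```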
9. The classification method according to claim 1, characterized in that:
the hyper-parameter items of the support vector machine model are a penalty factor and the parameters of a kernel function, and the kernel function is one of a linear kernel function, a polynomial kernel function, and a radial basis function.
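The three kernel functions named above can be written out as follows (a sketch in plain NumPy; `gamma`, `coef0`, and `degree` are the usual kernel parameters which, together with the penalty factor, would be the hyper-parameters the algorithm tunes):

```python
import numpy as np

def linear_kernel(x, y):
    """K(x, y) = <x, y>"""
    return np.dot(x, y)

def polynomial_kernel(x, y, gamma=1.0, coef0=1.0, degree=3):
    """K(x, y) = (gamma * <x, y> + coef0) ** degree"""
    return (gamma * np.dot(x, y) + coef0) ** degree

def rbf_kernel(x, y, gamma=0.5):
    """K(x, y) = exp(-gamma * ||x - y||^2)  (radial basis function)"""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

k_lin = linear_kernel([1.0, 2.0], [3.0, 4.0])   # 1*3 + 2*4 = 11
k_rbf = rbf_kernel([0.0, 0.0], [0.0, 0.0])      # identical inputs -> 1
```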
10. A classification system of electrophysiological signals based on an ocean predator algorithm, characterized in that the system integrates the ocean predator algorithm with a machine-learning-based classification model so as to optimize the hyper-parameters of the classification model while selecting features, and comprises an acquisition module, an extraction module, and a classification module; the acquisition module is configured to acquire an electrophysiological signal; the extraction module is configured to extract, based on a target feature set comprising a plurality of feature items, a plurality of feature data of the electrophysiological signal corresponding to the feature items in the target feature set, as target feature data of the electrophysiological signal; and the classification module is configured to input the target feature data into a trained classification model to obtain a classification result,
wherein the classification model is one of a support vector machine model based on a support vector machine and a K nearest neighbor classification model based on a K nearest neighbor classification algorithm, and the target feature set and the trained classification model are obtained by the following training method:
constructing a training sample, wherein input data of the training sample comprise a plurality of signals to be trained; extracting, as training feature data, a plurality of feature data corresponding to each signal to be trained and to the training feature items, wherein the training feature items comprise a plurality of feature items; and obtaining the target feature set and the trained classification model using a marine predator algorithm based on the hyper-parameter items of the hyper-parameters of the classification model, the training feature items, and the training feature data,
wherein, in the marine predator algorithm, when a prey matrix and an elite matrix are initialized based on the hyper-parameter items and the training feature items, the hyper-parameter items and the training feature items correspond one-to-one with columns of the prey matrix, the columns corresponding to the hyper-parameter items and the columns corresponding to the training feature items are initialized with different upper and lower limits, and the elite matrix is initialized from the initialized prey matrix; when the fitness of each individual in the prey matrix is calculated based on the classification model, the hyper-parameters of the classification model are set based on the element values of the individual corresponding to the hyper-parameter items, a feature subset is selected based on the binarized element values of the individual corresponding to the training feature items, a data set corresponding to the feature subset is selected from the training feature data to train the classification model and obtain an individual classification model, the error rate corresponding to the individual classification model is then obtained, and the fitness of the individual is calculated based on the error rate,
iteratively updating the prey matrix and the elite matrix based on the fitness until a stopping condition is met, so as to obtain the optimized prey matrix and elite matrix and, at the same time, the trained classification model, wherein, in updating the prey matrix, different boundary values are used to respectively constrain the element values corresponding to the hyper-parameter items and to the training feature items,
and acquiring the target feature set and the hyper-parameters corresponding to the trained classification model based on the optimized prey matrix.
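The fitness step of the training method can be sketched as follows (illustrative Python, not the claim language; the 0.5 binarization threshold, the weighting between error rate and selected-feature ratio, and all names are assumptions — the claim only fixes that fitness is computed from the error rate):

```python
import numpy as np

def select_features(individual, feature_cols, threshold=0.5):
    """Binarize the element values on the training-feature columns of one
    individual and return the indices (within feature_cols) of selected features."""
    mask = np.asarray(individual)[feature_cols] > threshold
    return np.flatnonzero(mask)

def fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Lower is better: a common wrapper-selection fitness mixing the
    classifier's error rate with the fraction of features kept."""
    return alpha * error_rate + (1 - alpha) * n_selected / n_total

# Example individual: columns 0-1 encode hyper-parameters, columns 2-5 encode
# feature selection; features with value > 0.5 are kept.
ind = [10.0, 0.1, 0.9, 0.2, 0.7, 0.4]
sel = select_features(ind, feature_cols=[2, 3, 4, 5])
f = fitness(error_rate=0.1, n_selected=len(sel), n_total=4)
```

In the full loop, each individual's selected columns and hyper-parameter values would be used to train one classifier, whose validation error rate feeds this fitness before the prey and elite matrices are updated.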
CN202111153139.6A 2021-09-29 2021-09-29 Classification method and classification system of electrophysiological signals based on ocean predator algorithm Active CN113887397B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111153139.6A CN113887397B (en) 2021-09-29 2021-09-29 Classification method and classification system of electrophysiological signals based on ocean predator algorithm


Publications (2)

Publication Number Publication Date
CN113887397A CN113887397A (en) 2022-01-04
CN113887397B true CN113887397B (en) 2022-10-14

Family

ID=79008171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111153139.6A Active CN113887397B (en) 2021-09-29 2021-09-29 Classification method and classification system of electrophysiological signals based on ocean predator algorithm

Country Status (1)

Country Link
CN (1) CN113887397B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114533088A (en) * 2022-02-09 2022-05-27 华南师范大学 Multi-modal brain signal classification method and device, electronic equipment and storage medium
CN115192043B (en) * 2022-07-15 2023-03-31 中山大学中山眼科中心 Training method and training device for classification model for predicting visual fatigue predictability

Citations (4)

Publication number Priority date Publication date Assignee Title
CN110991364A (en) * 2019-12-09 2020-04-10 四川大学 Electroencephalogram signal classification method and system
WO2020136569A1 (en) * 2018-12-26 2020-07-02 Analytics For Life Inc. Method and system to characterize disease using parametric features of a volumetric object and machine learning
CN112182980A (en) * 2020-10-23 2021-01-05 河海大学 Hobbing parameter low-carbon solving method driven by ocean predator algorithm
CN112597986A (en) * 2021-03-05 2021-04-02 腾讯科技(深圳)有限公司 Physiological electric signal classification processing method and device, computer equipment and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN111476158B (en) * 2020-04-07 2020-12-04 金陵科技学院 Multi-channel physiological signal somatosensory gesture recognition method based on PSO-PCA-SVM


Non-Patent Citations (1)

Title
Research on semantic segmentation of medical images with PSPNet optimized by an improved marine predators algorithm; Peng Xiaoxu; China Master's Theses Full-text Database (Medicine and Health Sciences); 2021-08-15 (No. 08); E076-1 *


Similar Documents

Publication Publication Date Title
CN109389059B (en) P300 detection method based on CNN-LSTM network
Lavner et al. Baby cry detection in domestic environment using deep learning
CN113887397B (en) Classification method and classification system of electrophysiological signals based on ocean predator algorithm
CN110680313B (en) Epileptic period classification method based on pulse group intelligent algorithm and combined with STFT-PSD and PCA
Li et al. Application of MODWT and log-normal distribution model for automatic epilepsy identification
Al-Assaf Surface myoelectric signal analysis: dynamic approaches for change detection and classification
CN115804602A (en) Electroencephalogram emotion signal detection method, equipment and medium based on attention mechanism and with multi-channel feature fusion
KR20170064960A (en) Disease diagnosis apparatus and method using a wave signal
Oppong et al. A novel computer vision model for medicinal plant identification using log-gabor filters and deep learning algorithms
Kumar et al. Comparison of Machine learning models for Parkinson’s Disease prediction
Al Bashit et al. A mel-filterbank and MFCC-based neural network approach to train the Houston Toad call detection system design
Jamal et al. Cloud-Based Human Emotion Classification Model from EEG Signals
Sinha et al. HSCAD: Heart sound classification for accurate diagnosis using machine learning and MATLAB
Gurve et al. Deep learning of EEG time–frequency representations for identifying eye states
Behnam et al. Power complexity feature-based seizure prediction using DNN and firefly-BPNN optimization algorithm
Wang et al. A shallow convolutional neural network for classifying MI-EEG
Angayarkanni Predictive analytics of chronic kidney disease using machine learning algorithm
CN115631371A (en) Extraction method of electroencephalogram signal core network
Huynh A Survey of Machine Learning algorithms in EEG
Huijben et al. Som-cpc: Unsupervised contrastive learning with self-organizing maps for structured representations of high-rate time series
Neili et al. Gammatonegram based pulmonary pathologies classification using convolutional neural networks
Prasad et al. A two-step framework for Parkinson’s disease classification: Using multiple one-way ANOVA on speech features and decision trees
Fabietti et al. Artefact Detection in Chronically Recorded Local Field Potentials: An Explainable Machine Learning-based Approach
Parthiban et al. EfficientNet with Optimal Wavelet Neural Network for DR Detection and Grading
Prabhakar et al. Eigen vector method with swarm and non swarm intelligence techniques for epileptic seizure classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant