CN107450730B - Low-speed eye movement identification method and system based on convolution mixed model - Google Patents


Info

Publication number
CN107450730B
CN107450730B (application CN201710695419.7A)
Authority
CN
China
Prior art keywords
eye movement
independent
components
source
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710695419.7A
Other languages
Chinese (zh)
Other versions
CN107450730A (en
Inventor
吕钊
张贝贝
张超
吴小培
张磊
高湘萍
郭晓静
卫兵
Current Assignee
Anhui University
Original Assignee
Anhui University
Priority date
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN201710695419.7A priority Critical patent/CN107450730B/en
Publication of CN107450730A publication Critical patent/CN107450730A/en
Application granted granted Critical
Publication of CN107450730B publication Critical patent/CN107450730B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 — Eye tracking input arrangements
    • G06F 2218/00 — Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02 — Preprocessing
    • G06F 2218/22 — Source localisation; inverse modelling

Abstract

The invention discloses a slow eye movement identification method and system based on a convolution mixture model, belonging to the technical field of electrooculography. The method comprises: performing blind source separation on the eye movement data at each frequency point with a complex-valued ICA algorithm to obtain the frequency-domain independent component of each independent source signal at the corresponding frequency point; applying scale compensation to the independent components at each frequency point to restore their true proportional components in the observed components; sorting and adjusting the compensated independent components with a constrained DOA algorithm; applying the short-time inverse Fourier transform to the scale-compensated and sorted independent components at each frequency point to obtain complete time-domain signals of the multi-channel independent sources; and performing wavelet decomposition on these complete time signals, comparing the decomposition results against the slow-eye-movement criteria, and identifying the time signals consistent with the slow-eye-movement characteristics as slow eye movement. The invention performs wavelet analysis on the multi-channel EOG signal in the time domain and, being free of interference from other source signals, can quickly extract slow eye movements from the EOG signal.

Description

Low-speed eye movement identification method and system based on convolution mixed model
Technical Field
The invention relates to the technical field of electrooculogram, in particular to a slow eye movement identification method and system based on a convolution mixed model.
Background
The visual system is the most important channel through which humans obtain external information. Psychologists began attending to eye movement characteristics and the psychological significance of their regularities early in experimental psychology, and using eye movement technology to study human information-processing mechanisms under various conditions has become a research hotspot of modern psychology. Eye movement characteristics, i.e. movements of the eyeball, are closely related to internal information-processing mechanisms; they are under both exogenous and endogenous control, and in many cases eye movements are guided by tasks or goals.
Electro-oculography (EOG) is a low-cost eye movement signal measurement technique. Compared with traditional video-based means, it is not only more accurate, but its measurement equipment is also lightweight, convenient for long-duration recording, and easier to realize as a wearable design. EOG therefore has broad application prospects for eye movement signal acquisition.
According to their characteristics, eye movement signals are mainly classified into three types: fixations, saccades, and smooth-pursuit eye movements. Fixation is often accompanied by subtle involuntary eye movements in three forms: spontaneous high-frequency micro-tremor, slow eye movement, and microsaccades. Here, slow eye movement refers to the eye movement that occurs during the transition from wakefulness to sleep. During EOG data acquisition, subjects tire easily, so slow eye movement is present in the acquired eye movement data.
Slow eye movements carry a large amount of useful information and are widely used in fields such as traffic psychology and clinical medicine. For example, a driver's fatigue level can be detected through slow eye movement, and some complex clinical cases can be studied and treated with its help. In practical applications, however, slow eye movements are usually interleaved with other signals and hard to extract on their own. At present, a linear regression method is used to identify slow eye movements, mainly via the ratio between the number of detected slow eye movements and the number found by visual inspection. However, this index provides no clear characterization of algorithm performance, the recognition results are unsatisfactory, and practical use is difficult to achieve.
Disclosure of Invention
The invention aims to provide a slow eye movement identification method and system based on a convolution mixed model so as to improve the accuracy of slow eye movement identification.
To achieve the above object, in a first aspect, the present invention provides a slow eye movement recognition method based on a convolution mixture model, including:
S1, in the frequency domain, perform blind source separation on the eye movement data at each frequency point using a complex-valued ICA algorithm to obtain the frequency-domain independent components of each independent source signal at the corresponding frequency point;
S2, apply scale compensation to the independent components at each frequency point, restoring the independent components' true proportional components in the observed components;
S3, sort and adjust the compensated independent components using a constrained DOA algorithm, so that the independent sources at each frequency point are arranged in ascending order of direction angle;
S4, apply the short-time inverse Fourier transform to the scale-compensated and sorted independent components at each frequency point to obtain complete time-domain signals of the multi-channel independent sources;
S5, perform wavelet decomposition on the multi-channel independent sources in the time domain to obtain the wavelet coefficients at each level;
and S6, compare the wavelet coefficients at each level against the slow-eye-movement criteria, and identify those consistent with the slow-eye-movement characteristics as slow eye movement.
In step S5, the mother function of the wavelet decomposition used is db4, and the number of layers of the wavelet decomposition is ten.
Wherein the slow eye movement is characterized by: the frequency of the eye movement signal is lower than 1Hz, the initial movement speed of the eye movement signal is approximately 0, and no artifact signal appears in the EOG signal.
Wherein, the step S2 specifically includes:
obtaining the mixing matrix at the corresponding frequency point from the separation matrix at each frequency point in the complex-valued ICA algorithm, the separation matrix and the mixing matrix being inverses of each other;
and compensating the independent components at each frequency point with the mixing-matrix coefficients to obtain the compensated independent components at each frequency point.
Wherein, the step S3 specifically includes:
a. initialize an angle for each independent source;
b. apply the Root-MUSIC algorithm to the different rows at each frequency point to obtain an estimate of each source's direction, the rows of the separation matrix corresponding to different independent sources;
c. set the proximity measure between each independent source's direction angle and its initialization angle to ε(y, θ), and judge during the iteration whether each independent source's angle and its initialization angle are the same;
d. if they are the same, execute step e; if they are not, execute step f;
e. set ε(y_j, θ_j) to 0, and use the direction-angle matrix T to calculate the adjustment matrix Q;
f. set ε(y_j, θ_j) to 1, and return to the iteration to recalculate the separation matrix W.
Before step S1, the method further includes:
acquiring multi-channel EOG data to obtain eye movement data in a time domain;
performing band-pass filtering and mean value removing processing on the eye movement data in the time domain to obtain processed eye movement data;
and performing short-time Fourier transform on the processed eye movement data, and transforming the processed eye movement data from a time domain to a frequency domain to obtain the eye movement data on the frequency domain.
In a second aspect, the present invention provides a slow eye movement recognition system based on a convolution mixture model, comprising: the system comprises a blind source separation module, a scale compensation module, a sorting module, a recovery module, a wavelet decomposition module and a slow eye movement identification module which are connected in sequence;
the blind source separation module is used for performing blind source separation on the eye movement data of each frequency point on a frequency domain by adopting a complex value ICA algorithm to obtain frequency domain independent components of each independent source signal on the corresponding frequency point and transmitting the frequency domain independent components to the scale compensation module;
the scale compensation module is used for carrying out scale compensation on the independent components on each frequency point, restoring the real proportion components of the independent components in the observed components and transmitting the compensated independent components to the sequencing module;
the sorting module is used for sorting and adjusting the compensated independent components by adopting a constrained DOA algorithm, so that the independent sources on each frequency point are arranged from small to large according to the direction angle;
the recovery module is used for performing the short-time inverse Fourier transform on the scale-compensated and sorted independent components at each frequency point to obtain complete time-domain signals of the multi-channel independent sources and transmitting them to the wavelet decomposition module;
the wavelet decomposition module is used for performing wavelet decomposition on a complete time signal of a multi-channel independent source in a time domain to obtain wavelet coefficients of all levels and transmitting a decomposition result to the slow eye movement identification module;
the slow eye movement identification module is used for comparing and analyzing the wavelet coefficients of all levels with the judgment standard of the slow eye movement, and identifying the wavelet coefficients which are consistent with the characteristics of the slow eye movement as the slow eye movement.
Compared with the prior art, the invention has the following technical effects. A complex-valued ICA algorithm performs blind source separation on the frequency-domain observations; restoring the independent components' true proportional components in the observations solves ICA's inherent scale indeterminacy, while the constrained DOA algorithm resolves ICA's inherent ordering ambiguity. Each independent source is separated and converted back to the time domain; because the time-domain signals after the inverse Fourier transform do not interfere with one another, wavelet analysis is performed on the multi-channel EOG signal in the time domain and slow-eye-movement analysis is applied to each level of the decomposed wavelet coefficients. Free of interference from other source signals, the method achieves high accuracy with a small amount of computation and can quickly extract slow eye movements from EOG signals.
Drawings
The following detailed description of embodiments of the invention refers to the accompanying drawings in which:
FIG. 1 is a schematic diagram of the distribution of electrodes on the face of a subject during the acquisition of EOG signals in the present invention;
FIG. 2 is a schematic flow chart of a slow eye movement recognition method based on a convolution mixture model according to the present invention;
FIG. 3 is a flow chart of a robust glance recognition algorithm for multi-channel EOG signals in accordance with the present invention;
FIG. 4 is a basic schematic diagram of Blind Source Separation (BSS) in the present invention;
FIG. 5 is a diagram of a time-frequency domain waveform of six adjacent frequency points according to the present invention;
FIG. 6 is a graph of EOG waveforms before and after separation by a convolutional ICA in accordance with the present invention;
FIG. 7 is a graph comparing the separation results of the linear ICA model and the convolution ICA model in the present invention;
FIG. 8 is a graph of average recognition rates for various methods of the present invention;
FIG. 9 is a graph showing the results of a slow eye movement test according to the present invention;
fig. 10 is a schematic structural diagram of a slow eye movement recognition system based on a convolution mixture model according to the present invention.
The present invention will be further described with reference to the following detailed description and accompanying drawings.
Detailed Description
To further illustrate the features of the present invention, refer to the following detailed description of the invention and the accompanying drawings. The drawings are for reference and illustration purposes only and are not intended to limit the scope of the present disclosure.
Firstly, it should be noted that, before identifying the EOG signal, the process of acquiring the EOG signal in the present invention is as follows:
as shown in fig. 1, the EOG signal of the subject is collected by using electrodes, the collection of the eye electrical signal uses Ag/AgCl electrodes, and 9 electrodes are used in the current collection in order to obtain the eye movement information and more spatial position information of the subject in four directions, namely, up, down, left and right, wherein the electrode V1 and the electrode V2 are placed at 1.5cm above and 1.5cm below the eyeball at the left side (or right side) of the subject for collecting the vertical EOG signal; electrode H1 and electrode H2 were placed 1.5cm left and 1.5cm right of the subject's left and right eyes, respectively, to acquire horizontal EOG signals; the electrode Fp1 and the electrode Fp2 are arranged at the forehead position to enhance the spatial information; the reference electrodes C1 and C2 are respectively arranged at the left and right breast bulges, and the grounding electrode D is arranged at the center of the vertex.
During the experimental acquisition, the subject sits facing a screen. A "prepare" cue appears on the screen together with a "beep" alarm sound; 1 second later the subject sees a red arrow prompt (pointing up, down, left, or right) that stays on the screen for 6 seconds. During this period the subject is required, upon seeing the arrow, to rotate the eyes in the indicated direction and then rotate back to the central fixation point, without blinking. Afterwards there is a brief 2-second rest during which the subject may blink and relax.
As shown in fig. 2 to 3, the present invention discloses a slow eye movement recognition method based on a convolution mixture model, which includes the following steps S1 to S6:
S1, in the frequency domain, perform blind source separation on the eye movement data at each frequency point using a complex-valued ICA algorithm to obtain the frequency-domain independent components of each independent source signal at the corresponding frequency point;
It should be noted that in this embodiment the acquired multi-channel EOG data are band-pass filtered and mean-removed in the time domain, the pass band of the band-pass filter being 0.01 Hz to 8 Hz; a sliding window with a window length of 256 samples and a window shift of 128 samples is then used to apply the short-time Fourier transform (STFT) to the processed eye movement data, transforming the time-domain eye movement data into frequency-domain eye movement data.
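The preprocessing pipeline just described might be sketched as follows. This is an illustrative fragment, not the patent's implementation: the sampling rate and the filter order are assumptions not stated in this passage, while the 0.01–8 Hz pass band and the 256/128 window parameters come from the text.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, stft

def preprocess_eog(x, fs=256.0):
    """Band-pass 0.01-8 Hz, remove the mean, then STFT with a
    256-sample window and 128-sample shift (50% overlap)."""
    sos = butter(2, [0.01, 8.0], btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, x - np.mean(x))          # zero-phase band-pass
    # X: frequency bins x windows, one complex spectrum per frame
    f, t, X = stft(x, fs=fs, nperseg=256, noverlap=128)
    return f, t, X
```

Each column of `X` is then the frequency-domain observation for one window, ready for per-frequency-point blind source separation.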
In this embodiment, band-pass filtering and mean removal of the time-domain eye movement signals removes interference including baseline drift, electromyogram (EMG), electrocardiogram (ECG), and electroencephalogram (EEG) signals, reducing the effect of these noise sources on the original multi-channel eye movement data and improving recognition accuracy.
As shown in fig. 4, the process of performing blind source separation on the eye movement data in the frequency domain specifically includes:
1) From the multi-channel observation data X_i (i = 1, 2, …, N), calculate the covariance matrix R_x of the observed data:

R_x = E{(X − m_x)(X − m_x)^T},

where X is the observed data, m_x is the mean of the observed data, (·)^T denotes transposition, and E{·} denotes expectation. After obtaining R_x, the observation data must be whitened to orthogonalize the mixing matrix; the whitening matrix V is computed as follows:

decompose the covariance matrix as R_x = E D E^T, where E consists of the orthonormal eigenvectors of R_x and D = diag(λ_1, λ_2, …, λ_N) is the diagonal matrix of the corresponding eigenvalues;

the resulting whitening matrix is then V = D^(−1/2) E^T.

2) Whiten the observation data via Z(t) = V X(t), and compute the fourth-order cumulants of the whitened data; via N = {λ_r ; 1 ≤ r ≤ M}, determine the number of significant features, which does not exceed M, where λ are the eigenvalues, N_r is the dimension of the observation data, M is the number of sources, and r is an integer not exceeding the number of sources;

3) solve the equation N = {λ_r ; 1 ≤ r ≤ M} with a unitary matrix U, and calculate the mixing matrix via A = W × U;

4) since the mixing matrix A and the separation matrix W are inverses of each other, W = A^(−1), and blind source separation can be performed on the observation data at each frequency point using the separation matrix W.
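The whitening stage of the steps above can be illustrated in isolation. The sketch below covers only the covariance/eigendecomposition/whitening part (R_x = E D E^T, V = D^(−1/2) E^T, Z = V X); the fourth-order-cumulant and unitary-rotation steps of the complex-valued ICA are omitted, and real-valued data are used for simplicity.

```python
import numpy as np

def whiten(X):
    """Compute R_x = E{(X - m_x)(X - m_x)^T}, factor R_x = E D E^T,
    and return Z = V X with whitening matrix V = D^{-1/2} E^T."""
    Xc = X - X.mean(axis=1, keepdims=True)   # remove the mean m_x
    R = np.cov(Xc)                           # covariance matrix R_x
    d, E = np.linalg.eigh(R)                 # eigenvalues, orthonormal eigenvectors
    V = np.diag(d ** -0.5) @ E.T             # V = D^{-1/2} E^T
    return V @ Xc, V
```

After whitening, the covariance of Z is the identity matrix, which is exactly the "orthogonalization of the mixing matrix" the text refers to.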
S2, carrying out scale compensation on the independent components on each frequency point, and restoring the real proportional components of the independent components in the observed components;
specifically, the specific process of performing the scale compensation on the independent components is as follows:
obtaining the mixing matrix at the corresponding frequency point from the separation matrix at each frequency point in the complex-valued ICA algorithm, the separation matrix and the mixing matrix being inverses of each other;
and compensating the independent components at each frequency point with the mixing-matrix coefficients to obtain the scale-compensated independent components at each frequency point.
Specifically, taking the two-dimensional ICA problem as an example, the observed signal is defined as x1、x2The source is s1、s2Then the observed signal can be expressed as:
x1=a11s1+a12s2=v11+v12
x2=a21s1+a22s2=v21+v22
wherein v isij=aijsijRepresenting independent sources sjIn the observation of signal xiThe true component in (1), i.e. the independent source sjIn the observation of signal xiDue to v11、v21All from independent sources s1And v is11、v21And s1But only in amplitude. Likewise, v12、v22From an independent source s2So too does the relationship of (c). Therefore, if W (f)k) For a certain frequency point separation matrix to be estimated, the mixing moment A (f) at the frequency point can be obtainedk)=W-1(fk). Then, the obtained mixed matrix coefficient can be used for compensating the independent component of each frequency point, namely:
Figure BDA0001379040170000071
wherein, Yj(fkτ) represents the independent component of the j-th channel separated before the scale compensation, vij(fk,τ)=Aij(fk)Yij(fkAnd tau) represents the real component of the jth independent component in the ith observed signal after scale compensation.According to the analysis, the formula is used for a certain frequency point fkAfter the independent components are subjected to scale compensation, N compensated outputs are generated by one frequency domain independent component, and the N compensation structures are subjected to subsequent processing such as elimination of sequencing ambiguity, combination of different frequency points, inverse transformation and the like to obtain N pure signals from the same independent source.
In practical applications, one of the N clean signals from the same independent source may be selected as the output, or the N signals from the same independent source may be averaged and output.
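Under the notation of the two-dimensional example, the compensation v_ij = A_ij·Y_j can be sketched as follows, assuming the separation matrix W at one frequency bin is already available (an illustrative fragment, not the patent's code):

```python
import numpy as np

def scale_compensate(W, Y):
    """For one frequency bin: A = W^{-1}, and the compensated output
    v[i, j] = A[i, j] * Y[j] is source j's true contribution to
    observation channel i."""
    A = np.linalg.inv(W)                 # mixing matrix A(f_k) = W^{-1}(f_k)
    N = W.shape[0]
    # v[i, j, :] = contribution of source j in observation channel i
    return np.array([[A[i, j] * Y[j] for j in range(N)] for i in range(N)])
```

Summing v over j reconstructs each observation channel, mirroring x_i = v_i1 + v_i2 in the example above.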
S3, processing the compensated independent components by adopting a constrained DOA algorithm, so that the independent sources on each frequency point are arranged from small to large according to the direction angle;
specifically, the process of sorting the compensated independent components is as follows:
a. initialize an angle for each independent source;
b. apply the Root-MUSIC algorithm to the different rows at each frequency point to obtain an estimate of each source's direction, the rows of the separation matrix corresponding to different independent sources;
c. set the proximity measure between each independent source's direction angle and its initialization angle to ε(y, θ), and judge during the iteration whether each independent source's angle and its initialization angle are the same;
d. if they are the same, execute step e; if they are not, execute step f;
e. set ε(y_j, θ_j) to 0, and use the direction-angle matrix T to calculate the adjustment matrix Q;
f. set ε(y_j, θ_j) to 1, and return to the iteration to recalculate the separation matrix W.
Note that an angle θ_j is initialized for each independent source s_j. Because the source positions are uncertain, the angle of the i-th independent source is set smaller than that of the (i+1)-th source, with the initialized angles r(θ) serving as the constraint. On the premise that separation succeeds at each frequency point f, the rows of the separation matrix W are assumed to correspond to different independent sources, and applying the Root-MUSIC algorithm to the different rows at each frequency point yields an estimate of each source's direction.
To assess the constrained DOA algorithm's ability to detect and correct ordering errors, the proximity measure between the angle obtained at each frequency point and the initialized angle is set to ε(y_j, θ_j), where y_j is the estimate of the corresponding source direction. The two angles are compared during the iteration; if they are not the same, i.e. ε(y_j, θ_j) = 1, the iteration is re-entered to recalculate the separation matrix W.
If the two angles are the same, i.e. ε(y_j, θ_j) = 0, the direction-angle matrix T is used to calculate the adjustment matrix Q.
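As a rough illustration of steps a–f, the hypothetical fragment below checks whether per-bin direction-angle estimates already satisfy the ascending-order constraint and re-sorts the separated components when they do not; the Root-MUSIC estimation itself and the recomputation of W are omitted.

```python
import numpy as np

def enforce_angle_order(Y, theta_hat):
    """Toy version of steps a-f: epsilon = 0 when the estimated angles
    theta_hat already match the initialized ascending order, else 1,
    in which case the components (rows of Y) are re-sorted by angle."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    epsilon = 0 if np.all(np.diff(theta_hat) >= 0) else 1
    if epsilon == 0:
        return Y, theta_hat               # order already correct
    order = np.argsort(theta_hat)         # ascending direction angles
    return Y[order], theta_hat[order]
```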
The independent sources at each frequency point f are arranged in ascending order of angle, and the direction-angle matrix T is set accordingly; in T, the diagonal entries record the order of the angle arrangement. After amplitude compensation of the blind-source-separated signal, the estimate y of the source signal S can be obtained:

y = P·Λ·S = P·V,

where P is the permutation matrix, Λ is a diagonal matrix, and S is the source signal. For convenience of analysis, substituting X = A·S into y = W·X gives y = W·A·S = D·S, where W is the separation matrix, X the observation data, A the mixing matrix, and S the source signal. From the uncertainty of ICA, each row and each column of the matrix D must contain exactly one non-zero element, so the matrix can be written as D = P·Λ, where P is a permutation matrix and Λ a diagonal matrix; P and Λ introduce the ordering and amplitude uncertainty of the ICA output, respectively. This embodiment therefore sets an adjustment matrix Q to adjust the matrix P, thereby resolving the ordering uncertainty of ICA.
Further, the adjustment matrix Q is calculated from the direction-angle matrix T as:

Q = T·P^(−1).

If the permutation matrix P is the same as the direction-angle matrix T, the independent sources at each frequency point are already arranged in ascending order of angle, and no further adjustment is required.
If the permutation matrix P differs from the direction-angle matrix T, left-multiplying P by the adjustment matrix Q yields the new permutation matrix:

P' = Q·P = T·P^(−1)·P = T,

after which the direction angles of the independent sources at the frequency point are obtained again via y = P'·Λ·S. The independent sources at each frequency point f are now arranged in ascending order of direction angle, resolving the ordering ambiguity of ICA.
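The adjustment Q = T·P^(−1) can be exercised on a toy permutation; this is an illustrative sketch in which T is given directly, whereas in the method proper T comes from the sorted direction angles.

```python
import numpy as np

def adjust_permutation(P, T):
    """If the estimated permutation matrix P differs from the target
    ordering T, left-multiply by Q = T P^{-1} so that P' = Q P = T."""
    if np.array_equal(P, T):
        return P                      # already in ascending-angle order
    Q = T @ np.linalg.inv(P)          # adjustment matrix Q = T P^{-1}
    return Q @ P                      # P' = T
```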
It should be noted that the ordering uncertainty of the ICA output (the permutation ambiguity) is an inherent limitation of the ICA algorithm. Time-frequency-domain blind deconvolution involves blind separation at many frequency points across different windows; if the ICA separation results at the individual frequency points are not matched so that the frequency-domain independent components belonging to the same source are combined together, sub-band signals from different sources are erroneously spliced together, which greatly degrades the final separation, scrambles the signals recovered to the time domain, and in turn harms the EOG recognition result. After the amplitude-compensated independent components at each frequency point are re-ordered by the constrained DOA algorithm, the independent sources at each frequency point f are arranged in ascending order of direction angle; this effectively resolves the ordering ambiguity at each frequency point, improves the quality of the blind source separation, and helps raise the recognition rate.
S4, carrying out short-time Fourier inverse transformation processing on the independent components of the frequency points after scale compensation and sequencing to obtain complete time signals of the multi-channel independent source on the time domain;
it should be noted that the method is used for performing short-time inverse fourier transform under the condition that the component arrangement corresponding to different sources of each frequency point is ensured to be correct and the amplitude is recovered, and finally, the obtained time domain signals are re-intercepted and combined to obtain the estimation of the source signals.
The short-time inverse Fourier transform process comprises the following steps:
during calculation, the obtained time-frequency matrix is subjected to inverse operation according to columns to obtain time signals at different window positions, and then the time signals are spliced according to the sequence of the small window to the large window to obtain a complete time signal of a source.
In the above operation process, the time signals in the adjacent windows will be partially overlapped, and the length of the overlap is defined when the original observation signal is windowed and framed at the beginning and is half of the frame length. The processing of overlapping data in adjacent windows is typically by adding the second half of the previous window length to the first half of the next window length and then dividing by 2 for averaging.
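The splice-and-average rule above (add the overlapping halves of adjacent windows and divide by 2) amounts to the following sketch; with a hop of half the frame length this generalizes naturally to any uniform overlap.

```python
import numpy as np

def overlap_average(frames, hop):
    """Splice fixed-length frames spaced `hop` samples apart; where
    adjacent frames overlap, the overlapping samples are averaged."""
    frame_len = len(frames[0])
    out = np.zeros(hop * (len(frames) - 1) + frame_len)
    count = np.zeros_like(out)            # how many frames cover each sample
    for k, frame in enumerate(frames):
        out[k * hop:k * hop + frame_len] += frame
        count[k * hop:k * hop + frame_len] += 1.0
    return out / count
```

With hop = frame_len // 2 this is exactly "second half of the previous window plus first half of the next, divided by 2".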
S5, performing wavelet decomposition on the complete time signal of the multi-channel independent source in the time domain to obtain wavelet coefficients of all levels;
specifically, the formula of the wavelet decomposition is as follows:
[c, l] = wavedec(Y, N, wname)
where c is the wavelet decomposition vector, l is the bookkeeping vector recording the number of coefficients at each level from coarse to fine, Y is the variable to be decomposed, N is the number of decomposition levels, and wname is the mother wavelet. In this embodiment, the multi-channel EOG data are decomposed into ten levels, and the mother wavelet is db4.
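The [c, l] output structure of wavedec can be illustrated with a minimal multilevel DWT in Python. For brevity this sketch uses a Haar wavelet rather than the db4 wavelet of the embodiment, and the function names are illustrative:

```python
import numpy as np

def haar_dwt_step(x):
    """One level of a Haar DWT: approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]              # drop a trailing odd sample
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wavedec_like(y, n_levels):
    """Mimic the shape of MATLAB's [c, l] = wavedec(Y, N, wname):
    c concatenates the level-N approximation followed by the detail
    coefficients from level N down to level 1, and l records the
    length of each block plus the original signal length."""
    details = []
    a = np.asarray(y, dtype=float)
    for _ in range(n_levels):
        a, d = haar_dwt_step(a)
        details.append(d)
    c = np.concatenate([a] + details[::-1])
    l = [len(a)] + [len(d) for d in details[::-1]] + [len(y)]
    return c, l
```

For a constant signal all detail blocks come out zero, while slow drifts concentrate in the deep (coarse) levels — which is precisely why the tenth-level coefficients are where a below-1 Hz slow eye movement shows up.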
And S6, comparing and analyzing the wavelet coefficients of each level with the judgment standard of the slow eye movement, and identifying the wavelet coefficients which are consistent with the characteristics of the slow eye movement as the slow eye movement.
Specifically, slow eye movement is characterized by: (1) a slow sinusoidal excursion in the signal lasting more than 1 second, i.e. a signal frequency below 1 Hz; (2) an initial movement speed close to zero — in this embodiment, an initial speed below 0.000001 is taken as close to zero; (3) no artifact signals such as blinks, EEG or EMG appearing in the EOG waveform. When an eye movement signal satisfies all three conditions simultaneously, a slow eye movement is considered to have occurred. The comparison and decision process of this embodiment is illustrated with reference to Fig. 9; Fig. 9B-(a) shows a section of slow eye movement extracted from the time-domain signal:
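The three-way check can be sketched as a small function. This is an illustrative approximation, not the patent's implementation: the dominant-frequency test, the first-difference velocity estimate, and the amplitude-based artifact screen are all stand-ins for the actual criteria, and the velocity threshold is exposed as a parameter (the embodiment's value is 0.000001):

```python
import numpy as np

def is_slow_eye_movement(x, fs, vel_eps=1e-6):
    """Check the three slow-eye-movement criteria on a waveform x
    sampled at fs Hz: frequency below 1 Hz, initial movement speed
    near zero, and no large artifact excursions."""
    x = np.asarray(x, dtype=float)
    # (1) dominant frequency of the segment must be below 1 Hz
    spectrum = np.abs(np.fft.rfft(x - np.mean(x)))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum)]
    # (2) initial movement speed close to zero (first difference times fs)
    init_velocity = abs(x[1] - x[0]) * fs
    # (3) crude artifact screen: no sample far outside the signal's own
    # amplitude range (a stand-in for blink/EMG detection)
    clean = np.max(np.abs(x - np.mean(x))) < 5 * np.std(x)
    return (dominant < 1.0) and (init_velocity < vel_eps) and clean
```

A 0.5 Hz cosine segment (which starts with zero slope) passes all three checks under a relaxed velocity threshold, while a 5 Hz sinusoid fails the frequency and initial-velocity criteria.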
The first row of Fig. 9B-(a) is a segment of waveform cut from the EOG signal returned to the time domain after blind separation in the frequency domain; rows two to six are the wavelet coefficients obtained after wavelet decomposition, starting from the sixth-level wavelet, then the seventh level, and so on up to the tenth level (as marked in the figure). As the graph shows, when Fig. 9B-(a) is decomposed down to the tenth-level wavelet coefficients, the signal frequency is already below 1 Hz, the initial speed is 0, and no artifact appears in the EOG waveform, so it can be judged to be slow eye movement. For D6 and D7 in Fig. 9B-(a) it is easy to see that the signal frequency is not below 1 Hz and a small amount of artifact is present, so they are not slow eye movement; likewise the initial velocity of D8 is not 0 and the signal frequency of D9 is above 1 Hz, so neither is slow eye movement. In summary, when determining slow eye movement, the wavelet coefficients at the different levels must each be compared against the criteria, and only waveforms satisfying all the characteristics simultaneously are identified as slow eye movement.
Because the time-domain signals obtained after the inverse Fourier transform no longer interfere with one another, wavelet analysis is performed on the multi-channel EOG signals in the time domain and slow-eye-movement analysis is carried out separately on each level of the decomposed wavelet coefficients. With no interference from other source signals, the accuracy is high and the computational cost is small, so slow eye movements can be extracted quickly from the EOG signals.
It should be noted that Fig. 5 shows the time-frequency-domain waveforms of six adjacent frequency points of two independent sources, obtained after blind source separation of the two-channel EOG signals by the complex-valued ICA algorithm. The abscissa indicates the position of the sliding window and the ordinate the signal amplitude. As can be seen from the two waveform diagrams 5-(a) and 5-(b), the third and fifth channels (counting from top to bottom) exhibit the permutation-ambiguity problem.
Fig. 6 shows the waveforms of the EOG signal before and after separation of the multi-channel eye movement data by convolution ICA. The abscissa represents the sample point and the ordinate the signal amplitude. Comparing Figs. 6-(a) and 6-(b) shows that after separation by convolution ICA, the blink-artifact source signal is separated out.
As shown in Fig. 7, Figs. 7(a) and 7(b) show the saccadic EOG waveforms after separation by linear ICA and by convolution ICA, respectively, where the abscissa represents sample points and the ordinate the signal amplitude. Figs. 7(c) and 7(d) show the time- and frequency-domain waveforms of a captured segment of the second channel of the saccadic EOG signals in Figs. 7(a) and 7(b), respectively. In the time-domain plots the abscissa is the sampling point and the ordinate the signal amplitude; in the frequency-domain plots the abscissa is the frequency and the ordinate the signal amplitude. As the two figures make clear, the artifact is not removed "cleanly" by linear ICA — a blink signal still remains — and the frequency band of the saccadic EOG signal after linear ICA separation is wider than after convolution ICA separation. Therefore, in this embodiment the convolution ICA algorithm is preferred for the blind source separation of the eye movement data.
Fig. 8 shows the average recognition rates of slow eye movement signals for the different algorithms. The abscissa represents the subject index and the ordinate the average recognition rate. It can be seen that the average recognition rate obtained by the convolution ICA method is 97.254%, an improvement of 4.854, 7.168 and 2.64 percentage points over the band-pass filtering, wavelet de-noising and linear ICA methods, respectively.
As shown in Fig. 9, Figs. 9A-(a) and 9A-(b) are the time-domain EOG waveforms after blind separation by convolution ICA and by linear ICA, respectively. Wavelet decomposition was performed on each channel, and the slow eye movement (indicated by an arrow) of the two waveforms in the fourth channel was obtained; the results are shown in Figs. 9B-(a) and 9B-(b). As both figures in Fig. 9B show, slow eye movement appears when decomposing to the tenth level. To compare with the separation results of linear ICA, at the point where the waveform after convolutive separation shows slow eye movement, wavelet decomposition and analysis were also performed on each channel waveform after linear ICA separation; the experimental results are shown in Figs. 9C and 9D. As these four graphs show, no slow eye movement appears in the waveforms after linear ICA separation.
In addition, as shown in fig. 10, this embodiment further discloses a slow eye movement recognition system based on a convolution mixture model, including: the system comprises a blind source separation module 10, a scale compensation module 20, a sorting module 30, a recovery module 40, a wavelet decomposition module 50 and a slow eye movement identification module 60 which are connected in sequence;
the blind source separation module 10 is configured to perform blind source separation on the eye movement data of each frequency point in a frequency domain by using a complex value ICA algorithm to obtain frequency domain independent components of each independent source signal at the corresponding frequency point, and transmit the frequency domain independent components to the scale compensation module 20;
the scale compensation module 20 is configured to perform scale compensation on the independent components at each frequency point, restore real proportional components of the independent components in the observed components, and transmit the compensated independent components to the sorting module 30;
the sorting module 30 is configured to sort the compensated independent components by using a constrained DOA algorithm, so that the independent sources at each frequency point are arranged from small to large according to a direction angle;
the recovery module 40 is configured to perform short-time inverse fourier transform processing on the independent components of the frequency points after the scale compensation and the sequencing to obtain a complete time signal of the multi-channel independent source in the time domain, and transmit the complete time signal of the multi-channel independent source to the wavelet decomposition module 50;
the wavelet decomposition module 50 is configured to perform wavelet decomposition on a complete time signal of the multi-channel independent source in the time domain to obtain wavelet coefficients of each level, and transmit a decomposition result to the slow eye movement identification module 60;
the slow eye movement identification module 60 is configured to compare and analyze the wavelet coefficients at each level against the criteria for slow eye movement, and to identify as slow eye movement those wavelet coefficients that are consistent with the characteristics of slow eye movement.
It should be noted that the slow eye movement recognition system based on the convolution mixture model disclosed in this embodiment has the same or corresponding technical features and technical effects as the method disclosed in the above embodiment, and details are not repeated here.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (5)

1. A slow eye movement identification method based on a convolution mixed model is characterized by comprising the following steps:
s1, in the frequency domain, blind source separation is carried out on the eye movement data of each frequency point by adopting a complex value ICA algorithm, and frequency domain independent components of each independent source signal on the corresponding frequency point are obtained;
s2, carrying out scale compensation on the independent components on each frequency point, and restoring the real proportional components of the independent components in the observed components;
obtaining a mixed matrix of corresponding frequency points according to a separation matrix of each frequency point in a complex value ICA algorithm, wherein the separation matrix and the mixed matrix are inverse matrixes;
compensating the independent components of the frequency points by using the coefficients of the mixed matrix to obtain the compensated independent components of the frequency points;
s3, sorting and adjusting the compensated independent components by adopting a constrained DOA algorithm, so that the independent sources on each frequency point are arranged from small to large according to the direction angle;
a. initializing an angle for each independent source;
b. calculating different rows of each frequency point through a Root-Music algorithm to obtain the estimation of each source direction, wherein the rows of the separation matrix correspond to different independent sources;
c. setting the proximity measurement of the direction angle and the initialization angle of each independent source as epsilon (y, theta), and judging whether the angle and the initialization angle of each independent source are the same in the iteration process;
d. if the two are the same, executing the step e, and if the two are not the same, executing the step f;
e. setting ε(yj, θj) to 0, and setting the direction angle matrix T to calculate the adjustment matrix Q;
f. setting ε(yj, θj) to 1, and returning to the iterative process to recalculate the separation matrix W;
s4, carrying out short-time Fourier inverse transformation processing on the independent components of the frequency points after scale compensation and sequencing to obtain complete time signals of the multi-channel independent source on the time domain;
s5, performing wavelet decomposition on the multichannel independent source in the time domain to obtain wavelet coefficients of all levels;
and S6, comparing and analyzing the wavelet coefficients of each level with the judgment standard of the slow eye movement, and identifying the wavelet coefficients which are consistent with the characteristics of the slow eye movement as the slow eye movement.
2. The method of claim 1, wherein in step S5, the wavelet decomposition has a mother function of db4 and the wavelet decomposition has ten layers.
3. The method of claim 1, wherein the slow eye movement is characterized by: the frequency of the eye movement signal is lower than 1Hz, the initial movement speed of the eye movement signal is approximately 0, and no artifact signal appears in the EOG signal.
4. The method of claim 1, wherein before said step S1, further comprising:
acquiring multi-channel EOG data to obtain eye movement data in a time domain;
performing band-pass filtering and mean value removing processing on the eye movement data in the time domain to obtain processed eye movement data;
and performing short-time Fourier transform on the processed eye movement data, and transforming the processed eye movement data from a time domain to a frequency domain to obtain the eye movement data on the frequency domain.
5. A recognition system based on the slow eye movement recognition method based on the convolution mixture model of one of claims 1 to 4, characterized by comprising: the system comprises a blind source separation module, a scale compensation module, a sorting module, a recovery module, a wavelet decomposition module and a slow eye movement identification module which are connected in sequence;
the blind source separation module is used for performing blind source separation on the eye movement data of each frequency point on a frequency domain by adopting a complex value ICA algorithm to obtain frequency domain independent components of each independent source signal on the corresponding frequency point and transmitting the frequency domain independent components to the scale compensation module;
the scale compensation module is used for carrying out scale compensation on the independent components on each frequency point, restoring the real proportion components of the independent components in the observed components and transmitting the compensated independent components to the sequencing module;
the sorting module is used for sorting and adjusting the compensated independent components by adopting a constrained DOA algorithm, so that the independent sources on each frequency point are arranged from small to large according to the direction angle;
the recovery module is used for carrying out short-time inverse Fourier transform processing on the independent components of the frequency points after the scale compensation and the sequencing to obtain complete time signals of the multi-channel independent source on the time domain and transmitting the complete time signals of the multi-channel independent source to the wavelet decomposition module;
the wavelet decomposition module is used for performing wavelet decomposition on a complete time signal of a multi-channel independent source in a time domain to obtain wavelet coefficients of all levels and transmitting a decomposition result to the slow eye movement identification module;
the slow eye movement identification module is used for comparing and analyzing the wavelet coefficients of all levels with the judgment standard of the slow eye movement, and identifying the wavelet coefficients which are consistent with the characteristics of the slow eye movement as the slow eye movement.
CN201710695419.7A 2017-08-15 2017-08-15 Low-speed eye movement identification method and system based on convolution mixed model Active CN107450730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710695419.7A CN107450730B (en) 2017-08-15 2017-08-15 Low-speed eye movement identification method and system based on convolution mixed model


Publications (2)

Publication Number Publication Date
CN107450730A CN107450730A (en) 2017-12-08
CN107450730B true CN107450730B (en) 2020-02-21

Family

ID=60492006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710695419.7A Active CN107450730B (en) 2017-08-15 2017-08-15 Low-speed eye movement identification method and system based on convolution mixed model

Country Status (1)

Country Link
CN (1) CN107450730B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036467B (en) * 2020-08-27 2024-01-12 北京鹰瞳科技发展股份有限公司 Abnormal heart sound identification method and device based on multi-scale attention neural network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102125429A (en) * 2011-03-18 2011-07-20 上海交通大学 Alertness detection system based on electro-oculogram signal
CN106163391A (en) * 2014-01-27 2016-11-23 因泰利临床有限责任公司 System for multiphase sleep management, method for the operation thereof, device for sleep analysis, method for classifying a current sleep phase, and use of the system and the device in multiphase sleep management

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100292545A1 (en) * 2009-05-14 2010-11-18 Advanced Brain Monitoring, Inc. Interactive psychophysiological profiler method and system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"An Algorithm for Reading Activity Recognition"; Rui OuYang et al.; IEEE; 2015-12-04; full text *
"Research on a Blink Signal Removal Algorithm in EOG-based Reading Activity Recognition"; Zhang Beibei; Journal of Signal Processing; 2017-02; Vol. 33, No. 2, pp. 236-244 *
"Fatigue Detection from EOG Signals Based on Convolutional Neural Networks"; Zhu Xuemin; China Masters' Theses Full-text Database (Medicine and Health Sciences); 2016-07-15 (No. 7, 2016); p. E080-4, abstract, pp. 15-16 and 27 *

Also Published As

Publication number Publication date
CN107450730A (en) 2017-12-08

Similar Documents

Publication Publication Date Title
Chen et al. Removal of muscle artifacts from the EEG: A review and recommendations
Chen et al. Removal of muscle artifacts from single-channel EEG based on ensemble empirical mode decomposition and multiset canonical correlation analysis
Lakshmi et al. Survey on EEG signal processing methods
Hu et al. Adaptive multiscale entropy analysis of multivariate neural data
Barros et al. Extraction of event-related signals from multichannel bioelectrical measurements
CN110269609B (en) Method for separating ocular artifacts from electroencephalogram signals based on single channel
Enshaeifar et al. Quaternion singular spectrum analysis of electroencephalogram with application in sleep analysis
CN114533086B (en) Motor imagery brain electrolysis code method based on airspace characteristic time-frequency transformation
CN107348958B (en) Robust glance EOG signal identification method and system
Cheng et al. The optimal wavelet basis function selection in feature extraction of motor imagery electroencephalogram based on wavelet packet transformation
Haider et al. Performance enhancement in P300 ERP single trial by machine learning adaptive denoising mechanism
Jiang et al. Covariance and time-scale methods for blind separation of delayed sources
Elamvazuthi et al. Surface electromyography (sEMG) feature extraction based on Daubechies wavelets
Kaewwit et al. High accuracy EEG biometrics identification using ICA and AR model
Kaundanya et al. Performance of k-NN classifier for emotion detection using EEG signals
CN107450730B (en) Low-speed eye movement identification method and system based on convolution mixed model
Turnip et al. Utilization of EEG-SSVEP method and ANFIS classifier for controlling electronic wheelchair
Shoker et al. Removal of eye blinking artifacts from EEG incorporating a new constrained BSS algorithm
Hsu Wavelet-coherence features for motor imagery EEG analysis posterior to EOG noise elimination
Zhang et al. Multi-resolution dyadic wavelet denoising approach for extraction of visual evoked potentials in the brain
Mahmood et al. Frequency recognition of short-time SSVEP signal using CORRCA-based spatio-spectral feature fusion framework
Giraldo-Guzmán et al. Fetal ECG extraction using independent component analysis by Jade approach
Naik et al. SEMG for identifying hand gestures using ICA
Nawas et al. K-NN classification of brain dominance
Zhang et al. Robust EOG-based saccade recognition using multi-channel blind source deconvolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant