CN107450730A - Slow eye movement recognition method and system based on a convolutive mixture model - Google Patents

Slow eye movement recognition method and system based on a convolutive mixture model

Info

Publication number
CN107450730A
CN107450730A
Authority
CN
China
Prior art keywords
eye movement
frequency
signal
isolated component
source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710695419.7A
Other languages
Chinese (zh)
Other versions
CN107450730B (en)
Inventor
吕钊
张贝贝
张超
吴小培
张磊
高湘萍
郭晓静
卫兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN201710695419.7A priority Critical patent/CN107450730B/en
Publication of CN107450730A publication Critical patent/CN107450730A/en
Application granted granted Critical
Publication of CN107450730B publication Critical patent/CN107450730B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/22Source localisation; Inverse modelling

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Complex Calculations (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a slow eye movement recognition method and system based on a convolutive mixture model, belonging to the technical field of electrooculography (EOG). The method comprises: performing blind source separation on the eye movement data at each frequency bin with a complex-valued ICA algorithm, to obtain the frequency-domain independent component of each independent source signal at the corresponding bin; applying scale compensation to the independent components at each frequency bin, restoring their true proportions in the observed components; sorting and aligning the compensated independent components with a constrained DOA algorithm; applying a short-time inverse Fourier transform to the scale-compensated and sorted independent components, to obtain the complete time-domain signals of the multi-channel independent sources; and performing wavelet decomposition on these complete time-domain signals, comparing and analyzing the decomposition results against the slow-eye-movement criteria, and identifying the time signals that match the slow-eye-movement characteristics as slow eye movements. The invention performs wavelet analysis on the multi-channel EOG signals in the time domain and, free from interference by other source signals, can quickly extract slow eye movements from the EOG signal.

Description

Slow eye movement recognition method and system based on a convolutive mixture model
Technical field
The present invention relates to the technical field of electrooculography (EOG), and more particularly to a slow eye movement recognition method and system based on a convolutive mixture model.
Background art
The visual system is the most important channel through which humans acquire external information. Early in the history of experimental psychology, researchers began to notice the regularities of eye movement characteristics and their psychological significance, and using eye movement techniques to explore human information-processing mechanisms under different conditions has become a current research focus in psychology. Eye movement characteristics are movements of the eyeball that are closely tied to internal information-processing mechanisms; they are under both exogenous and endogenous control and, in most cases, are guided by the task or goal at hand.
Electrooculography (Electro-oculogram, EOG), as a low-cost eye movement signal measurement technique, is not only more accurate than traditional video-based methods, but its measuring equipment is also lightweight, convenient for long-term recording, and easier to realize as a wearable design. Using EOG to acquire eye movement signals therefore has broad application prospects.
According to the characteristics of eye movements, eye movement signals are mainly divided into three classes: fixation, saccade, and smooth pursuit. Fixation is frequently accompanied by three kinds of subtle miniature eye movements: spontaneous high-frequency ocular microtremor, slow eye movement, and microsaccade. Slow eye movement refers to the eye movements made during the transition from wakefulness to sleep. During EOG data acquisition, because subjects tire easily, slow eye movements are present in the collected eye movement data.
Slow eye movements contain a large amount of useful information and are widely applied in fields such as traffic psychology and clinical medicine. For example, a driver's fatigue level can be detected from slow eye movements, and some complex clinical cases can be studied and treated with their help. In practical applications, however, slow eye movements are usually interleaved with other signals and are difficult to extract in isolation. At present, slow eye movement recognition is performed with linear regression methods, mainly by detecting the ratio between the number of slow eye movements and the number of visual detections. But this index does not clearly reflect the performance of the algorithm, the recognition results are unsatisfactory, and it is difficult to reach practical use.
Summary of the invention
It is an object of the present invention to provide a slow eye movement recognition method and system based on a convolutive mixture model, so as to improve the accuracy of slow eye movement recognition.
To achieve the above object, in a first aspect the present invention provides a slow eye movement recognition method based on a convolutive mixture model, comprising:
S1: in the frequency domain, performing blind source separation on the eye movement data at each frequency bin with a complex-valued ICA algorithm, to obtain the frequency-domain independent component of each independent source signal at the corresponding bin;
S2: applying scale compensation to the independent components at each frequency bin, restoring the true proportion of each independent component in the observed components;
S3: sorting and aligning the compensated independent components with a constrained DOA algorithm, so that the independent sources at each frequency bin are arranged in ascending order of direction angle;
S4: applying a short-time inverse Fourier transform to the scale-compensated and sorted independent components at each frequency bin, to obtain the complete time-domain signals of the multi-channel independent sources;
S5: performing wavelet decomposition on the multi-channel independent sources in the time domain, to obtain the wavelet coefficients at each level;
S6: comparing and analyzing the wavelet coefficients at each level against the slow-eye-movement criteria; whatever matches all the slow-eye-movement characteristics is identified as a slow eye movement.
Preferably, in step S5 the mother wavelet used for the wavelet decomposition is db4 and the number of decomposition levels is ten.
Preferably, the characteristics of a slow eye movement include: the eye movement signal frequency is below 1 Hz, the initial velocity of the eye movement signal is approximately 0, and no artifact signals appear in the EOG signal.
Preferably, step S2 specifically comprises:
inverting the separation matrix at each frequency bin obtained by the complex-valued ICA algorithm to obtain the mixing matrix at the corresponding bin, the separation matrix and the mixing matrix being inverses of each other;
compensating the independent component at each frequency bin with the coefficients of the mixing matrix, to obtain the compensated independent component at each bin.
Preferably, step S3 specifically comprises:
a. initializing an angle for each independent source;
b. processing the different rows of the separation matrix at each frequency bin with the Root-MUSIC algorithm, each row corresponding to a different independent source, to obtain an estimate of each source direction;
c. setting the closeness measure between the estimated direction angle of each independent source and the initialized angle to ε(y, θ), and, during the iteration, judging whether the angle of each independent source equals the initialized angle;
d. performing step e if they are equal, and step f otherwise;
e. setting ε(y_j, θ_j) to 0, and setting the direction-angle matrix T in order to compute the adjustment matrix Q;
f. setting ε(y_j, θ_j) to 1, returning to the iteration and recomputing the separation matrix W.
Preferably, before step S1 the method further comprises:
acquiring multi-channel EOG data to obtain the eye movement data in the time domain;
applying band-pass filtering and mean removal to the time-domain eye movement data, to obtain the processed eye movement data;
applying a short-time Fourier transform to the processed eye movement data, converting it from the time domain to the frequency domain, to obtain the eye movement data in the frequency domain.
In a second aspect, the present invention provides a slow eye movement recognition system based on a convolutive mixture model, comprising: a blind source separation module, a scale compensation module, a sorting module, a recovery module, a wavelet decomposition module and a slow eye movement identification module, connected in sequence;
the blind source separation module is used, in the frequency domain, to perform blind source separation on the eye movement data at each frequency bin with the complex-valued ICA algorithm, to obtain the frequency-domain independent component of each independent source signal at the corresponding bin, and to pass the frequency-domain independent components to the scale compensation module;
the scale compensation module is used to apply scale compensation to the independent components at each frequency bin, restoring the true proportion of each independent component in the observed components, and to pass the compensated independent components to the sorting module;
the sorting module is used to sort the compensated independent components with the constrained DOA algorithm so that the independent sources at each frequency bin are arranged in ascending order of direction angle;
the recovery module is used to apply a short-time inverse Fourier transform to the scale-compensated and sorted independent components at each frequency bin, to obtain the complete time-domain signals of the multi-channel independent sources, and to pass them to the wavelet decomposition module;
the wavelet decomposition module is used to perform wavelet decomposition on the complete time-domain signals of the multi-channel independent sources, to obtain the wavelet coefficients at each level, and to pass the decomposition results to the slow eye movement identification module;
the slow eye movement identification module is used to compare and analyze the wavelet coefficients at each level against the slow-eye-movement criteria; whatever matches all the slow-eye-movement characteristics is identified as a slow eye movement.
Compared with the prior art, the present invention has the following technical effects: the observed frequency-domain data are blind-source-separated with a complex-valued ICA algorithm; the inherent scale indeterminacy of ICA is resolved by restoring the true proportion of each independent component in the observed components, and the inherent permutation ambiguity of ICA is resolved with the constrained DOA algorithm; each independent source is thus separated out and converted back to time-domain data. Because the time-domain signals obtained after the inverse Fourier transform do not interfere with one another, wavelet analysis is performed on the multi-channel EOG signals in the time domain and slow eye movement analysis is carried out on the wavelet coefficients at each level of the decomposition. Free from interference by other source signals, the method is highly accurate and computationally light, and slow eye movements can be extracted quickly from the EOG signal.
Brief description of the drawings
The embodiments of the present invention are described in detail below with reference to the accompanying drawings:
Fig. 1 is a schematic diagram of the distribution of the electrodes on the subject's face during EOG signal acquisition in the present invention;
Fig. 2 is a schematic flowchart of a slow eye movement recognition method based on a convolutive mixture model in the present invention;
Fig. 3 is a flowchart of the robust saccade recognition algorithm for multi-channel EOG signals in the present invention;
Fig. 4 is a diagram of the basic principle of blind source separation (Blind Source Separation, BSS) in the present invention;
Fig. 5 shows the time-frequency-domain waveforms of six adjacent frequency bins in the present invention;
Fig. 6 shows the EOG waveforms before and after convolutive ICA separation in the present invention;
Fig. 7 compares the separation results of the linear ICA model and the convolutive ICA model in the present invention;
Fig. 8 shows the average recognition rates under different methods in the present invention;
Fig. 9 shows the slow eye movement experimental results in the present invention;
Fig. 10 is a schematic structural diagram of a slow eye movement recognition system based on a convolutive mixture model in the present invention.
The present invention is further described below by way of embodiments and with reference to the accompanying drawings.
Detailed description of the embodiments
In order to further illustrate the features of the present invention, please refer to the following detailed description and the accompanying drawings. The accompanying drawings are for reference and discussion only and are not intended to limit the protection scope of the present invention.
It should first be noted that, before the EOG signals are identified in the present invention, they are acquired as follows:
As shown in Fig. 1, the subject's EOG signals are acquired with Ag/AgCl electrodes. In order to obtain the subject's eye movement information in the four directions up, down, left and right, together with more spatial position information, nine electrodes are used in this acquisition. Electrodes V1 and V2 are placed 1.5 cm above and 1.5 cm below the left (or right) eyeball of the subject, to collect the vertical EOG signal; electrodes H1 and H2 are placed 1.5 cm to the left of the subject's left eye and 1.5 cm to the right of the right eye respectively, to collect the horizontal EOG signal; electrodes Fp1 and Fp2 are placed on the forehead to enhance the spatial information; reference electrodes C1 and C2 are placed on the left and right mastoids respectively, and the ground electrode D is located at the center of the top of the head.
During the experimental acquisition, the subject sits in front of the screen and faces it. A "prepare" prompt appears on the screen together with a beep, and within one second the subject sees a red arrow cue on the screen (an up, down, left or right arrow). The arrow remains on the screen for six seconds, during which the subject is required to rotate the eyes towards the direction indicated by the arrow, look at the observation point, and then rotate back to the center point; the subject must not blink during this process. Afterwards there is a two-second break, during which the subject may blink and relax.
As shown in Fig. 2 and Fig. 3, the present invention discloses a slow eye movement recognition method based on a convolutive mixture model, comprising the following steps S1 to S6:
S1: in the frequency domain, blind source separation is performed on the eye movement data at each frequency bin with a complex-valued ICA algorithm, to obtain the frequency-domain independent component of each independent source signal at the corresponding bin;
It should be noted that, in this embodiment, the acquired multi-channel time-domain EOG data are band-pass filtered and mean-removed, the cut-off frequencies of the band-pass filter being 0.01 Hz to 8 Hz. A sliding-window short-time Fourier transform (Short-Time Fourier Transform, STFT) with a window length of 256 and a hop of 128 is then applied to the processed eye movement data, converting the time-domain eye movement data into frequency-domain eye movement data.
By band-pass filtering and mean-removing the time-domain eye movement signals in this embodiment, interference from signals such as baseline drift, EMG, ECG and EEG is removed, which reduces the influence of different noise signals on the original multi-channel eye movement data and thereby improves the recognition accuracy.
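To make the preprocessing concrete, the following minimal sketch applies the 0.01-8 Hz band-pass filter, the mean removal and the 256/128 sliding-window STFT described above. The sampling rate, channel count, filter order and the SciPy/NumPy routines are illustrative assumptions and are not specified in the patent.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, stft

fs = 250.0                                  # assumed sampling rate
eog = np.random.randn(6, 30 * int(fs))      # placeholder for the multi-channel EOG recording

# Band-pass filter 0.01-8 Hz (second-order sections for numerical stability), then mean removal
sos = butter(4, [0.01, 8.0], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, eog, axis=-1)
filtered -= filtered.mean(axis=-1, keepdims=True)

# Sliding-window STFT with a window length of 256 and a hop of 128
f_bins, t_frames, X = stft(filtered, fs=fs, nperseg=256, noverlap=128)
# X has shape (channels, frequency bins, frames); the per-bin observation
# matrices X[:, k, :] are the inputs to the complex-valued ICA of step S1.
```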
As shown in Fig. 4, the process of performing blind source separation on the frequency-domain eye movement data is specifically as follows:
1) From the multi-channel observed data X_i (i = 1, 2, ..., N), the covariance matrix R_x of the observed data is computed as R_x = E{(X - m_x)(X - m_x)^T}, where X is the observed data, m_x is the mean of the observed data, (·)^T denotes transposition and E{·} denotes the expectation. After the covariance matrix R_x of the observed data is obtained, the observed data must be whitened to orthogonalize the mixing matrix; the whitening matrix V is computed as follows:
the covariance matrix R_x is decomposed as R_x = E D E^T, where E is the matrix formed by the orthonormalized eigenvectors of R_x and D = diag(λ_1, λ_2, ..., λ_N) is the diagonal matrix of the corresponding eigenvalues.
The resulting whitening matrix takes the form V = D^(-1/2) E^T.
2) Using the whitening matrix, the observed data are whitened via Z(t) = V X(t), the fourth-order cumulants of the whitened data are obtained, and the significant features, whose number does not exceed M, are selected via N = {λ, N_r | 1 ≤ r ≤ M}, where λ is the eigenvector, N_r denotes the observation data dimension, M is the number of sources and r is an integer not exceeding the number of sources;
3) The set N = {λ, N_r | 1 ≤ r ≤ M} is jointly diagonalized, the unitary diagonalizing matrix being U, and the mixing matrix A is computed as A = W × U;
4) Since the mixing matrix A and the separation matrix W are inverses of each other, the separation matrix is W = A^(-1); blind source separation of the observed data at each frequency bin can then be performed with the separation matrix W.
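As a concrete illustration of the whitening stage in steps 1) and 2), the sketch below computes the whitening matrix for the complex observations at one frequency bin; the fourth-order cumulant matrices and their joint diagonalization, which yield the unitary matrix U and hence the mixing matrix A and the separation matrix W, are not reproduced here. The function name and array shapes are assumptions for illustration; for complex frequency-domain data the conjugate transpose takes the place of the plain transpose.

```python
import numpy as np

def whiten_bin(Xk):
    """Whiten the complex observations Xk (channels x frames) at one frequency bin."""
    Xc = Xk - Xk.mean(axis=1, keepdims=True)            # subtract the mean m_x
    Rx = (Xc @ Xc.conj().T) / Xc.shape[1]               # covariance R_x = E{(X - m_x)(X - m_x)^H}
    eigvals, E = np.linalg.eigh(Rx)                     # R_x = E D E^H, D = diag(lambda_1, ..., lambda_N)
    V = np.diag(1.0 / np.sqrt(eigvals)) @ E.conj().T    # whitening matrix V = D^(-1/2) E^H
    Z = V @ Xc                                          # whitened data Z = V X
    return Z, V
```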
S2: scale compensation is applied to the independent components at each frequency bin, restoring the true proportion of each independent component in the observed components;
Specifically, the scale compensation of the independent components proceeds as follows:
the separation matrix at each frequency bin obtained by the complex-valued ICA algorithm is inverted to obtain the mixing matrix at the corresponding bin, the separation matrix and the mixing matrix being inverses of each other;
the independent component at each frequency bin is compensated with the coefficients of the mixing matrix, giving the scale-compensated independent component at each bin.
Specifically, taking the two-dimensional ICA problem as an example, let the observed signals be x_1, x_2 and the source signals s_1, s_2; the observed signals can then be expressed as:
x_1 = a_11 s_1 + a_12 s_2 = v_11 + v_12,
x_2 = a_21 s_1 + a_22 s_2 = v_21 + v_22.
Here v_ij = a_ij s_j denotes the true contribution of independent source s_j in observed signal x_i, i.e. the projection of s_j onto x_i. Since v_11 and v_21 both come from independent source s_1, they differ from s_1 only in amplitude, and the same relation holds between v_12, v_22 and independent source s_2. Therefore, if W(f_k) is the separation matrix estimated at a given frequency bin, the mixing matrix at that bin is A(f_k) = W^(-1)(f_k). The independent components at each frequency bin can then be compensated with the coefficients of this mixing matrix, i.e.:
v_ij(f_k, τ) = A_ij(f_k) Y_j(f_k, τ),
where Y_j(f_k, τ) denotes the separated independent component of the j-th channel before scale compensation and v_ij(f_k, τ) denotes, after scale compensation, the true contribution of the j-th independent component in the i-th observed signal. According to the above analysis, after the independent components at a given frequency bin f_k are scale-compensated with the above formula, one frequency-domain independent component yields N compensated outputs; these N outputs then undergo subsequent processing such as permutation-ambiguity elimination, combination across frequency bins and inverse transformation, giving N purified signals from the same independent source.
In practical applications, one output can be selected from the N purified signals coming from the same independent source, or the N signals from the same source can be averaged before output.
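The scale compensation itself is a simple projection of each separated component back onto every observation channel. A minimal sketch, assuming the separation matrix W_k and the separated components Y_k (sources x frames) at one frequency bin are available; names and shapes are illustrative:

```python
import numpy as np

def scale_compensate(W_k, Y_k):
    """Compute v_ij(f_k, tau) = A_ij(f_k) * Y_j(f_k, tau) for all channel/source pairs."""
    A_k = np.linalg.inv(W_k)                     # mixing matrix A(f_k) = W(f_k)^(-1)
    n_obs, n_src = A_k.shape
    v = np.empty((n_obs, n_src, Y_k.shape[1]), dtype=complex)
    for i in range(n_obs):
        for j in range(n_src):
            v[i, j, :] = A_k[i, j] * Y_k[j, :]   # true contribution of source j in channel i
    return v
```

As noted above, one compensated output per source can then be selected, or the N outputs originating from the same source can be averaged.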
S3: the compensated independent components are processed with the constrained DOA algorithm so that the independent sources at each frequency bin are arranged in ascending order of direction angle;
Specifically, the compensated independent components are sorted as follows:
a. an angle is initialized for each independent source;
b. the different rows of the separation matrix at each frequency bin are processed with the Root-MUSIC algorithm, each row corresponding to a different independent source, which yields an estimate of each source direction;
c. the closeness measure between the estimated direction angle of each independent source and the initialized angle is set to ε(y, θ), and during the iteration it is judged whether the angle of each independent source equals the initialized angle;
d. if they are equal, step e is performed; otherwise step f is performed;
e. ε(y_j, θ_j) is set to 0, and the direction-angle matrix T is set in order to compute the adjustment matrix Q;
f. ε(y_j, θ_j) is set to 1, the iteration is resumed, and the separation matrix W is recomputed.
It should be noted that an angle θ_j is initialized for each independent source s_j. Because the individual source positions are uncertain, the angle of the i-th independent source is here set to be smaller than that of the (i+1)-th source, and the initialized angles r(θ) are used as a constraint. On the premise that the separation at each frequency bin f succeeds, the rows of the separation matrix W correspond to different independent sources, and processing the different rows at each bin with the Root-MUSIC algorithm yields an estimate of each source direction.
Here, in order to effectively test the ability of the constrained DOA algorithm to detect and correct ordering errors, the closeness measure between the angle obtained at each frequency bin and the initialized angle is set to ε(y_j, θ_j), where y_j is the estimate of each source direction. The two angles are compared during the iteration; if they are not the same, i.e. ε(y_j, θ_j) = 1, the iteration is resumed and the separation matrix W is recomputed.
If the two angles are the same, i.e. ε(y_j, θ_j) = 0, a direction-angle matrix T is set in order to compute the adjustment matrix Q.
The independent sources at each frequency bin f are to be arranged in ascending order of angle, and the direction-angle matrix T is set accordingly.
In the direction-angle matrix T, the diagonal entries reflect the ordering of the angles. After amplitude compensation of the blind-source-separated signals, the estimate y of the source signals S is obtained:
y = PΛS = PV,
where P is a permutation matrix, Λ a diagonal matrix, and S the source signals. For ease of analysis, X = AS is substituted into y = WX, giving y = WAS = DS, where W is the separation matrix, X the observed data, A the mixing matrix and S the source signals. From the indeterminacy of ICA it follows that each row and each column of the matrix D must contain exactly one nonzero element, so D can be written as D = PΛ, where P is a permutation matrix and Λ a diagonal matrix; P and Λ introduce the ordering and amplitude indeterminacies of the ICA output respectively. This embodiment therefore sets an adjustment matrix Q with which the matrix P is adjusted, so as to resolve the ordering indeterminacy of ICA.
Further, the adjustment matrix Q is computed from the direction-angle matrix T as
Q = T P^(-1).
If the permutation matrix P and the direction-angle matrix T are the same, the independent sources at each frequency bin are already arranged in ascending order of angle and no further adjustment is needed;
if the permutation matrix P differs from the direction-angle matrix T, P is left-multiplied by the adjustment matrix Q to obtain a new permutation matrix P';
the new permutation matrix is obtained as P' = QP = TP^(-1)P = T, and the direction angles of the independent sources at each frequency bin are then recovered with y = P'ΛS = P'V. The independent sources at each bin f are now arranged in ascending order of direction angle, which resolves the permutation ambiguity of ICA.
It should be noted that the permutation ambiguity of the ICA output is an inherent limitation of ICA algorithms. In time-frequency-domain blind deconvolution, blind separation is performed at many frequency bins over different windows; if the per-bin ICA separation results are not matched, i.e. the frequency-domain independent components belonging to the same source are not grouped together, sub-band signals from different sources get spliced together by mistake. This has a large impact on the final separation quality, scrambles the signals recovered in the time domain, and in turn affects the EOG recognition result. By sorting the amplitude-compensated independent components at each frequency bin with the constrained DOA algorithm so that the independent sources at every bin f are arranged in ascending order of direction angle, the permutation ambiguity at each bin is effectively resolved, the quality of the blind source separation is improved, and the recognition rate is raised.
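The core of the permutation correction is that, once a direction estimate is available for every separated component, the components at every bin are reordered into ascending angle. The sketch below shows only that reordering step for one frequency bin; the Root-MUSIC direction estimation and the constrained iteration with ε(y_j, θ_j) described above are abstracted into a given angle vector, so this is an illustration rather than the full algorithm.

```python
import numpy as np

def align_bin_by_doa(W_k, Y_k, doa_k):
    """Reorder the separated components at one frequency bin into ascending DOA order."""
    order = np.argsort(doa_k)            # permutation that sorts the angles from small to large
    P = np.eye(len(doa_k))[order]        # corresponding permutation matrix
    return P @ W_k, Y_k[order, :], P
```

Applying the same ascending-angle convention at every bin groups the frequency-domain components that belong to the same source before the inverse transform.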
S4: a short-time inverse Fourier transform is applied to the scale-compensated and sorted independent components at each frequency bin, giving the complete time-domain signals of the multi-channel independent sources;
It should be noted that the short-time inverse Fourier transform is carried out only after it is ensured that the components corresponding to different sources are correctly ordered at each frequency bin and their amplitudes have been restored; the resulting time-domain signals are then segmented and recombined to obtain the estimate of the source signals.
The short-time inverse Fourier transform proceeds as follows:
during the computation, the inverse transform is applied column by column to the obtained time-frequency matrix, giving the time signals at the different window positions; these time signals are then spliced in order of increasing window position to obtain the complete time signal of each source.
In the above computation, the time signals in adjacent windows overlap; the overlap length was fixed when the original observed signals were windowed and framed, and equals half the frame length. The overlapping data in adjacent windows are usually handled by adding the second half of the previous window to the first half of the next window and dividing by two, i.e. averaging.
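A minimal sketch of this column-wise inverse transform and overlap-averaging splice, assuming a one-sided spectrum per frame, a frame length of 256 and a hop of 128 as in the STFT settings; the window function of the actual implementation is not reproduced here.

```python
import numpy as np

def overlap_average_reconstruct(S_tf, frame_len=256, hop=128):
    """S_tf: (frame_len // 2 + 1, frames) one-sided spectrum of one separated source."""
    n_frames = S_tf.shape[1]
    out = np.zeros(hop * (n_frames - 1) + frame_len)
    weight = np.zeros_like(out)
    for m in range(n_frames):                              # column-wise inverse transform
        frame = np.fft.irfft(S_tf[:, m], n=frame_len)      # time signal of one window
        out[m * hop : m * hop + frame_len] += frame
        weight[m * hop : m * hop + frame_len] += 1.0
    return out / np.maximum(weight, 1.0)                   # average the overlapping halves
```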
S5: wavelet decomposition is applied to the complete time-domain signals of the multi-channel independent sources, giving the wavelet coefficients at each level;
Specifically, the wavelet decomposition formula is:
[c, l] = wavedec(Y, N, wname),
where c is the wavelet decomposition vector, l records the lengths of the coefficients at each level from high to low, Y is the variable to be decomposed, N is the number of decomposition levels and wname is the mother wavelet. In this embodiment the multi-channel EOG data are decomposed into ten levels with the db4 mother wavelet.
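A short sketch of the same decomposition using PyWavelets as an assumed substitute for MATLAB's wavedec; coeffs[0] is the level-10 approximation and coeffs[1] the level-10 detail coefficients (D10) examined below.

```python
import pywt

def decompose_source(y, wavelet="db4", level=10):
    """Ten-level db4 wavelet decomposition; returns [cA10, cD10, cD9, ..., cD1]."""
    return pywt.wavedec(y, wavelet, level=level)
```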
S6: the wavelet coefficients at each level are compared and analyzed against the slow-eye-movement criteria; whatever matches all the slow-eye-movement characteristics is identified as a slow eye movement.
Specifically, the characteristics of a slow eye movement are: (1) a slow sinusoidal deflection lasting more than one second appears in the signal, i.e. the signal frequency is below 1 Hz; (2) the initial velocity of the signal is close to zero; in this embodiment an initial velocity below 0.000001 is considered close to zero; (3) no artifact signals such as blinks, EEG or EMG appear in the EOG waveform. When an eye movement signal satisfies all three conditions at the same time, a slow eye movement is considered to be present in it. The comparison and judgment process of this embodiment is explained with reference to Fig. 9; Fig. 9B-(a) is a segment of slow eye movement extracted from the time-domain signal:
The first row of Fig. 9B-(a) is a segment of waveform cut from the EOG signal returned to the time domain after frequency-domain blind separation; rows two to six below it are the wavelet coefficients obtained after the wavelet decomposition, from the level-6 wavelet downwards through level 7 and so on up to the level-10 wavelet (marked in the figure). It can be seen that when the decomposition of Fig. 9B-(a) reaches the level-10 wavelet coefficients, the signal frequency is already below 1 Hz, the initial velocity is 0, and no artifact signal appears in the EOG waveform, so it can be judged to be a slow eye movement. For D6 and D7 in Fig. 9B-(a) it is easy to see that the signal frequency is not below 1 Hz and a small amount of artifact signal appears, so they are not slow eye movements; likewise, the initial velocity of D8 is not 0 and the signal frequency of D9 is above 1 Hz, so they are not slow eye movements either. In summary, when judging slow eye movements, the wavelet coefficients at each level need to be compared against the slow-eye-movement criteria, and only a waveform that satisfies all the characteristics at the same time is identified as a slow eye movement.
Because the time-domain signals obtained after the inverse Fourier transform do not interfere with one another, wavelet analysis is performed on the multi-channel EOG signals in the time domain and slow eye movement analysis is carried out on the wavelet coefficients at each level of the decomposition. Free from interference by other source signals, the method is highly accurate and computationally light, and slow eye movements can be extracted quickly from the EOG signal.
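A rough illustration of how the three criteria above could be checked programmatically is given below. The patent does not specify how the dominant frequency, the initial velocity or the artifact test are computed, so the FFT-peak frequency, the first-difference velocity estimate and the amplitude-threshold artifact test used here are assumptions.

```python
import numpy as np

def looks_like_sem(segment, fs, velocity_eps=1e-6, artifact_amp=None):
    """Apply the three slow-eye-movement criteria to one decomposed waveform segment."""
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(segment - segment.mean()))
    dominant_freq = freqs[np.argmax(spectrum)]               # criterion (1): frequency below 1 Hz
    initial_velocity = abs(segment[1] - segment[0]) * fs     # criterion (2): initial velocity near zero
    artifact_free = True                                     # criterion (3): placeholder artifact test
    if artifact_amp is not None:
        artifact_free = float(np.max(np.abs(segment))) < artifact_amp
    return dominant_freq < 1.0 and initial_velocity < velocity_eps and artifact_free
```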
It should be noted that, as shown in Fig. 5, the figure presents the time-frequency-domain waveforms of six adjacent frequency bins of the two independent sources obtained after complex-valued ICA blind source separation of a two-channel EOG signal. The abscissa is the position of the sliding window and the ordinate is the signal amplitude. From the two amplitude waveforms in Figs. 5-(a) and 5-(b) it can be seen that, in top-to-bottom order, the third and fifth channels suffer from the permutation ambiguity.
Fig. 6 shows the EOG waveforms obtained before and after convolutive ICA separation of the multi-channel eye movement data. The abscissa is the sample index and the ordinate is the signal amplitude. Comparing Figs. 6-(a) and 6-(b) shows that after convolutive ICA separation the blink artifact source signal has been separated out.
As shown in Fig. 7, Figs. 7(a) and 7(b) show the saccade EOG waveforms after linear ICA and convolutive ICA separation respectively, with the sample index on the abscissa and the signal amplitude on the ordinate. Figs. 7(c) and 7(d) show the time-domain and frequency-domain waveforms of a segment of saccade EOG signal cut from the second channel of Figs. 7(a) and 7(b) respectively; in the time-domain waveforms the abscissa is the sample index and the ordinate the signal amplitude, and in the frequency-domain waveforms the abscissa is the frequency and the ordinate the signal amplitude. It can be clearly seen from the two figures that after linear ICA separation the artifact signal is not separated "cleanly" and blink signals remain, and that the saccade EOG signal after linear ICA separation has a wider bandwidth than the one after convolutive ICA separation. Therefore, in this embodiment the convolutive ICA algorithm is preferably used to perform blind source separation on the eye movement data.
Fig. 8 shows the average recognition rates of the slow eye movement signals for the different algorithms; the abscissa is the subject index and the ordinate the average recognition rate. It can be seen that the average recognition rate obtained with the convolutive ICA method is 97.254%, which is 4.854%, 7.168% and 2.64% higher than that of the band-pass filtering method, the wavelet-denoising method and the linear ICA method respectively.
As shown in Fig. 9, Figs. 9A(a) and 9A(b) are the time-domain EOG waveforms after convolutive ICA and linear ICA blind separation respectively. Wavelet decomposition is carried out on each channel, and two waveform segments in the fourth channel are found to contain slow eye movements (indicated by the arrows); the results are shown in Figs. 9B(a) and 9B(b). From the two panels of Fig. 9B it can be seen that slow eye movements appear at the tenth decomposition level. In order to compare with the linear ICA separation result, wavelet decomposition and analysis are applied to each channel waveform after linear ICA separation at the positions where slow eye movements appear after convolutive separation; the experimental results are shown in Figs. 9C and 9D. From these four figures it can be seen that no slow eye movements appear in the waveforms after linear ICA separation.
In addition, as shown in Fig. 10, this embodiment also discloses a slow eye movement recognition system based on a convolutive mixture model, comprising: a blind source separation module 10, a scale compensation module 20, a sorting module 30, a recovery module 40, a wavelet decomposition module 50 and a slow eye movement identification module 60, connected in sequence;
the blind source separation module 10 is used, in the frequency domain, to perform blind source separation on the eye movement data at each frequency bin with the complex-valued ICA algorithm, to obtain the frequency-domain independent component of each independent source signal at the corresponding bin, and to pass the frequency-domain independent components to the scale compensation module 20;
the scale compensation module 20 is used to apply scale compensation to the independent components at each frequency bin, restoring the true proportion of each independent component in the observed components, and to pass the compensated independent components to the sorting module 30;
the sorting module 30 is used to sort the compensated independent components with the constrained DOA algorithm so that the independent sources at each frequency bin are arranged in ascending order of direction angle;
the recovery module 40 is used to apply a short-time inverse Fourier transform to the scale-compensated and sorted independent components at each frequency bin, to obtain the complete time-domain signals of the multi-channel independent sources, and to pass them to the wavelet decomposition module 50;
the wavelet decomposition module 50 is used to perform wavelet decomposition on the complete time-domain signals of the multi-channel independent sources, to obtain the wavelet coefficients at each level, and to pass the decomposition results to the slow eye movement identification module 60;
the slow eye movement identification module 60 is used to compare and analyze the wavelet coefficients at each level against the slow-eye-movement criteria, and to identify whatever matches all the slow-eye-movement characteristics as a slow eye movement.
It should be noted that the slow eye movement recognition system based on a convolutive mixture model disclosed in this embodiment has the same or corresponding technical features and technical effects as the method disclosed in the above embodiment, and is not described again here.
The foregoing are only preferred embodiments of the present invention and are not intended to limit the invention; any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (7)

  1. A slow eye movement recognition method based on a convolutive mixture model, characterized by comprising:
    S1: in the frequency domain, performing blind source separation on the eye movement data at each frequency bin with a complex-valued ICA algorithm, to obtain the frequency-domain independent component of each independent source signal at the corresponding frequency bin;
    S2: applying scale compensation to the independent components at each frequency bin, restoring the true proportion of each independent component in the observed components;
    S3: sorting the compensated independent components with a constrained DOA algorithm so that the independent sources at each frequency bin are arranged in ascending order of direction angle;
    S4: applying a short-time inverse Fourier transform to the scale-compensated and sorted independent components at each frequency bin, to obtain the complete time-domain signals of the multi-channel independent sources;
    S5: performing wavelet decomposition on the multi-channel independent sources in the time domain, to obtain the wavelet coefficients at each level;
    S6: comparing and analyzing the wavelet coefficients at each level against the slow-eye-movement criteria, and identifying whatever matches all the slow-eye-movement characteristics as a slow eye movement.
  2. The method according to claim 1, characterized in that in step S5 the mother wavelet used for the wavelet decomposition is db4 and the number of decomposition levels is ten.
  3. The method according to claim 1, characterized in that the characteristics of the slow eye movement include: the eye movement signal frequency is below 1 Hz, the initial velocity of the eye movement signal is approximately 0, and no artifact signals appear in the EOG signal.
  4. The method according to claim 1, characterized in that step S2 specifically comprises:
    inverting the separation matrix at each frequency bin obtained by the complex-valued ICA algorithm to obtain the mixing matrix at the corresponding frequency bin, the separation matrix and the mixing matrix being inverses of each other;
    compensating the independent component at each frequency bin with the coefficients of the mixing matrix, to obtain the compensated independent component at each frequency bin.
  5. The method according to claim 4, characterized in that step S3 specifically comprises:
    a. initializing an angle for each independent source;
    b. processing the different rows of the separation matrix at each frequency bin with the Root-MUSIC algorithm, each row corresponding to a different independent source, to obtain an estimate of each source direction;
    c. setting the closeness measure between the estimated direction angle of each independent source and the initialized angle to ε(y, θ), and, during the iteration, judging whether the angle of each independent source equals the initialized angle;
    d. performing step e if they are equal, and step f otherwise;
    e. setting ε(y_j, θ_j) to 0, and setting the direction-angle matrix T to compute the adjustment matrix Q;
    f. setting ε(y_j, θ_j) to 1, returning to the iteration and recomputing the separation matrix W.
  6. The method according to claim 5, characterized in that, before step S1, the method further comprises:
    acquiring multi-channel EOG data to obtain the eye movement data in the time domain;
    applying band-pass filtering and mean removal to the time-domain eye movement data, to obtain the processed eye movement data;
    applying a short-time Fourier transform to the processed eye movement data, converting it from the time domain to the frequency domain, to obtain the eye movement data in the frequency domain.
  7. A slow eye movement recognition system based on a convolutive mixture model, characterized by comprising: a blind source separation module, a scale compensation module, a sorting module, a recovery module, a wavelet decomposition module and a slow eye movement identification module, connected in sequence;
    the blind source separation module is used, in the frequency domain, to perform blind source separation on the eye movement data at each frequency bin with a complex-valued ICA algorithm, to obtain the frequency-domain independent component of each independent source signal at the corresponding frequency bin, and to pass the frequency-domain independent components to the scale compensation module;
    the scale compensation module is used to apply scale compensation to the independent components at each frequency bin, restoring the true proportion of each independent component in the observed components, and to pass the compensated independent components to the sorting module;
    the sorting module is used to sort the compensated independent components with a constrained DOA algorithm so that the independent sources at each frequency bin are arranged in ascending order of direction angle;
    the recovery module is used to apply a short-time inverse Fourier transform to the scale-compensated and sorted independent components at each frequency bin, to obtain the complete time-domain signals of the multi-channel independent sources, and to pass them to the wavelet decomposition module;
    the wavelet decomposition module is used to perform wavelet decomposition on the complete time-domain signals of the multi-channel independent sources, to obtain the wavelet coefficients at each level, and to pass the decomposition results to the slow eye movement identification module;
    the slow eye movement identification module is used to compare and analyze the wavelet coefficients at each level against the slow-eye-movement criteria, and to identify whatever matches all the slow-eye-movement characteristics as a slow eye movement.
CN201710695419.7A 2017-08-15 2017-08-15 Slow eye movement recognition method and system based on a convolutive mixture model Active CN107450730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710695419.7A CN107450730B (en) 2017-08-15 2017-08-15 Low-speed eye movement identification method and system based on convolution mixed model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710695419.7A CN107450730B (en) 2017-08-15 2017-08-15 Low-speed eye movement identification method and system based on convolution mixed model

Publications (2)

Publication Number Publication Date
CN107450730A true CN107450730A (en) 2017-12-08
CN107450730B CN107450730B (en) 2020-02-21

Family

ID=60492006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710695419.7A Active CN107450730B (en) 2017-08-15 2017-08-15 Low-speed eye movement identification method and system based on convolution mixed model

Country Status (1)

Country Link
CN (1) CN107450730B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036467A (en) * 2020-08-27 2020-12-04 循音智能科技(上海)有限公司 Abnormal heart sound identification method and device based on multi-scale attention neural network
CN118262403A (en) * 2024-03-27 2024-06-28 北京极溯光学科技有限公司 Eye movement data processing method, device, equipment and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100292545A1 (en) * 2009-05-14 2010-11-18 Advanced Brain Monitoring, Inc. Interactive psychophysiological profiler method and system
CN102125429A (en) * 2011-03-18 2011-07-20 上海交通大学 Alertness detection system based on electro-oculogram signal
CN106163391A (en) * 2014-01-27 2016-11-23 因泰利临床有限责任公司 System for multiphase sleep management, method for the operation thereof, device for sleep analysis, method for classifying a current sleep phase, and use of the system and the device in multiphase sleep management

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100292545A1 (en) * 2009-05-14 2010-11-18 Advanced Brain Monitoring, Inc. Interactive psychophysiological profiler method and system
CN102125429A (en) * 2011-03-18 2011-07-20 上海交通大学 Alertness detection system based on electro-oculogram signal
CN106163391A (en) * 2014-01-27 2016-11-23 因泰利临床有限责任公司 System for multiphase sleep management, method for the operation thereof, device for sleep analysis, method for classifying a current sleep phase, and use of the system and the device in multiphase sleep management

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
RUI OUYANG et al.: "An Algorithm for Reading Activity Recognition", IEEE *
张贝贝 (ZHANG Beibei): "基于EOG的阅读行为识别中眨眼信号去除算法研究" [Research on a blink signal removal algorithm in EOG-based reading behavior recognition], 《信号处理》 [Journal of Signal Processing] *
朱学敏 (ZHU Xuemin): "基于卷积神经网络的眼电信号疲劳检测" [Fatigue detection from EOG signals based on convolutional neural networks], 《中国优秀硕士学位论文全文数据库(医药卫生科技辑)》 [China Masters' Theses Full-text Database, Medicine and Health Sciences] *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036467A (en) * 2020-08-27 2020-12-04 循音智能科技(上海)有限公司 Abnormal heart sound identification method and device based on multi-scale attention neural network
CN112036467B (en) * 2020-08-27 2024-01-12 北京鹰瞳科技发展股份有限公司 Abnormal heart sound identification method and device based on multi-scale attention neural network
CN118262403A (en) * 2024-03-27 2024-06-28 北京极溯光学科技有限公司 Eye movement data processing method, device, equipment and readable storage medium

Also Published As

Publication number Publication date
CN107450730B (en) 2020-02-21

Similar Documents

Publication Publication Date Title
Conte et al. Hermite expansions of compact support waveforms: applications to myoelectric signals
CN110269609B (en) Method for separating ocular artifacts from electroencephalogram signals based on single channel
CN107260166A (en) A kind of electric artefact elimination method of practical online brain
CN109255309B (en) Electroencephalogram and eye movement fusion method and device for remote sensing image target detection
CN112244878B (en) Method for identifying key frequency band image sequence by using parallel multi-module CNN and LSTM
CN105054928A (en) Emotion display equipment based on BCI (brain-computer interface) device electroencephalogram acquisition and analysis
Miller et al. Higher dimensional analysis shows reduced dynamism of time-varying network connectivity in schizophrenia patients
Metsomaa et al. Blind source separation of event-related EEG/MEG
CN107348958A (en) Robust glance EOG signal identification method and system
CN107450730A (en) Low-speed eye movement identification method and system based on convolution mixed model
CN108338787A (en) A kind of phase property extracting method of multi-period multi-component multi-dimension locking phase value
CN107480635A (en) Glance signal identification method and system based on bimodal classification model fusion
Sugumar et al. Joint blind source separation algorithms in the separation of non-invasive maternal and fetal ECG
CN111466909A (en) Target detection method and system based on electroencephalogram characteristics
Maki et al. Graph regularized tensor factorization for single-trial EEG analysis
Giraldo-Guzmán et al. Fetal ECG extraction using independent component analysis by Jade approach
Zhang et al. Fetal ECG subspace estimation based on cyclostationarity
Mustafa et al. Glcm texture classification for eeg spectrogram image
Lan et al. A comparison of different dimensionality reduction and feature selection methods for single trial ERP detection
Naik et al. SEMG for identifying hand gestures using ICA
Zhang et al. Robust EOG-based saccade recognition using multi-channel blind source deconvolution
Massar et al. DWT-BSS: Blind Source Separation applied to EEG signals by extracting wavelet transform’s approximation coefficients
Munkanpalli et al. Design and development of EEG controlled mobile robots
Korczowski et al. Mining the bilinear structure of data with approximate joint diagonalization
CN104188649B (en) Ensure that linearly synthesizes a kind of method of real-time in multiple spot physiology pyroelectric monitor

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant