CN101114336A - Artificial visible sensation image processing process based on wavelet transforming - Google Patents

Artificial visible sensation image processing process based on wavelet transforming

Info

Publication number
CN101114336A
CN101114336A CNA2007100448982A CN200710044898A
Authority
CN
China
Prior art keywords
wavelet
coefficient
wavelet transform
sampling
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2007100448982A
Other languages
Chinese (zh)
Inventor
朱贻盛
郭虹
邱意弘
童善保
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CNA2007100448982A priority Critical patent/CN101114336A/en
Publication of CN101114336A publication Critical patent/CN101114336A/en
Pending legal-status Critical Current

Abstract

An artificial vision image processing method based on wavelet transform for a vision substitution system. A two-dimensional discrete wavelet transform is applied to the original gray-scale natural image; the transformed wavelet coefficients are thresholded and sampled, and 4 output channels are chosen by comparing the energies of the high-frequency coefficients; the channel outputs drive a pulse generator that produces pulse waves to stimulate optic nerve cells through four groups of electrode arrays on the optic nerve, forming visual illusions in the human brain. By applying the two-dimensional discrete wavelet transform, the invention finally restores a 32 × 32 pixel input image in the human brain; the wavelet transform, with O(N) computational complexity, decomposes the image information by blocks and frequency bands. The method is easily embedded into any digital signal processing system and has good real-time performance and implementability.

Description

Artificial visual image processing method based on wavelet transformation
Technical Field
The invention relates to a method in the technical field of image processing, in particular to an artificial visual image processing method based on wavelet transformation.
Background
Both cochlear implant technology and visual prosthesis technology belong to the category of artificial perception. Since the physiological mechanisms of vision and hearing are very similar, the two technologies resemble each other: an external sound signal collected by a microphone, or an external image signal collected by a camera, is processed and encoded, and the coded result is applied to the auditory nerve or to the optic nerve/retina by an electrode array in the form of current pulses, producing nerve impulses that are transmitted to the brain and generate auditory or visual percepts. Cochlear implant technology, which began in the 1950s, has become the only clinically effective treatment for restoring hearing to deaf patients.
The wavelet technique was developed to overcome the limitations of the Fourier transform. In 1989, Mallat (S. Mallat) published "Multifrequency channel decompositions of images and wavelet models" in IEEE Transactions on Acoustics, Speech, and Signal Processing (1989, volume 37, issue 12, pages 2091 to 2110), presented the concept of Multi-Resolution Analysis, and gave an algorithm for the Fast Wavelet Transform (FWT), known as the Mallat algorithm. The Mallat algorithm greatly improved the computation speed of the wavelet transform, so that the wavelet transform became widely applied in the engineering field: the wavelet transform performs excellently in the sound-processing systems of cochlear implants, and the discrete wavelet transform has been widely adopted in digital image compression, processing and analysis. Moreover, the mechanisms of the wavelet transform and of the human visual processing system have certain similarities: locality, multiresolution and sparsity.
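As an illustration (a sketch of my own, not code from the patent), the Mallat cascade can be written in a few lines for the simplest wavelet, the Haar wavelet: each analysis level splits the current approximation into a half-length approximation and a half-length detail, so the total work is N + N/2 + N/4 + ... = O(N).

```python
# Haar analysis step: pairwise averages become the approximation band,
# pairwise half-differences the detail band; each step halves the data,
# so a full cascade costs N + N/2 + N/4 + ... < 2N operations, i.e. O(N).
def haar_step(signal):
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return approx, detail

def haar_fwt(signal, levels):
    """Mallat cascade: repeatedly re-transform the approximation band."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details

a, ds = haar_fwt([1, 3, 5, 7, 9, 11, 13, 15], 2)
# a == [4.0, 12.0]; ds[0] is the level-1 detail band, ds[1] the level-2 detail band
```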
In the prior art, ECKMILLER et al. published "Tunable retina encoders for retina implants: why and how" in the Journal of Neural Engineering (2005, volume 2, pages S91 to S104), which proposes a tunable retinal image-processing method. The specific method is: two modules, a retina module and a central visual system module, are applied; images are encoded by a spatio-temporal filter; and a moving circle is used as a sample to train the state parameters of the two modules. The method has the following defects: it stays at the level of image reconstruction and is not connected to the electrode stimulation of a visual prosthesis; the model has poor real-time performance; the training process is complex; the controllability in practical application is poor; and the method is not convenient to embed into a visual prosthesis application system.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, and provides an artificial vision image processing method based on wavelet transformation, which aims at a signal processing system in a retina or optic nerve vision prosthesis, refers to a sound signal processing method in an artificial cochlea, transforms an original image by adopting two-dimensional discrete wavelet transformation, and converts a transformed wavelet coefficient into the current intensity of a stimulation electrode in the vision prosthesis through sampling, selecting and mapping so as to stimulate a human vision system to generate visual illusion.
The invention is realized by the following technical scheme, and the invention comprises the following steps:
Step one, performing two-dimensional discrete wavelet transform and two-level wavelet analysis on an acquired image to obtain wavelet coefficients;
and (3) carrying out contrast enhancement on an original gray level image, then carrying out two-dimensional discrete wavelet transform by using a Marait fast wavelet transform algorithm, and carrying out secondary wavelet analysis to obtain a wavelet coefficient. The wavelet is biorthogonal wavelet, because the wavelet function is close to the receptive field model of ganglion cells. The neurons in the visual system of the present invention are viewed as simply independent individuals, and the response of each neuron is linear. The degree of response of each neuron is then the size of the wavelet coefficients produced by the wavelet transform.
Performing two-level wavelet analysis on an original gray image of 32 × 32 pixels yields seven wavelet coefficient matrices: one low-frequency approximation coefficient matrix cA2 (14 × 14) and six high-frequency coefficient matrices, the six high-frequency coefficient matrices including three second-level wavelet coefficient matrices (14 × 14): horizontal cD2(h), vertical cD2(v) and diagonal cD2(d); and three first-level wavelet coefficient matrices (20 × 20): horizontal cD1(h), vertical cD1(v) and diagonal cD1(d).
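The two-level decomposition can be sketched as follows. This is a hedged illustration using a plain Haar filter rather than the biorthogonal wavelet of the invention: Haar needs no border extension, so a 32 × 32 image gives 16 × 16 level-1 subbands and 8 × 8 level-2 subbands, whereas the biorthogonal filters with symmetric extension give the 20 × 20 and 14 × 14 sizes quoted above.

```python
# Two-level 2-D DWT by separable filtering: filter rows first, then columns,
# of each band. Haar (used here for brevity) shrinks 32x32 -> 16x16 -> 8x8.
def haar_rows(m):
    lo = [[(r[2 * i] + r[2 * i + 1]) / 2 for i in range(len(r) // 2)] for r in m]
    hi = [[(r[2 * i] - r[2 * i + 1]) / 2 for i in range(len(r) // 2)] for r in m]
    return lo, hi

def transpose(m):
    return [list(c) for c in zip(*m)]

def dwt2(m):
    """One 2-D analysis level: returns cA and the (cH, cV, cD) detail bands
    (orientation naming follows the usual convention, not fixed by the patent)."""
    lo, hi = haar_rows(m)
    ll, lh = haar_rows(transpose(lo))   # filter the columns of the low band
    hl, hh = haar_rows(transpose(hi))   # filter the columns of the high band
    return transpose(ll), (transpose(lh), transpose(hl), transpose(hh))

img = [[(i * 32 + j) % 256 for j in range(32)] for i in range(32)]
cA1, level1 = dwt2(img)   # one approximation + three 16x16 detail bands
cA2, level2 = dwt2(cA1)   # re-decompose cA1: 7 coefficient matrices in total
```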
Step two, carrying out hard thresholding treatment on the wavelet coefficient obtained in the step one;
For the wavelet coefficients obtained by the wavelet transform, coefficients distributed near zero are considered insufficient to cause a nerve-cell response and are set to 0; this is called hard thresholding, namely:

w' = w, if |w| ≥ δ; w' = 0, if |w| < δ

where δ is the selected threshold and w is the wavelet coefficient;
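The rule above is a one-line function; the coefficient list and the threshold δ = 10 below are illustrative values only.

```python
# Hard thresholding as defined above: coefficients with |w| < delta are zeroed.
def hard_threshold(w, delta):
    return w if abs(w) >= delta else 0

coeffs = [12.0, -3.5, 0.4, -15.2, 9.9]
kept = [hard_threshold(w, 10) for w in coeffs]
# kept == [12.0, 0, 0, -15.2, 0]
```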
the wavelet coefficient not only represents the similarity degree of the local part of the image and the wavelet base, but also reflects the activated degree of nerve cells of the local part. The invention will therefore process for wavelet coefficients rather than for image components reconstructed from a single wavelet coefficient.
Step three, sampling the coefficients after thresholding in the step two;
the sampling is to uniformly sample a 4 multiplied by 4 coefficient matrix as the output of a low-frequency component channel by adopting a global uniform sampling method for low-frequency approximate coefficients according to the distribution property of retinal ganglion cells and a human visual mechanism; for the first-level second-level wavelet coefficient and the second-level wavelet coefficient, a middle local uniform sampling method is adopted, wherein a first-level horizontal, vertical and diagonal coefficient matrix is 20 x 20 in size, 4 x 4 wavelet coefficients are uniformly sampled for an 8 x 8 matrix in the middle of the first-level horizontal, vertical and diagonal coefficient matrix, the second-level horizontal, vertical and diagonal coefficient matrix is 14 x 14 in size, and the middle 4 x 4 matrix is directly sampled. Only the middle matrix subblock of the high-frequency matrix is sampled and the surrounding high-frequency information is ignored, so that the method is based on the environment perception of human eyes and is a middle fine and coarse peripheral coding mode.
Step four, comparing and selecting the energy of the sampled high-frequency coefficients in the step three, and selecting 4 channels for output;
the energy comparison selection refers to that sampling results of the low-frequency approximate coefficient matrix are used as the output of the first channel, wavelet energy is calculated for the remaining 6 high-frequency coefficient matrices, and the three sampling coefficient matrices with the maximum energy are selected as the output of the remaining three channels.
To select the three sampling coefficient matrices with the largest energy, the wavelet energy of each sampled high-frequency component is first calculated. The wavelet energy E is calculated as:
E = Σi Σj w(i, j)²
where w (i, j) is the wavelet coefficient for the sample point.
Because the number of electrode arrays is limited, the output of each group of electrodes should contain as much of the image's pattern information as possible; therefore the three sampling coefficient matrices with the highest energy are selected as the outputs of the three high-frequency channels.
Through this step, a 4-channel output carrying the image information is obtained; the 4 channels are matrix outputs containing different scales and different positions of the original image, and the output matrix of each channel is 4 × 4.
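Step four can be sketched as follows; the toy 4 × 4 blocks are illustrative, not data from the patent.

```python
# Wavelet energy of a sampled block, and the channel-selection rule: channel 1
# is the low-frequency sample, channels 2-4 the three highest-energy
# high-frequency samples.
def wavelet_energy(block):
    """E = sum over i, j of w(i, j) squared."""
    return sum(w * w for row in block for w in row)

def select_channels(low_block, high_blocks):
    ranked = sorted(high_blocks, key=wavelet_energy, reverse=True)
    return [low_block] + ranked[:3]

low = [[1] * 4 for _ in range(4)]
highs = [[[k] * 4 for _ in range(4)] for k in range(6)]  # energies 0, 16, 64, ...
channels = select_channels(low, highs)
# channels[1:] hold the k = 5, 4, 3 blocks (the three largest energies)
```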
And step five, adopting the 4-channel output in the step four to drive a pulse generator to generate pulse waves, stimulating optic nerve cells through an electrode array, and forming visual hallucinations in the human brain.
The coefficient matrix of each of the 4 output channels obtained above drives a corresponding group of pulse generators in the electrode array; the pulse waves stimulate the corresponding nerve cells through the electrode array implanted on the human retina or optic nerve, and the electrically stimulated nerve cells respond, so that visual illusions form in the human brain and the image is restored.
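The patent specifies no transfer function from coefficient value to stimulation current, so the linear scaling, the 100 µA ceiling and the 4-sample phase length below are purely my assumptions, shown only to make the channel-to-electrode step concrete.

```python
# Hypothetical mapping from a sampled coefficient to a biphasic stimulation
# pulse; the patent only states that coefficients drive pulse generators with
# varying intensity, frequency and pulse width.
def coefficient_to_current(w, w_max, max_current_ua=100.0):
    """Scale |w| linearly into [0, max_current_ua] microamps, clamped."""
    if w_max == 0:
        return 0.0
    return min(abs(w) / w_max, 1.0) * max_current_ua

def biphasic_pulse(amplitude_ua, phase_samples=4):
    """Charge-balanced square pulse: cathodic phase, then anodic phase."""
    return [-amplitude_ua] * phase_samples + [amplitude_ua] * phase_samples

amp = coefficient_to_current(7.5, w_max=15.0)  # -> 50.0 uA
pulse = biphasic_pulse(amp)                    # net charge sums to zero
```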
Compared with the prior art, the invention combines the characteristics of human visual perception with the experience of the cochlear implant, applies the two-dimensional discrete wavelet transform, and finally restores a 32 × 32 pixel input image in the human brain through the stimulation of 4 groups of electrodes on the retina or optic nerve, forming the signal processing system of a visual prosthesis. The wavelet transform decomposes the image information by blocks and frequency bands, and the components are output to the corresponding electrode arrays. The computational complexity of the wavelet transform is O(N), and the whole method is easily embedded into any digital signal processing system, so its real-time performance and implementability are good. Combined with image acquisition and electrode fabrication technology, the invention can realize an artificial visual prosthesis system.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic diagram of a coefficient matrix sampling rule in the present invention;
FIG. 3 is a schematic diagram of an original image and a four-channel sampling matrix output according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an image simulation output method according to the present invention;
fig. 5 is a schematic diagram of an original image and a simulated output image in an embodiment of the present invention.
Detailed Description
The following describes embodiments of the present invention in detail with reference to the accompanying drawings, which are implemented on the premise of the technical solution of the present invention, and give detailed implementation manners and specific operation procedures, but the scope of the present invention is not limited to the following embodiments.
As shown in fig. 1, the present embodiment processes an original gray image of 32 × 32 pixels as follows.
Step one, contrast enhancement is performed on the original gray-level image so that the gray values of all pixels are distributed between 0 and 255; the enhanced images are shown in fig. 3 (a) and fig. 5 (a). The enhanced gray-scale image is then transformed with the two-dimensional discrete wavelet transform using the Mallat fast wavelet transform algorithm; the wavelet function is a biorthogonal wavelet, and two-level wavelet analysis is performed, giving 7 coefficient matrices in total: 1 low-frequency approximation coefficient matrix cA2 (14 × 14) and 6 high-frequency coefficient matrices, comprising 3 second-level wavelet coefficient matrices (14 × 14): horizontal cD2(h), vertical cD2(v) and diagonal cD2(d); and 3 first-level wavelet coefficient matrices (20 × 20): horizontal cD1(h), vertical cD1(v) and diagonal cD1(d).
Step two, performing hard thresholding on all 7 coefficient matrixes, wherein the threshold value delta =10 in the example.
Step three, the 7 hard-thresholded coefficient matrices are sampled separately, as follows. The low-frequency approximation coefficient matrix is sampled globally and uniformly, selecting 4 × 4 = 16 sampling points at the elements in columns 2, 6, 9 and 13 of rows 2, 6, 9 and 13. The first-level and second-level wavelet coefficient matrices use the central local uniform sampling method. Each second-level wavelet coefficient matrix directly takes the 4 × 4 block in the middle of the matrix, i.e. the elements in rows 6 to 9 and columns 6 to 9, giving 16 sampling points. Each first-level wavelet coefficient matrix first takes the 8 × 8 block in the middle of the matrix, i.e. rows 7 to 14 and columns 7 to 14, and then samples this 8 × 8 block in a uniformly staggered way, selecting the elements in columns 1 and 3 of rows 1, 3, 5 and 7, and the elements in columns 5 and 7 of rows 2, 4, 6 and 8, giving 4 × 4 = 16 sampling points; in the coordinates of the original first-level matrix these are the elements in columns 7 and 9 of rows 7, 9, 11 and 13, and in columns 11 and 13 of rows 8, 10, 12 and 14. The sampling patterns are shown in fig. 2, where each small square represents a coefficient element, gray squares are sampling points and white squares are non-sampling points. A total of 7 output matrices of 4 × 4 are obtained: the low-frequency approximation coefficient matrix cA2 is sampled as in fig. 2 (a), giving one 4 × 4 coefficient matrix; the second-level wavelet coefficients are sampled as in fig. 2 (b), giving three 4 × 4 coefficient matrices; and the first-level wavelet coefficients are sampled as in fig. 2 (c), giving three 4 × 4 coefficient matrices.
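The staggered level-1 indices can be checked mechanically. The enumeration below converts the text's 1-based rows and columns to 0-based indices and verifies that they give 16 distinct points inside the middle 8 × 8 block of the 20 × 20 matrix.

```python
# Staggered level-1 sampling of the embodiment, 0-based: columns 7 and 9 of
# rows 7, 9, 11, 13, plus columns 11 and 13 of rows 8, 10, 12, 14 (1-based).
stagger = [(r, c) for r in range(6, 14, 2) for c in (6, 8)] \
        + [(r, c) for r in range(7, 15, 2) for c in (10, 12)]

cD1 = [[r * 20 + c for c in range(20)] for r in range(20)]
sampled = [cD1[r][c] for r, c in stagger]  # the 16 coefficients of one channel
```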
And step four, taking the sampling result of the low-frequency approximation coefficient matrix as the output of the first channel. The wavelet energy is calculated for the remaining 6 first-level and second-level wavelet coefficient matrices, and the three sampling coefficient matrices with the largest energy are selected as the outputs of the remaining three channels. Thus an output of 4 channels, with 4 × 4 output values per channel, is obtained. The sampled coefficient matrices are displayed as a gray-scale image, as shown in fig. 3 (b), where each large square represents the output of one channel and each small square represents one coefficient in that channel; the lighter a small square, the larger the coefficient value, and the darker, the smaller.
And step five, correspondingly driving an impulse generator by the coefficient matrix of each output channel of the 4 channels obtained in the step four, wherein the impulse generator generates biphasic impulse square waves with different intensities, frequencies and pulse widths, stimulation currents are generated on different electrode arrays, the electrode arrays stimulate corresponding nerve cells, and the nerve cells can respond through electrical stimulation, so that visual illusions are formed in human brains, and images are restored.
Step six, carrying out a simulation of the visual system on the obtained 4-channel output.
To describe the pattern restored in the human brain, the obtained 4-channel output is fed to a simulation of the visual system, whose flow is shown in fig. 4. The 16 output coefficients of each channel are first interpolated: bilinear interpolation is applied to the output sampling coefficients 1 of the first channel to restore a 14 × 14 low-frequency approximation coefficient matrix cA2, while zero-padding interpolation, the inverse of the sampling rule, is applied to the output sampling coefficients 2, 3 and 4 of the second, third and fourth channels. Each restored matrix is then convolved with the two-dimensional wavelet basis of its level (the two-dimensional wavelet basis is the elementary building block of the two-dimensional wavelet; each wavelet coefficient corresponds to one basis function), and all results are superposed to obtain a restored simulation image of 32 × 32 pixels. The result is shown in fig. 5 (b); the original image for comparison is shown in fig. 5 (a).
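A hedged sketch of the bilinear step only (zero-padding of the other channels is omitted): standard bilinear interpolation stretches the 4 × 4 first-channel sample to a 14 × 14 matrix while preserving the corner values.

```python
# Plain bilinear interpolation of a square matrix m to out x out.
def bilinear_resize(m, out):
    n = len(m)
    scale = (n - 1) / (out - 1)
    result = []
    for i in range(out):
        y = i * scale
        y0 = min(int(y), n - 2)   # clamp so y0 + 1 stays inside the matrix
        fy = y - y0
        row = []
        for j in range(out):
            x = j * scale
            x0 = min(int(x), n - 2)
            fx = x - x0
            row.append(m[y0][x0] * (1 - fy) * (1 - fx)
                       + m[y0][x0 + 1] * (1 - fy) * fx
                       + m[y0 + 1][x0] * fy * (1 - fx)
                       + m[y0 + 1][x0 + 1] * fy * fx)
        result.append(row)
    return result

small = [[0, 1, 2, 3] for _ in range(4)]
big = bilinear_resize(small, 14)  # 14x14 ramp rising smoothly from 0 to 3
```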
This embodiment combines the characteristics of human visual perception, draws on the successful experience of the cochlear implant, applies the two-dimensional discrete wavelet transform, finally restores a 32 × 32 pixel input image in the human brain through the stimulation of 4 groups of electrodes on the retina or optic nerve, and simulates the resulting visual illusion by the simulation method, thereby achieving the aim of the embodiment.

Claims (10)

1. An artificial visual image processing method based on wavelet transformation is characterized by comprising the following steps:
performing two-dimensional discrete wavelet transform and two-level wavelet analysis on an acquired image to obtain a wavelet coefficient;
step two, carrying out hard thresholding treatment on the wavelet coefficient obtained in the step one;
step three, sampling the coefficient after thresholding in the step two;
step four, comparing and selecting the energy of the sampled high-frequency coefficients in the step three, selecting three sampling coefficient matrixes with the maximum energy as the outputs of the remaining three channels, and obtaining the output result of 4 multiplied by 4 output values of each channel of 4 channels;
and step five, adopting the 4-channel output in the step four to drive a pulse generator to generate pulse waves, stimulating the optic nerve cells through an electrode array, and forming visual hallucinations in the human brain.
2. The method for processing an artificial visual image based on wavelet transform as claimed in claim 1, wherein said two-dimensional discrete wavelet transform is characterized in that: the two-dimensional discrete wavelet transform is performed using the Mallat fast wavelet transform algorithm, and two-level wavelet analysis is performed on the image to obtain the wavelet coefficients.
3. A wavelet transform-based artificial visual image processing method as claimed in claim 1 or 2, wherein said wavelet is a two-dimensional biorthogonal wavelet.
4. The method for processing an artificial visual image based on wavelet transform as claimed in claim 1 or 2, wherein said wavelet coefficients refer to: seven wavelet coefficient matrices obtained by performing two-level wavelet analysis on an image of 32 × 32 pixels, namely a low-frequency approximation coefficient matrix cA2 (14 × 14) and six high-frequency coefficient matrices, the six high-frequency coefficient matrices including three second-level wavelet coefficient matrices (14 × 14): horizontal cD2(h), vertical cD2(v) and diagonal cD2(d); and three first-level wavelet coefficient matrices (20 × 20): horizontal cD1(h), vertical cD1(v) and diagonal cD1(d).
5. The method for processing artificial visual images based on wavelet transform as claimed in claim 1, wherein said hard thresholding means that, for the wavelet coefficients obtained by the wavelet transform, coefficients distributed near zero are considered insufficient to cause a nerve-cell response and are set to 0, namely:

w' = w, if |w| ≥ δ; w' = 0, if |w| < δ
where δ is the selected threshold and w is the wavelet coefficient.
6. The method for processing an artificial visual image based on wavelet transform as claimed in claim 1, wherein said sampling specifically comprises: for the low-frequency approximation coefficients, uniformly sampling a 4 × 4 coefficient matrix as the output of the low-frequency channel by a global uniform sampling method, according to the distribution of retinal ganglion cells and the human visual mechanism; and for the first-level and second-level wavelet coefficients, using a central local uniform sampling method, in which the first-level horizontal, vertical and diagonal coefficient matrices, of size 20 × 20, have 4 × 4 wavelet coefficients uniformly sampled from the 8 × 8 block in their middle, and the second-level horizontal, vertical and diagonal coefficient matrices, of size 14 × 14, have their middle 4 × 4 block sampled directly.
7. The method for processing an artificial visual image based on wavelet transform as claimed in claim 1, wherein said energy comparison selection is to use the sampling result of the low frequency approximate coefficient matrix as the output of the first channel, calculate the wavelet energy for the remaining 6 high frequency coefficient matrices, and select the three sampling coefficient matrices with the maximum energy as the output of the remaining three channels.
8. The method for processing an artificial visual image based on wavelet transform as claimed in claim 7, wherein said wavelet energy E is calculated by the formula:
E = Σi Σj w(i, j)²
where w (i, j) is the wavelet coefficient for the sample point.
9. A wavelet transform-based artificial visual image processing method as claimed in claim 1, wherein the size of the output matrix of said 4 channels, each channel, is 4 x 4.
10. The method for processing artificial visual images based on wavelet transform as claimed in claim 1, wherein said pulse waves are generated by the group of pulse generators driven by the coefficient matrix of each output channel; the pulse waves stimulate the corresponding nerve cells through the electrode array implanted on the human retina or optic nerve, and the electrically stimulated nerve cells respond, thereby forming visual illusions in the human brain and restoring the image.
CNA2007100448982A 2007-08-16 2007-08-16 Artificial visible sensation image processing process based on wavelet transforming Pending CN101114336A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2007100448982A CN101114336A (en) 2007-08-16 2007-08-16 Artificial visible sensation image processing process based on wavelet transforming

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2007100448982A CN101114336A (en) 2007-08-16 2007-08-16 Artificial visible sensation image processing process based on wavelet transforming

Publications (1)

Publication Number Publication Date
CN101114336A true CN101114336A (en) 2008-01-30

Family

ID=39022669

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2007100448982A Pending CN101114336A (en) 2007-08-16 2007-08-16 Artificial visible sensation image processing process based on wavelet transforming

Country Status (1)

Country Link
CN (1) CN101114336A (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8873879B2 (en) 2010-11-15 2014-10-28 National Institute Of Japan Science And Technology Agency Illusion image generating apparatus, medium, image data, illusion image generating method, printing medium manufacturing method, and program
RU2535430C1 (en) * 2010-11-15 2014-12-10 Нэшнл Инститьют Оф Джапэн Сайнс Энд Текнолоджи Эйдженси Illusion image generating apparatus, medium, image data, illusion image generating method, printing medium manufacturing method and programme
US9418452B2 (en) 2010-11-15 2016-08-16 National Institute Of Japan Science And Technology Agency Print medium displaying illusion image and non-transitory computer-readable recording medium holding illusion image data
CN102509283A (en) * 2011-09-30 2012-06-20 西安理工大学 DSP (digital signal processor)-based target perceiving and encoding method facing optic nerve prosthesis
CN104471389A (en) * 2012-08-24 2015-03-25 富士施乐株式会社 Image processing device, program, image processing method, computer-readable medium, and image processing system
US9704017B2 2012-08-24 2017-07-11 Fuji Xerox Co., Ltd. Image processing device, program, image processing method, computer-readable medium, and image processing system
CN109157738A (en) * 2018-07-23 2019-01-08 浙江诺尔康神经电子科技股份有限公司 Artificial retina amplitude-frequency based on deep vision regulates and controls method and system
CN109157738B (en) * 2018-07-23 2022-02-15 浙江诺尔康神经电子科技股份有限公司 Artificial retina amplitude modulation control method and system based on depth vision

Similar Documents

Publication Publication Date Title
Perrinet et al. Coding static natural images using spiking event times: do neurons cooperate?
CN106137532B (en) A kind of image processing method
CN109784242A (en) EEG Noise Cancellation based on one-dimensional residual error convolutional neural networks
CN109924990A (en) A kind of EEG signals depression identifying system based on EMD algorithm
KR101905053B1 (en) Method and device for controlling a device for aiding vision
CN107194426A (en) A kind of image-recognizing method based on Spiking neutral nets
Oweiss A systems approach for data compression and latency reduction in cortically controlled brain machine interfaces
US20150112237A1 (en) Device for rehabilitating brain mechanism of visual perception using complementary sensual stimulations
CN105814911B (en) The feedback of energy signal for nerve stimulation gates
CN111584029B (en) Electroencephalogram self-adaptive model based on discriminant confrontation network and application of electroencephalogram self-adaptive model in rehabilitation
CN101114336A (en) Artificial visible sensation image processing process based on wavelet transforming
CN109247917A (en) A kind of spatial hearing induces P300 EEG signal identification method and device
CN114648048B (en) Electrocardiosignal noise reduction method based on variational self-coding and PixelCNN model
CN112233199A (en) fMRI visual reconstruction method based on discrete characterization and conditional autoregression
Li et al. Image recognition with a limited number of pixels for visual prostheses design
CN109034015A (en) The demodulating system and demodulating algorithm of FSK-SSVEP
CN113128353B (en) Emotion perception method and system oriented to natural man-machine interaction
Ming-Ai et al. Feature extraction and classification of mental EEG for motor imagery
Lazar et al. Reconstructing natural visual scenes from spike times
CN100481123C (en) Implementation method of retina encoder using space time filter
Granley et al. Adapting brain-like neural networks for modeling cortical visual prostheses
CN110796599A (en) Channel weighting generation type confrontation network method for retina image super-resolution reconstruction
Wang et al. NeuroSEE: A Neuromorphic Energy-Efficient Processing Framework for Visual Prostheses
Granley et al. A hybrid neural autoencoder for sensory neuroprostheses and its applications in bionic vision
CN112006682B (en) Left-right hand motor imagery electroencephalogram signal classification method based on multi-channel frequency characteristic customization

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Open date: 20080130