CN100481123C - Implementation method of retina encoder using space time filter - Google Patents

Implementation method of retina encoder using space time filter

Info

Publication number
CN100481123C
CN100481123C CNB2007100378833A CN200710037883A
Authority
CN
China
Prior art keywords
space time
time filter
parameter
output
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2007100378833A
Other languages
Chinese (zh)
Other versions
CN101017535A (en)
Inventor
朱贻盛
邱意弘
牛希娴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CNB2007100378833A priority Critical patent/CN100481123C/en
Publication of CN101017535A publication Critical patent/CN101017535A/en
Application granted granted Critical
Publication of CN100481123C publication Critical patent/CN100481123C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Processing (AREA)
  • Prostheses (AREA)
  • Image Analysis (AREA)

Abstract

This invention relates to a retina coding method in the field of computer applications, comprising the following steps: a sample image is fed into a spatiotemporal filter, and the resulting output serves as the input used to train a BP artificial neural network and determine its weights; any one of the sample images is then input, a set of filter parameters is chosen at random, and the parameters are optimized within their admissible ranges by a particle swarm or evolution strategy method; through repeated iteration the output image converges to the input image, and once the filter parameters are determined, the impulse stimulation the filter outputs is the retina code corresponding to the input image.

Description

Implementation method of a retina encoder using a spatiotemporal filter
Technical field
The present invention relates to a method in the field of computer application technology, specifically an implementation method for a retina encoder using a spatiotemporal filter.
Background technology
A retina encoder is an important component of a retinal visual prosthesis. It is proposed, starting from known physiological knowledge and experimental data, to solve the correspondence between an input image and the stimulation pulse train. Vision is an important channel through which humans acquire information, yet many people worldwide suffer visual impairment of varying degree and cannot acquire information through vision. According to World Health Organization (WHO) statistics, approximately 40 million people worldwide are blind, and about another 100 million have visual impairment or weakening of varying degree. Patients who have lost sight because of retinal disease often retain some intact retinal cells and optic nerve cells. One may therefore attempt to design a visual prosthesis on the retina, converting visual information into electrical stimulation of the intact parts of the retina to partially rebuild vision. How to partially or fully restore visual function in the blind has become a hot research topic at home and abroad.
Although current research on modeling retinal visual prostheses has achieved notable results and has even been tested on animals and humans, these models still remain simulations of retinal tissue structure; the key problem of encoding retinal signal processing has not been solved, so the blind cannot yet be helped to partially recover vision effectively.
At present, retinal modeling approaches at home and abroad include artificial neural network models, statistical models based on experimental data, formula-based simulations using mathematical methods, and hardware models that simulate retinal structure with CMOS chips.
A literature search of the prior art found that ECKMILLER et al. published "Tunable retina encoders for retina implants: why and how" in the Journal of Neural Engineering (February 2005, pp. 91-104). That article proposes an adjustable retina coding method: the system is divided into two modules, the retina and the central visual system; images are encoded with a spatiotemporal filter; and moving circles are used as samples to train the state parameters of the two modules. Its deficiency is that the spatiotemporal filter in ECKMILLER's retina coding method is still a reconstruction of the image, and does not directly connect the image with the impulse stimulation.
Summary of the invention
The object of the invention is to address the deficiencies of the prior art and solve the encoding relation between the input image and the stimulation pulses by providing an implementation method for a retina encoder using a spatiotemporal filter. The method simulates retinal ganglion cell signal processing with a center-surround spatiotemporal filter, simulates the brain's processing of visual signals with a BP artificial neural network that converts nerve impulses back into an image, and adjusts the spatiotemporal filter parameters with improved particle swarm and evolution strategy parameter optimization methods to reach the optimal output.
The present invention is achieved by the following technical solution. First, a sample image is input to the spatiotemporal filter, and the resulting output is used as the input for training the BP artificial neural network, which determines the network weights. Then any one of the sample images is input, a set of spatiotemporal filter parameters is chosen at random, and parameter optimization is carried out within the parameter ranges by a particle swarm or evolution strategy method; through repeated iteration, the output image finally converges to the input image. Once the parameters of the spatiotemporal filter are determined, the impulse stimulation it outputs is the retina code corresponding to the input image.
The spatiotemporal filter is a center-surround spatiotemporal filter with good spatial and temporal resolution. Its space-time inseparability and properties such as the surround's delay relative to the center simulate retinal signal processing well. Most importantly, unlike the models proposed by other groups such as ECKMILLER, this spatiotemporal filter directly relates the input image to the output nerve impulses of the retinal ganglion cells, truly realizing a retinal code. The filter has 7 parameters: 3 temporal parameters λ_c, λ_s and d, which respectively describe the times at which the center and surround outputs reach their peaks and the delay of the surround relative to the center; 2 spatial parameters σ_c and σ_s, which describe the extents of the receptive field center and surround; and 2 weighting parameters α_c and α_s, which describe the weights of the center and surround. Its mathematical expression is:
CS(x, t) = α_c K(t, λ_c) Σ{ G(x, σ_c) pix(x) } − α_s K(t − d, λ_s) Σ{ G(x, σ_s) pix(x) }

K(t, λ) = λ t exp(−λ t) if t ≥ 0, and 0 if t < 0

G(x, σ) = (2π σ²)⁻¹ exp(−x² / (2σ²))
The spatial extent of the filter's surround is larger than that of the center, i.e. σ_c < σ_s; the temporal response of the surround is delayed relative to that of the center, i.e. λ_s < λ_c and d > 0. The retina encoder realized by the present invention uses 9 spatiotemporal filters, each simulating one retinal ganglion cell. The receptive fields of the 9 filters may overlap; they process a 729-pixel image and convert the 729-pixel input image into pulse outputs.
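As an illustration, the center-surround response above can be sketched in NumPy for a one-dimensional row of pixels. This is a minimal sketch, not the patent's implementation; the default parameter values are borrowed from the embodiment's example initial parameters, and the function names are illustrative:

```python
import numpy as np

def K(t, lam):
    # Temporal kernel: lam * t * exp(-lam * t) for t >= 0, and 0 for t < 0
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0, lam * t * np.exp(-lam * t), 0.0)

def G(x, sigma):
    # Spatial Gaussian kernel with normalization (2*pi*sigma^2)^-1
    return np.exp(-x**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def cs_response(pix, x, t, a_c=0.5, a_s=0.5, s_c=1.0, s_s=3.0,
                l_c=22.0, l_s=19.0, d=0.06):
    # CS(x,t): weighted center term minus the delayed, spatially wider surround term
    center = a_c * K(t, l_c) * np.sum(G(x, s_c) * pix)
    surround = a_s * K(t - d, l_s) * np.sum(G(x, s_s) * pix)
    return center - surround
```

For 0 < t < d the surround kernel has not yet switched on, so a uniform bright patch first drives a positive, center-dominated response before the surround subtracts from it.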
The present invention simulates the brain's processing of visual signals with a BP artificial neural network. Until human experimentation is mature, the network takes the place of the brain in converting nerve impulses back into an image; it can be described as the inverse mapping of the spatiotemporal filter. The BP artificial neural network consists of three layers: an input layer, a hidden layer and an output layer. In the retina encoder realized by the present invention, the input layer of the BP network has 279 neurons, corresponding to the pulse trains output by the spatiotemporal filters; the hidden layer has 35 neurons; and the 729 neurons of the output layer correspond to the 729-pixel output image. Using the BP artificial neural network as the inverse mapping of the spatiotemporal filter allows the sample space to be chosen freely: diverse, large sets of sample images can be used for training, instead of being limited to the moving circles used by ECKMILLER.
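A minimal sketch of the 279-35-729 network's forward pass follows. The weights here are random and untrained, and the sigmoid activation is an assumption (the patent does not specify the activation function); it only shows the shape of the inverse mapping from pulse-train features to a 729-pixel image:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Layer sizes from the text: 279 pulse-train inputs, 35 hidden, 729 output pixels
W1, b1 = rng.normal(0.0, 0.1, (35, 279)), np.zeros(35)
W2, b2 = rng.normal(0.0, 0.1, (729, 35)), np.zeros(729)

def bp_forward(pulse_features):
    # Map a 279-element pulse-train feature vector to a 729-pixel image
    h = sigmoid(W1 @ pulse_features + b1)
    return sigmoid(W2 @ h + b2)
```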
Parameter optimization within the spatiotemporal filter parameter ranges with the evolution strategy method proceeds as follows. The optimal parameter vector is found through an error function, where the error function F(z_i) is the Euclidean distance between the output image and the input image, and the parameter vector comprises the 63 parameters of the 9 spatiotemporal filters. The parameter ranges are: 0 < σ_c + 0.1 < σ_s ≤ 3, where σ_c and σ_s correspond to the center and surround receptive field extents in pixels; 0.5 ≤ α_c < 1 with α_c + α_s = 1, where α_c and α_s are the center and surround weights; 17 ≤ λ_s < λ_c ≤ 25, where λ_c and λ_s relate to the times at which the center and surround output pulses reach their peaks; and 0.04 ≤ d ≤ 0.08, where d relates to the delay of the surround relative to the center. First, initial parent vectors z_i, i = 1, ..., P, are chosen at random within the parameter ranges. Offspring vectors are produced by adding a zero-mean Gaussian random variable to each element of a parent vector: x_i = z_i + N(0, σ_i), i = 1, ..., P, with σ_i = F(z_i)/300. Tying the variance of the Gaussian variable to the error function accelerates parameter convergence. Comparing the error functions F(z_i) and F(x_i), i = 1, ..., P, the vector with the smaller error is selected as the parent for the next iteration, and iteration continues until the stopping condition is satisfied. The advantages of the evolution strategy method are that it is easy to implement, fast, and places few restrictions on the parameter ranges; however, its randomness is large and it lacks directionality in convergence. For simple sample images it converges quickly with good results.
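The mutate-then-select loop above can be sketched as follows. This is a simplified single-parent sketch (the text runs P parents in parallel); `evolve`, `error_fn` and the toy target in the usage are illustrative names, while the σ = F(z)/300 coupling follows the text:

```python
import numpy as np

def evolve(z0, error_fn, n_iter=200, scale=300.0, seed=0):
    rng = np.random.default_rng(seed)
    z = np.asarray(z0, dtype=float)
    fz = error_fn(z)
    for _ in range(n_iter):
        sigma = fz / scale                        # mutation std tied to current error
        x = z + rng.normal(0.0, sigma, z.shape)   # offspring = parent + N(0, sigma)
        fx = error_fn(x)
        if fx < fz:                               # keep the lower-error vector as parent
            z, fz = x, fx
    return z, fz
```

With a toy error function such as the Euclidean distance to a fixed target vector, the error decreases monotonically, since a parent is only replaced by a better offspring.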
Parameter optimization within the spatiotemporal filter parameter ranges with the particle swarm method proceeds as follows. It is realized through a fitness function, which is the Euclidean distance between the output image and the input image. The particle swarm method used in the retina encoder realized by the present invention forms a swarm of 6 particles, each comprising the 63 parameters of the 9 spatiotemporal filters. The parameter ranges are: 0 < σ_c + 0.4 < σ_s ≤ 3, where σ_c and σ_s correspond to the center and surround coverage in pixels; 0.5 ≤ α_c < 0.8 with α_c + α_s = 1, where α_c and α_s are the center and surround weights; 17 ≤ λ_c ≤ 25 and 14 ≤ λ_s < λ_c, where λ_c and λ_s relate to the times at which the center and surround output pulses reach their peaks; and 0.04 ≤ d ≤ 0.08, where d relates to the delay of the surround relative to the center. First, the velocities and positions of all particles in the swarm are initialized within the parameter ranges. All particles are evaluated with the fitness function, and the individual extremum p_i of each particle and the global extremum l_i of the swarm are updated according to the fitness function. The individual extremum is the best vector a single particle has found from the start of the search up to the current iteration; the global extremum is the best vector the whole swarm has found. Particle velocities and positions are then iterated according to the improved particle swarm formula, which combines the traditional particle swarm method with the evolution strategy: v_i = error × randn × (p_i − x_i) + error × randn × (l_i − x_i) + error × randn, and x_i(t+1) = x_i(t) + v_i, where error is the error between the output image obtained via the fitness function and the original image, and randn is a Gaussian random variable. Iteration continues until the stopping condition is satisfied. The advantage of the improved particle swarm method is its memory: the result of each search is retained, the search speed is determined by the individual and global extrema, and the search has good directionality. Unlike the traditional particle swarm method, and to better suit the retina encoder realized here, the particle velocity in the improved method also depends on the fitness function and a Gaussian random variable; this relaxes the requirements on the parameter ranges and accelerates convergence.
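The improved velocity update can be sketched as a single swarm step. Here `p_best` and `g_best` stand for the individual and global extrema (p_i and l_i in the text), `error` is the current fitness value, and the function name is illustrative; this is a hypothetical sketch of the stated formula, not the authors' code:

```python
import numpy as np

def improved_pso_step(x, v, p_best, g_best, error, rng):
    # v_i = error*randn*(p_i - x_i) + error*randn*(l_i - x_i) + error*randn
    v_new = (error * rng.standard_normal(x.shape) * (p_best - x)
             + error * rng.standard_normal(x.shape) * (g_best - x)
             + error * rng.standard_normal(x.shape))
    # x_i(t+1) = x_i(t) + v_i
    return x + v_new, v_new
```

A consequence of scaling every term by the error is that when the error reaches zero all terms vanish, so particles stop moving once a perfect reconstruction is found.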
The present invention combines the center-surround spatiotemporal filter, the BP artificial neural network and the parameter optimization methods; the parts interlock closely and can effectively simulate retinal signal processing. Unlike existing retina encoders, it can encode an image and directly produce the corresponding stimulation pulses. The combined use of the BP artificial neural network and the parameter optimization methods allows this retina encoder to be trained on a diverse, large sample image space, to extend the sample image space flexibly, and to enlarge the range of the spatiotemporal filter parameter space. The flexibility and tunability of the retina encoder realized by the present invention better accommodate individual differences and variations of the input image, and the encoder can be embedded as an important part of a retinal visual prosthesis.
Description of drawings
Fig. 1 is a schematic diagram of the pixel region divided into 9 blocks, with a receptive field center chosen at random in each block.
Fig. 2 is a schematic diagram of the spatial filtering output waveform of the spatiotemporal filter in the retina encoder realized by the present invention; the abscissa is the distance from the receptive field center, and the ordinate is the pulse frequency.
Fig. 3 is a schematic diagram of the temporal filtering output waveform of the spatiotemporal filter in the retina encoder realized by the present invention; the abscissa is time, and the ordinate is the pulse frequency.
Fig. 4 shows part of the selected sample images; the selected sample images are squares and rectangles of different sizes and positions.
Fig. 5 shows the results obtained in the embodiment of the invention with the evolution strategy parameter optimization method.
Fig. 6 shows the results obtained in the embodiment of the invention with the improved particle swarm parameter optimization method.
Fig. 7(a) is a sample image; Figs. 7(b)-(d) are the corresponding spatiotemporal filter outputs for Fig. 7(a).
Embodiment
The following describes an embodiment of the invention in detail. The embodiment gives a detailed implementation and process, carried out on the premise of the technical solution of the present invention, but the protection scope of the invention is not limited to the following embodiment.
The embodiment consists of two parts: the BP artificial neural network training process and the parameter optimization process. The spatiotemporal filter, which simulates retinal ganglion cell signal processing, runs through both.
1. As shown in Fig. 1, the 729-pixel region is divided into 9 blocks, and a receptive field center is chosen at random within each block.
2. A set of initial spatiotemporal filter parameters is selected as vector a. Vector a contains the 63 parameter elements of the 9 spatiotemporal filters, 7 parameters per filter: 2 weighting parameters describing the weights of the receptive field center and surround; 2 spatial parameters describing the extents of the receptive field center and surround; and 3 temporal parameters describing the times at which the filter's output pulse frequency reaches its peak and the delay of the receptive field surround relative to the center. Figs. 2 and 3 show the spatial and temporal output pulses of the spatiotemporal filter, respectively. The abscissa of Fig. 2 is the distance of a pixel from the receptive field center, and the ordinate is the corresponding pulse frequency; the abscissa of Fig. 3 is time, and the ordinate is pulse frequency.
3. The sample images are passed through the spatiotemporal filter with parameter vector a, producing a 279 × 77 output matrix. As shown in Fig. 4, the sample images are squares and rectangles of different sizes and positions.
4. The output of the spatiotemporal filter is used as the input for training the BP artificial neural network. The input layer of the BP network used in the present invention has 279 neurons, the hidden layer has 35 neurons, and the output layer has 729 neurons. Bias and momentum terms are added to this BP network to make it converge faster.
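The momentum term mentioned for the BP network can be sketched with a standard momentum update rule; the patent states that bias and momentum are added but gives no formula, so the rule below, and the learning rate `lr` and momentum coefficient `mu`, are conventional assumptions:

```python
def momentum_update(w, grad, vel, lr=0.1, mu=0.9):
    # Momentum: the velocity accumulates a decaying sum of past gradients,
    # smoothing the descent direction and speeding up convergence
    vel = mu * vel - lr * grad
    return w + vel, vel
```

As a usage example, repeatedly applying the rule to f(w) = w² (whose gradient is 2w) drives w toward the minimum at 0 with damped oscillation.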
5. An image x is chosen arbitrarily from the sample images.
6. The evolution strategy or the improved particle swarm parameter optimization method is used to optimize the parameters.
When using the evolution strategy method, a parameter vector containing 63 elements is chosen, where each spatiotemporal filter contributes 7 parameters: σ_c and σ_s correspond to the extents of the receptive field center and surround, α_c and α_s to the center and surround weights, λ_c and λ_s to the times at which the center and surround outputs reach their peaks, and d to the delay of the surround relative to the center. The parameter ranges are: 0 < σ_c + 0.1 < σ_s ≤ 3, 0.5 ≤ α_c < 1, α_c + α_s = 1, 17 ≤ λ_s < λ_c ≤ 25, 0.04 ≤ d ≤ 0.08. A set of parameters within these ranges initializes the parent parameter vector. At each iteration, a zero-mean Gaussian random variable is added to each element of the parent parameter vector b to produce the offspring parameter vector b′. The image x is passed through the spatiotemporal filters with parameter vectors b and b′ respectively, and their outputs are passed through the trained BP network to obtain output images y and y′. The output images y and y′ are compared with the input image x, and the parameter vector corresponding to the image with the smaller error function is chosen as the parent for the next iteration. Iteration continues until the stopping condition is satisfied. The variance of the Gaussian variable is tied to the error function, which is the Euclidean distance between the input and output images. As shown in Fig. 5, Figs. 5(a) and (c) are two input images, and (b) and (d) are the output images obtained with the evolution strategy parameter optimization method. The initial parameters selected in the corresponding simulation were: σ_c = 1, σ_s = 3, α_c = 0.5, α_s = 0.5, λ_c = 22, λ_s = 19, d = 0.06; the parameters used to initialize the parent parameter vector were: σ_c = 0.5, σ_s = 2.5, α_c = 0.6, α_s = 0.4, λ_c = 24, λ_s = 20, d = 0.07.
When using the improved particle swarm method, 6 particles form the swarm. Each particle's position vector is a spatiotemporal filter parameter vector containing the 63 parameter elements of the 9 filters, 7 parameters per filter: σ_c and σ_s correspond to the extents of the receptive field center and surround, α_c and α_s to the center and surround weights, λ_c and λ_s to the times at which the center and surround outputs reach their peaks, and d to the delay of the surround relative to the center. The parameter ranges are: 0 < σ_c + 0.4 < σ_s ≤ 3, 0.5 ≤ α_c < 0.8, α_c + α_s = 1, 17 ≤ λ_c ≤ 25, 14 ≤ λ_s < λ_c, 0.04 ≤ d ≤ 0.08. First, the velocities and positions of all particles in the swarm are initialized within the parameter ranges; each particle position vector is a spatiotemporal filter parameter vector. The image x is passed through the spatiotemporal filters given by the 6 particle positions, and their outputs are passed through the trained BP network to obtain 6 output images. All particles are evaluated with the fitness function, and the individual extremum p_i of each particle and the global extremum l_i of the swarm are updated according to the fitness function. The fitness function is the Euclidean distance between the output image and the input image; the individual extremum is the best vector a single particle has found from the start of the search up to the current iteration, and the global extremum is the best vector the whole swarm has found. Particle velocities and positions are then updated according to the formula v_i = error × randn × (p_i − x_i) + error × randn × (l_i − x_i) + error × randn, x_i(t+1) = x_i(t) + v_i, where error is the fitness function value and randn is a Gaussian random variable. Iteration continues until the stopping condition is satisfied. As shown in Fig. 6, Figs. 6(a) and (c) are two input images, and (b) and (d) are the output images obtained with the improved particle swarm parameter optimization method. The initial parameters selected in the corresponding simulation were: σ_c = 1, σ_s = 3, α_c = 0.5, α_s = 0.5, λ_c = 22, λ_s = 19, d = 0.06; the parameters used to initialize the particles were: σ_c = 0.5, σ_s = 2.5, α_c = 0.6, α_s = 0.4, λ_c = 24, λ_s = 20, d = 0.07.
7. After satisfactory spatiotemporal filter parameters are found by the parameter optimization method, the image x is passed through the spatiotemporal filter, and the stimulation pulse train it outputs is the retina code of the corresponding image. As shown in Fig. 7, Figs. 7(b)-(d) show, after the optimal parameters were found, the pulse outputs obtained by passing the sample image of Fig. 7(a) through the spatiotemporal filters with the optimal parameters. The retina encoder designed in this embodiment comprises 9 spatiotemporal filters; for this image, 3 of them respond and the other 6 produce no output. Figs. 7(b)-(d) are the outputs of the 3 responding filters; the abscissa is time, and the ordinate is pulse frequency.
As seen from the above embodiment, the present invention uses a spatiotemporal filter to simulate the retinal signal processing process and uses evolution strategy and particle swarm methods to find optimal parameters, realizing the mapping between images and stimulation pulse trains and providing an encoder foundation for the realization of an artificial visual prosthesis.

Claims (5)

1. A retina encoder implementation method using a spatiotemporal filter, characterized in that: first, a sample image is input to the spatiotemporal filter, and the resulting output is used as the input for training a BP artificial neural network, which determines the network weights; then any one of the sample images is input, a set of spatiotemporal filter parameters is chosen at random, and parameter optimization is carried out within the parameter ranges by a particle swarm or evolution strategy method; through repeated iteration, the output image finally converges to the input image; once the parameters of the spatiotemporal filter are determined, the impulse stimulation it outputs is the retina code corresponding to the input image;
the spatiotemporal filter directly relates the input image to the output nerve impulses of the retinal ganglion cells, and has 7 parameters: 3 temporal parameters λ_c, λ_s and d, which respectively describe the times at which the center and surround outputs reach their peaks and the delay of the surround relative to the center; 2 spatial parameters σ_c and σ_s, which describe the extents of the receptive field center and surround; and 2 weighting parameters α_c and α_s, which describe the weights of the center and surround; its mathematical expression is:
CS(x, t) = α_c K(t, λ_c) Σ{ G(x, σ_c) pix(x) } − α_s K(t − d, λ_s) Σ{ G(x, σ_s) pix(x) }

K(t, λ) = λ t exp(−λ t) if t ≥ 0, and 0 if t < 0

G(x, σ) = (2π σ²)⁻¹ exp(−x² / (2σ²))
the spatial extent of the filter's surround is larger than that of the center, i.e. σ_c < σ_s; the temporal response of the surround is delayed relative to that of the center, i.e. λ_s < λ_c and d > 0;
parameter optimization within the spatiotemporal filter parameter ranges with the evolution strategy method is specifically: the optimal parameter vector is found through an error function, where the error function F(z_i) is the Euclidean distance between the output image and the input image, and the parameter vector comprises the 63 parameters of the 9 spatiotemporal filters; first, initial parent vectors z_i, i = 1, ..., P, are chosen at random within the parameter ranges; offspring vectors are produced by adding a zero-mean Gaussian random variable to each element of a parent vector: x_i = z_i + N(0, σ_i), i = 1, ..., P, with σ_i = F(z_i)/300, and tying the variance of the Gaussian variable to the error function accelerates parameter convergence; comparing the error functions F(z_i) and F(x_i), i = 1, ..., P, the vector with the smaller error is selected as the parent for the next iteration, and iteration continues until the stopping condition is satisfied;
parameter optimization within the spatiotemporal filter parameter ranges with the particle swarm method is specifically: it is realized through a fitness function, which is the Euclidean distance between the output image and the input image; the swarm is formed by 6 particles, each comprising the 63 parameters of the 9 spatiotemporal filters; first, the velocities and positions of all particles in the swarm are initialized within the parameter ranges; all particles are evaluated with the fitness function, and the individual extremum p_i of each particle and the global extremum l_i of the swarm are updated according to the fitness function, where the individual extremum is the best vector a single particle has found from the start of the search up to the current iteration and the global extremum is the best vector the whole swarm has found; particle velocities and positions are then iterated according to the improved particle swarm formula, which combines the traditional particle swarm method with the evolution strategy: v_i = error × randn × (p_i − x_i) + error × randn × (l_i − x_i) + error × randn, x_i(t+1) = x_i(t) + v_i, where error is the error between the output image obtained via the fitness function and the original image and randn is a Gaussian random variable; iteration continues until the stopping condition is satisfied.
2. The retina encoder implementation method using a spatiotemporal filter according to claim 1, characterized in that 9 spatiotemporal filters are used in total, each simulating one retinal ganglion cell; the receptive fields of the 9 spatiotemporal filters may overlap; they process a 729-pixel image and convert the 729-pixel input image into pulse outputs.
3. The retina encoder implementation method using a spatiotemporal filter according to claim 1, characterized in that the BP artificial neural network is used to simulate the brain's processing of visual signals and consists of three layers, namely an input layer, a hidden layer and an output layer; the input layer has 279 neurons, corresponding to the pulse trains output by the spatiotemporal filters; the hidden layer has 35 neurons; and the 729 neurons of the output layer correspond to the 729-pixel output image.
4. The retina encoder implementation method using a spatiotemporal filter according to claim 1, characterized in that the parameter ranges when carrying out parameter optimization within the spatiotemporal filter parameter ranges with the evolution strategy method are: 0 < σ_c + 0.1 < σ_s ≤ 3, 0.5 ≤ α_c < 1, α_c + α_s = 1, 17 ≤ λ_s < λ_c ≤ 25, 0.04 ≤ d ≤ 0.08.
5. The retina encoder implementation method using a spatiotemporal filter according to claim 1, characterized in that the parameter ranges when carrying out parameter optimization within the spatiotemporal filter parameter ranges with the particle swarm method are: 0 < σ_c + 0.4 < σ_s ≤ 3, 0.5 ≤ α_c < 0.8, α_c + α_s = 1, 17 ≤ λ_c ≤ 25, 14 ≤ λ_s < λ_c, 0.04 ≤ d ≤ 0.08.
CNB2007100378833A 2007-03-08 2007-03-08 Implementation method of retina encoder using space time filter Expired - Fee Related CN100481123C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2007100378833A CN100481123C (en) 2007-03-08 2007-03-08 Implementation method of retina encoder using space time filter

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2007100378833A CN100481123C (en) 2007-03-08 2007-03-08 Implementation method of retina encoder using space time filter

Publications (2)

Publication Number Publication Date
CN101017535A CN101017535A (en) 2007-08-15
CN100481123C true CN100481123C (en) 2009-04-22

Family

ID=38726533

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2007100378833A Expired - Fee Related CN100481123C (en) 2007-03-08 2007-03-08 Implementation method of retina encoder using space time filter

Country Status (1)

Country Link
CN (1) CN100481123C (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102858402B * 2010-02-26 2016-03-30 Cornell University Retinal prosthesis
CN101882239A (en) * 2010-05-28 2010-11-10 华东理工大学 Ethylene cracking severity modeling method based on expert knowledge and neutral network
CA2883091C (en) 2011-08-25 2020-02-25 Cornell University Retinal encoder for machine vision
EP3291780A4 (en) 2015-04-20 2019-01-23 Cornell University Machine vision with dimensional data reduction
CN107547457A * 2017-09-15 2018-01-05 Chongqing University A blind channel equalization approach based on a BP neural network with modified particle swarm optimization

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Tunable retina encoders for retina implants: why and how. Rolf Eckmiller, Dirk Neumann and Oliver Baruth. Journal of Neural Engineering, No. 2, 2005. *
Cellular neural network model for biological visual information processing. 翁贻方, 鞠磊, 王坚. Journal of Beijing Technology and Business University (Natural Science Edition), Vol. 25, No. 1, 2007. *
Research on an artificial neural system model for visual information processing. 韩力群, 涂序彦. Microcomputer Information (Embedded Systems and SoC), Vol. 22, No. 3-2, 2006. *

Also Published As

Publication number Publication date
CN101017535A (en) 2007-08-15

Similar Documents

Publication Publication Date Title
CN100481123C (en) Implementation method of retina encoder using space time filter
CN103732287B (en) The method and apparatus controlling visual acuity aid
Harmon et al. Neural modeling.
Luo et al. Real-time simulation of passage-of-time encoding in cerebellum using a scalable FPGA-based system
CN101770560B (en) Information processing method and device for simulating biological neuron information processing mechanism
CN107194426A An image recognition method based on spiking neural networks
CN108433721A (en) The training method and system of brain function network detection and regulation and control based on virtual reality
Shah et al. Computational challenges and opportunities for a bi-directional artificial retina
CN104545919A (en) Ultrasonic transcranial focusing method
CN109410149A (en) A kind of CNN denoising method extracted based on Concurrent Feature
Tagluk et al. Communication in nano devices: Electronic based biophysical model of a neuron
Nicolelis The true creator of everything: How the human brain shaped the universe as we know it
Volman et al. Generative modelling of regulated dynamical behavior in cultured neuronal networks
Wohrer Model and large-scale simulator of a biological retina, with contrast gain control
CN109620539A A device and method for directly inputting visual information to the brain's visual cortex
Wodlinger et al. Recovery of neural activity from nerve cuff electrodes
CN101860357A (en) Method for weight control and information integration by utilizing time encoding
CN101114336A (en) Artificial visible sensation image processing process based on wavelet transforming
Tóth et al. Autoencoding sensory substitution
CN109999435B (en) EMS-based fitness method and system
Suhaimi et al. Design of movement sequences for arm rehabilitation of post-stroke
Argüello et al. New trends in computational modeling: a neuroid-based retina model
Hayashida et al. Retinal circuit emulator with spatiotemporal spike outputs at millisecond resolution in response to visual events
Niu et al. Application of particle swarm system as a novel parameter optimization technique on spatiotemporal retina model
Zhang et al. Design of virtual interactive platform based on MI-BCI for rehabilitation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090422

Termination date: 20120308