CN103679670A - A PCNN multisource image fusion method based on an improved model

Info

Publication number: CN103679670A (application CN201210362080.6A); granted publication: CN103679670B
Authority: CN (China); original language: Chinese (zh)
Inventors: 宋亚军, 朱振福
Assignee (original and current): No.207 Institute, Second Academy of China Aerospace Science & Industry Group
Priority/filing date: 2012-09-25; publication dates: CN103679670A, 2014-03-26; CN103679670B, 2016-08-31
Legal status: Granted; Active
Classification: Image Processing (AREA)

Abstract

The invention relates to a PCNN multisource image fusion method based on an improved model. The improvements are: the feedback input of each neuron in the PCNN receives only the external stimulus input; the parameters of the linking field take the same values for all neurons; the parameters of the variable threshold function take the same values for all neurons; and a threshold lookup table and an index map are introduced. The threshold lookup table records the threshold corresponding to each network iteration; these thresholds are computed before the network runs, which avoids exponential operations during iteration and accelerates the network. The index map records the firing times of all pixels; it integrates spatially adjacent pixels of similar gray level in the input image and thus reflects the overall visual characteristics of the input image. By introducing the index map recording the firing times of all pixels and the threshold lookup table recording the threshold for each network iteration, and by adopting a fusion rule based on the index map, the method achieves better results than the conventional wavelet-transform fusion method.

Description

A PCNN multisource image fusion method based on an improved model
Technical field
The present invention relates to a PCNN multisource image fusion method based on an improved model, and in particular to one suitable for the simultaneous fusion of the visible-light, medium-wave infrared and long-wave infrared bands.
Background art
An artificial neural network is a computational model that attempts to imitate the information-processing mechanism of biological nervous systems. A neural network consists of many processing units, or nodes, which can be interconnected in various ways. Some scholars have already applied artificial neural networks to multisource image fusion. The neural networks currently used in image fusion mainly include bimodal neurons (Bimodal Neurons), the multi-layered perceptron (Multi-layered Perceptron) and the pulse-coupled neural network (Pulse-Coupled Neural Network, PCNN). The PCNN is a neural network proposed in recent years and is internationally referred to as the third-generation artificial neural network.
In 1981, E. A. Newman, P. H. Hartline et al. proposed six types of bimodal neurons (AND, OR, Visible-Enhanced-Infrared, Visible-Suppressed-Infrared, Infrared-Enhanced-Visible and Infrared-Suppressed-Visible) for the fusion of visible and infrared images. In 1995, Fechner and Godlewski proposed an image fusion method based on the multi-layered perceptron: a perceptron is trained to identify pixels of interest in FLIR (forward-looking infrared) images, which are then incorporated into the visible image. Since the 1990s, the studies by Eckhorn et al. of synchronized oscillations of nerve pulse trains in the visual cortex of cats and monkeys have yielded a mammalian neuron model, from which the pulse-coupled neural network model was developed. This model groups pixels that are adjacent in the two-dimensional image space and similar in gray level, which reduces local gray-level differences and bridges small local discontinuities in the image. In 1999, Broussard R. P. et al. proved the relation between the firing rate of PCNN neurons and image gray level, confirming the feasibility of the PCNN for image fusion. Based on this model, scholars have proposed various improved models and applied them to the fusion of various kinds of images.
At present, research on PCNN-based image fusion methods concentrates mainly on the following aspects:
Automatic selection of network parameters: the PCNN involves many parameters, and the value of each affects the final result; computing the key network parameters automatically with auxiliary methods can yield better results.
Improvements to the basic PCNN model: according to differences in intended function, processing object and approach, researchers have successively proposed different improved models.
A novel PCNN multisource image fusion method based on an improved model is therefore urgently needed.
Summary of the invention
The technical problem to be solved by the present invention is to provide a PCNN multisource image fusion method based on an improved model that improves the multisource image fusion effect and makes target features in the fused image more distinct, and is thus more conducive to target recognition.
To solve the above technical problem, the PCNN multisource image fusion method based on an improved model of the present invention comprises the following steps in order:
Step 1: perform pixel-level spatial registration on the three input original images A, B and C, ensuring that all three images have size X × Y;
Step 2: set the values of the network parameters W, V_L, β, V_θ, α_θ and Δt;
V_L and V_θ are the intrinsic potentials of L_ij[n] and θ_ij[n], respectively; θ_ij[n] is the dynamic threshold and L_ij[n] the linking input;
α_θ is the decay time constant of θ_ij[n]; Δt is the time sampling interval; β is the linking strength between synapses; Y_ij[n] is the PCNN pulse output and Y_kl[n−1] the previous pulse output; w_ijkl in the internal connection matrix W is the weighting coefficient of Y_kl[n−1] in L_ij[n];
n is the iteration number of the network, n = 1, 2, …, N−1, N, where N is the maximum number of iterations;
Step 3: find S_ij_max and S_ij_min in each input image, where S_ij_max < V_θ and S_ij_min > 0;
Step 4: obtain the maximum iteration number N of the network and the threshold lookup table LT(s), where s is the function variable of LT(s);
N = (t2 − t1)/Δt + 1
t1 = (1/α_θ) ln(V_θ / S_ij_max)
t2 = (1/α_θ) ln(V_θ / S_ij_min)
LT(s) = V_θ · exp(−α_θ (t2 − s·Δt))
where t1 and t2 are the self-firing times of the pixels with the maximum and minimum gray values in the image, respectively;
Step 5: run the model with the following formulas;
F_ij[n] = S_ij
L_ij[n] = V_L Σ w_ijkl Y_kl[n−1]
U_ij[n] = F_ij[n] (1 + β L_ij[n])
Y_ij[n] = 1 if U_ij[n] ≥ θ_ij[n], otherwise Y_ij[n] = 0
θ_ij[n] = V_θ for neurons that have already fired, otherwise θ_ij[n] = LT(N − n)
I_ij[n] = N − n for neurons that fire at iteration n
where U_ij[n] is the internal activity, Y_ij[n] the PCNN pulse output and I_ij[n] the index value; when n = 1, L_ij[1] = 0, U_ij[1] = F_ij[1] = S_ij and θ_ij[1] = LT(N−1) = S_ij_max, so the neurons whose feedback input equals S_ij_max fire spontaneously; after a neuron fires, its output is Y_ij[1] = 1, θ_ij[2] becomes V_θ, and the index value of the fired neuron is recorded as I_ij = N−1;
Continuing in this way, when the network reaches iteration n = N the threshold is θ_ij[N] = LT(0) = S_ij_min, the neurons whose feedback input is S_ij_min fire spontaneously, and their index value is recorded as I_ij = 0;
Step 6: obtain the index maps I_A, I_B and I_C of the three original images A, B and C;
When the absolute differences between the index values of corresponding pixels of I_A, I_B and I_C are all less than or equal to a representative value e, the pixel value of the fused image is the weighted mean of the corresponding pixels of the three images;
When some absolute difference between corresponding index values of I_A, I_B and I_C exceeds e, but the index values of corresponding pixels of two of the images still differ by at most e, the pixel value of the fused image is the weighted mean of the corresponding pixels of those two images;
In all other cases, the pixel value of the fused image is taken from the image whose index value is larger.
In step 2, ensure that the feedback input F_ij[n] of the dendrite receives only the external stimulus signal S_ij, and that the values of W, V_L, β, V_θ, α_θ and Δt are the same for all neurons.
e = 2.
The three original images A, B and C are the visible-light, medium-wave infrared and long-wave infrared images, respectively.
Based on an analysis of PCNN image fusion methods, the present invention simplifies and improves the basic model to obtain a new, improved PCNN image fusion method. The simplifications and improvements are: 1. the feedback input of each neuron in the PCNN receives only the external stimulus input; 2. the parameters of the linking field take the same values for all neurons; 3. the parameters of the variable threshold function take the same values for all neurons; 4. a threshold lookup table and an index map are introduced. The threshold lookup table records the threshold corresponding to each network iteration; these thresholds can be computed before the network runs, which avoids exponential operations during iteration and accelerates the network. The index map records the firing times of all pixels; it is the integrated result of spatially adjacent, similar pixels in the input image and embodies the overall visual characteristics of the input image.
In the improved model, the present invention introduces an index map that records the firing times of all pixels and a threshold lookup table that records the threshold corresponding to each network iteration; under identical fusion-rule conditions, it achieves better results than the traditional wavelet-transform fusion method.
Every evaluation index of the present invention is to some extent better than that of the wavelet-transform method with the original fusion rule; in particular, the mean and standard deviation improve markedly, which demonstrates the validity of the improved method.
Brief description of the drawings
Fig. 1 is a schematic diagram of the PCNN fusion method for three images.
Fig. 2 shows the original images of a seashore scene and the fusion results based on the PCNN.
Detailed description of the embodiments
The present invention is explained in further detail below with reference to the drawings and an embodiment.
The basic idea of the present invention is as follows: for two or more input original images, a PCNN model is used to compute the corresponding index map of each; fusion decisions are then made from the index maps and the original images, finally yielding the fused image. Fig. 1 shows the schematic diagram of the PCNN fusion method for three images.
Specifically, the present invention comprises the following steps in order:
Step 1: perform pixel-level spatial registration on the three input original images A, B and C, ensuring that all three images have size X × Y; A, B and C are the visible-light, medium-wave infrared and long-wave infrared images, respectively;
Step 2: set the values of the network parameters W, V_L, β, V_θ, α_θ and Δt; ensure that the feedback input F_ij[n] of the dendrite receives only the external stimulus signal S_ij; ensure that the linking-field parameters W, V_L and β take the same values for all neurons; ensure that the variable-threshold-function parameters V_θ, α_θ and Δt take the same values for all neurons;
V_L and V_θ are the intrinsic potentials (amplification coefficients) of L_ij[n] and θ_ij[n], respectively; θ_ij[n] is the dynamic threshold and L_ij[n] the linking input;
α_θ is the decay time constant of θ_ij[n]; Δt is the time sampling interval; β is the linking strength between synapses; Y_ij[n] is the PCNN pulse output and Y_kl[n−1] the previous pulse output; w_ijkl in the internal connection matrix W is the weighting coefficient of Y_kl[n−1] in L_ij[n];
n is the iteration number of the network, n = 1, 2, …, N−1, N, where N is the maximum number of iterations;
Step 3: find S_ij_max and S_ij_min in each input image, where V_θ > S_ij_max; if S_ij_min is less than or equal to 0, linearly shift S_ij_max, S_ij_min and the input image so that S_ij_min is greater than 0;
Step 4: obtain the maximum iteration number N of the network and the threshold lookup table LT(s), where s is the function variable of LT(s);
N = (t2 − t1)/Δt + 1
t1 = (1/α_θ) ln(V_θ / S_ij_max)
t2 = (1/α_θ) ln(V_θ / S_ij_min)
LT(s) = V_θ · exp(−α_θ (t2 − s·Δt))
where t1 and t2 are the self-firing times of the pixels with the maximum and minimum gray values in the image, respectively;
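As an illustration of steps 3 and 4, the following sketch precomputes the self-firing times and the threshold lookup table. The function name and the parameter values (V_θ = 300, α_θ = 0.1, Δt = 1) are assumptions for demonstration only; the patent merely requires V_θ > S_ij_max and leaves the parameter choice to step 2.

```python
import numpy as np

def build_threshold_lut(image, V_theta=300.0, alpha_theta=0.1, dt=1.0):
    """Steps 3-4: normalize the stimulus and precompute LT(s) for s = 0..N-1.
    Parameter values here are illustrative, not prescribed by the patent."""
    S = image.astype(np.float64)
    if S.min() <= 0:                      # step 3: linear shift so that S_min > 0
        S = S - S.min() + 1.0
    S_max, S_min = S.max(), S.min()
    assert S_max < V_theta, "V_theta must exceed the maximum stimulus"

    # Self-firing times of the brightest and darkest pixels
    t1 = np.log(V_theta / S_max) / alpha_theta
    t2 = np.log(V_theta / S_min) / alpha_theta
    N = int(round((t2 - t1) / dt)) + 1

    # LT(s) = V_theta * exp(-alpha_theta * (t2 - s*dt)); LT(N-1) ≈ S_max, LT(0) = S_min
    s = np.arange(N)
    LT = V_theta * np.exp(-alpha_theta * (t2 - s * dt))
    return S, N, LT
```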
Step 5: run the model with the following formulas;
F_ij[n] = S_ij
L_ij[n] = V_L Σ w_ijkl Y_kl[n−1]
U_ij[n] = F_ij[n] (1 + β L_ij[n])
Y_ij[n] = 1 if U_ij[n] ≥ θ_ij[n], otherwise Y_ij[n] = 0
θ_ij[n] = V_θ for neurons that have already fired, otherwise θ_ij[n] = LT(N − n)
I_ij[n] = N − n for neurons that fire at iteration n
where U_ij[n] is the internal activity, Y_ij[n] the PCNN pulse output and I_ij[n] the index value;
When n = 1, L_ij[1] = 0, U_ij[1] = F_ij[1] = S_ij and θ_ij[1] = LT(N−1) = S_ij_max, so the neurons whose feedback input equals S_ij_max fire spontaneously. After a neuron fires, its output is Y_ij[1] = 1 and θ_ij[2] becomes V_θ, so a fired neuron can never fire again; at the same time, its index value is recorded as I_ij = N−1 and no longer changes. As the iteration number n increases, the threshold θ_ij[n] decreases gradually, and fired neurons excite their neighbors through the linking field. When the network reaches iteration n = N, the threshold is θ_ij[N] = LT(0) = S_ij_min, and the neurons whose feedback input is S_ij_min fire spontaneously even without excitation from neighboring neurons; their index value is recorded as I_ij = 0. After at most N iterations, every neuron has fired exactly once.
It can be seen that if neuron S_ij fires at iteration n, its index value I_ij[n] is fixed at N−n and no longer changes as the network runs. The index map I_ij[n] records the firing times of all neurons and is the result of spatio-temporal integration of the input image. Moreover, because the variable threshold is read from the lookup table during iteration, no complicated exponential operations are needed, which shortens the running time; and the automatic determination of the maximum iteration number both guarantees that the network terminates deterministically and ensures that each neuron fires exactly once;
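A minimal sketch of the step 5 iteration under the same assumptions; the 3 × 3 linking kernel and the values of V_L and β are illustrative choices, since the patent leaves them to be set in step 2:

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_index_map(S, N, LT, V_L=1.0, beta=0.2):
    """Step 5: run the simplified PCNN for N iterations and return the index map."""
    W = np.array([[0.5, 1.0, 0.5],         # illustrative linking kernel w_ijkl
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    fired = np.zeros(S.shape, dtype=bool)  # neurons that have already fired
    Y = np.zeros(S.shape)                  # pulse output of the previous pass
    I = np.zeros(S.shape, dtype=int)       # index map of firing times
    for n in range(1, N + 1):
        L = V_L * convolve(Y, W, mode='constant')  # linking input L_ij[n]
        U = S * (1.0 + beta * L)                   # internal activity U_ij[n]
        theta = LT[N - n]                          # shared threshold from the LUT
        new = (U >= theta) & ~fired                # a fired neuron never fires again
        I[new] = N - n                             # index value I_ij = N - n
        fired |= new
        Y = new.astype(float)
    return I
```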
Step 6: adopt the fusion strategy to obtain the fused image. The fusion strategy of this method is as follows: since the index map of each original image represents the overall visual characteristics of that image, the fusion strategy is set by combining the index maps of the three original images.
Obtain the index maps I_A, I_B and I_C of the three original images A, B and C;
When the absolute differences between the index values of corresponding pixels of I_A, I_B and I_C are all less than or equal to a representative value e, the pixel value of the fused image is the weighted mean of the corresponding pixels of the three images;
When some absolute difference between corresponding index values of I_A, I_B and I_C exceeds e, but the index values of corresponding pixels of two of the images still differ by at most e, the pixel value of the fused image is the weighted mean of the corresponding pixels of those two images;
In all other cases, the pixel value of the fused image is taken from the image whose index value is larger.
Preferably, e = 2.
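The step 6 decision rule can be vectorized as in the sketch below. Equal weights are assumed for the "weighted mean", since the patent does not fix the weights, and a pixel where two different pairs are consistent is resolved in a fixed order:

```python
import numpy as np

def fuse_by_index_maps(imgs, index_maps, e=2):
    """Step 6: fuse three registered images using their index maps."""
    A, B, C = (im.astype(np.float64) for im in imgs)
    IA, IB, IC = (im.astype(int) for im in index_maps)

    close_AB = np.abs(IA - IB) <= e
    close_AC = np.abs(IA - IC) <= e
    close_BC = np.abs(IB - IC) <= e

    # Default (all other cases): take the pixel with the largest index value,
    # i.e. from the image whose neuron fired earliest.
    stack = np.stack([A, B, C])
    winner = np.stack([IA, IB, IC]).argmax(axis=0)
    F = np.take_along_axis(stack, winner[None], axis=0)[0]

    # One pair consistent within e: equal-weight mean of that pair.
    F = np.where(close_BC, (B + C) / 2.0, F)
    F = np.where(close_AC, (A + C) / 2.0, F)
    F = np.where(close_AB, (A + B) / 2.0, F)

    # All three consistent within e: equal-weight mean of the three.
    F = np.where(close_AB & close_AC & close_BC, (A + B + C) / 3.0, F)
    return F
```

For the index values of the three sources to be directly comparable in step 6, it appears natural to derive a single N and LT from the combined gray range, e.g. `_, N, LT = build_threshold_lut(np.stack([A, B, C]))`, and then compute `pcnn_index_map(A, N, LT)` and so on for each source; the patent text does not spell this detail out.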
To verify the performance of the proposed fusion method based on the improved PCNN model, three registered images of a seashore scene (size 320 × 256) were selected as the images to be fused. Fig. 2(a), (b) and (c) show the original visible-light, long-wave infrared and medium-wave infrared images; Fig. 2(d) shows the fusion result obtained with a four-level wavelet transform (WT), averaging the low-frequency coefficients and fusing the high-frequency coefficients with a region-energy-operator rule; Fig. 2(e) shows the fusion result of the improved PCNN model. To compare the fusion results more thoroughly, the mean, standard deviation, entropy, the product of structural information, and the mutual information were computed as objective evaluation criteria; the results are given in Table 1. The fusion result based on the improved PCNN model is better than the WT result with the original fusion rule in local detail, contours (the figures at the water's edge) and overall brightness. The objective evaluation shows that every index of the improved PCNN model is to some extent better than that of the WT method with the original fusion rule; in particular, the mean and standard deviation improve markedly, which demonstrates the validity of the improved method.
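The mean, standard deviation, entropy and mutual-information criteria might be computed as in the following sketch (the exact structural-information product used in the experiment is not reproduced in the text and is therefore omitted):

```python
import numpy as np

def basic_metrics(img, bins=256):
    """Mean, standard deviation and Shannon entropy of an image."""
    f = img.astype(np.float64)
    hist, _ = np.histogram(f, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return f.mean(), f.std(), -(p * np.log2(p)).sum()

def mutual_information(a, b, bins=256):
    """Mutual information between two images from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return (pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum()
```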
Table 1. Objective evaluation of the seashore fusion results based on the PCNN (reproduced only as an image in the original document)

Claims (4)

1. A PCNN multisource image fusion method based on an improved model, comprising the following steps in order:
Step 1: perform pixel-level spatial registration on the three input original images A, B and C, ensuring that all three images have size X × Y;
Step 2: set the values of the network parameters W, V_L, β, V_θ, α_θ and Δt;
V_L and V_θ are the intrinsic potentials of L_ij[n] and θ_ij[n], respectively; θ_ij[n] is the dynamic threshold and L_ij[n] the linking input;
α_θ is the decay time constant of θ_ij[n]; Δt is the time sampling interval; β is the linking strength between synapses; Y_ij[n] is the PCNN pulse output and Y_kl[n−1] the previous pulse output; w_ijkl in the internal connection matrix W is the weighting coefficient of Y_kl[n−1] in L_ij[n];
n is the iteration number of the network, n = 1, 2, …, N−1, N, where N is the maximum number of iterations;
Step 3: find S_ij_max and S_ij_min in each input image, where S_ij_max < V_θ and S_ij_min > 0;
Step 4: obtain the maximum iteration number N of the network and the threshold lookup table LT(s), where s is the function variable of LT(s);
N = (t2 − t1)/Δt + 1
t1 = (1/α_θ) ln(V_θ / S_ij_max)
t2 = (1/α_θ) ln(V_θ / S_ij_min)
LT(s) = V_θ · exp(−α_θ (t2 − s·Δt))
where t1 and t2 are the self-firing times of the pixels with the maximum and minimum gray values in the image, respectively;
Step 5: run the model with the following formulas;
F_ij[n] = S_ij
L_ij[n] = V_L Σ w_ijkl Y_kl[n−1]
U_ij[n] = F_ij[n] (1 + β L_ij[n])
Y_ij[n] = 1 if U_ij[n] ≥ θ_ij[n], otherwise Y_ij[n] = 0
θ_ij[n] = V_θ for neurons that have already fired, otherwise θ_ij[n] = LT(N − n)
I_ij[n] = N − n for neurons that fire at iteration n
wherein U_ij[n] is the internal activity, Y_ij[n] the PCNN pulse output and I_ij[n] the index value;
when n = 1, L_ij[1] = 0, U_ij[1] = F_ij[1] = S_ij and θ_ij[1] = LT(N−1) = S_ij_max, so the neurons whose feedback input equals S_ij_max fire spontaneously; after a neuron fires, its output is Y_ij[1] = 1, θ_ij[2] becomes V_θ, and the index value of the fired neuron is recorded as I_ij = N−1;
continuing in this way, when the network reaches iteration n = N the threshold is θ_ij[N] = LT(0) = S_ij_min, the neurons whose feedback input is S_ij_min fire spontaneously, and their index value is recorded as I_ij = 0;
Step 6: obtain the index maps I_A, I_B and I_C of the three original images A, B and C;
When the absolute differences between the index values of corresponding pixels of I_A, I_B and I_C are all less than or equal to a representative value e, the pixel value of the fused image is the weighted mean of the corresponding pixels of the three images;
When some absolute difference between corresponding index values of I_A, I_B and I_C exceeds e, but the index values of corresponding pixels of two of the images still differ by at most e, the pixel value of the fused image is the weighted mean of the corresponding pixels of those two images;
In all other cases, the pixel value of the fused image is taken from the image whose index value is larger.
2. The PCNN multisource image fusion method based on an improved model according to claim 1, characterized in that: in step 2, the feedback input F_ij[n] of the dendrite is guaranteed to receive only the external stimulus signal S_ij, and the values of W, V_L, β, V_θ, α_θ and Δt are guaranteed to be the same for all neurons.
3. The PCNN multisource image fusion method based on an improved model according to claim 1, characterized in that: e = 2.
4. The PCNN multisource image fusion method based on an improved model according to claim 1, characterized in that: the three original images A, B and C are the visible-light, medium-wave infrared and long-wave infrared images, respectively.
CN201210362080.6A 2012-09-25 2012-09-25 PCNN multisource image fusion method based on an improved model Active CN103679670B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210362080.6A 2012-09-25 2012-09-25 PCNN multisource image fusion method based on an improved model

Publications (2)

Publication Number Publication Date
CN103679670A true CN103679670A (en) 2014-03-26
CN103679670B CN103679670B (en) 2016-08-31

Family

ID=50317125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210362080.6A Active CN103679670B (en) 2012-09-25 2012-09-25 PCNN multisource image fusion method based on an improved model

Country Status (1)

Country Link
CN (1) CN103679670B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001004826A1 (en) * 1999-07-07 2001-01-18 Renishaw Plc Neural networks
CN101697231A (en) * 2009-10-29 2010-04-21 西北工业大学 Wavelet transformation and multi-channel PCNN-based hyperspectral image fusion method
CN101877125A (en) * 2009-12-25 2010-11-03 北京航空航天大学 Wavelet domain statistical signal-based image fusion processing method
CN101968882A (en) * 2010-09-21 2011-02-09 重庆大学 Multi-source image fusion method
CN102063713A (en) * 2010-11-11 2011-05-18 西北工业大学 Neighborhood normalized gradient and neighborhood standard deviation-based multi-focus image fusion method
CN103810682A (en) * 2012-11-06 2014-05-21 西安元朔科技有限公司 Novel image fusion method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Na Liu, Kun Gao, Yajun Song, Guoqiang Ni: "A Novel Super-resolution Image Fusion Algorithm Based on Improved PCNN and Wavelet Transform", MIPPR 2009: Pattern Recognition and Computer Vision. *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376546A (en) * 2014-10-27 2015-02-25 北京环境特性研究所 Method for achieving three-path image pyramid fusion algorithm based on DM642
CN104463821A (en) * 2014-11-28 2015-03-25 中国航空无线电电子研究所 Method for fusing infrared image and visible light image
CN107292883A (en) * 2017-08-02 2017-10-24 国网电力科学研究院武汉南瑞有限责任公司 A kind of PCNN power failure method for detecting area based on local feature
CN107292883B (en) * 2017-08-02 2019-10-25 国网电力科学研究院武汉南瑞有限责任公司 A kind of PCNN power failure method for detecting area based on local feature
CN108537790A (en) * 2018-04-13 2018-09-14 西安电子科技大学 Heterologous image change detection method based on coupling translation network
WO2020133027A1 (en) * 2018-12-27 2020-07-02 Zhejiang Dahua Technology Co., Ltd. Systems and methods for image fusion
CN113228046A (en) * 2018-12-27 2021-08-06 浙江大华技术股份有限公司 System and method for image fusion
CN113228046B (en) * 2018-12-27 2024-03-05 浙江大华技术股份有限公司 System and method for image fusion
CN111161203A (en) * 2019-12-30 2020-05-15 国网北京市电力公司 Multi-focus image fusion method based on memristor pulse coupling neural network

Also Published As

Publication number Publication date
CN103679670B (en) 2016-08-31

Similar Documents

Publication Publication Date Title
CN111709902B (en) Infrared and visible light image fusion method based on self-attention mechanism
CN103679670A (en) A PCNN multisource image fusion method based on an improved model
CN110322423B (en) Multi-modal image target detection method based on image fusion
CN112614077B (en) Unsupervised low-illumination image enhancement method based on generation countermeasure network
CN105551010A (en) Multi-focus image fusion method based on NSCT (Non-Subsampled Contourlet Transform) and depth information incentive PCNN (Pulse Coupled Neural Network)
CN1873693B (en) Method based on Contourlet transformation, modified type pulse coupling neural network, and image amalgamation
CN107154048A (en) The remote sensing image segmentation method and device of a kind of Pulse-coupled Neural Network Model
CN103890781A (en) Retinal encoder for machine vision
CN102930249A (en) Method for identifying and counting farmland pests based on colors and models
CN107563389A (en) A kind of corps diseases recognition methods based on deep learning
CN103971329A (en) Cellular nerve network with genetic algorithm (GACNN)-based multisource image fusion method
CN105139371A (en) Multi-focus image fusion method based on transformation between PCNN and LP
CN109493309A (en) A kind of infrared and visible images variation fusion method keeping conspicuousness information
CN103455990B (en) In conjunction with vision noticing mechanism and the image interfusion method of PCNN
CN112184646B (en) Image fusion method based on gradient domain oriented filtering and improved PCNN
Gu et al. Research on the improvement of image edge detection algorithm based on artificial neural network
CN104616252A (en) NSCT (Non Subsampled Contourlet Transform) and PCNN (Pulse Coupled Neural Network) based digital image enhancing method
CN103700118B (en) Based on the moving target detection method of pulse coupled neural network
CN108648180B (en) Full-reference image quality objective evaluation method based on visual multi-feature depth fusion processing
CN103985115A (en) Image multi-strength edge detection method having visual photosensitive layer simulation function
CN114647760A (en) Intelligent video image retrieval method based on neural network self-temperature cause and knowledge conduction mechanism
CN107705274B (en) Multi-scale low-light-level and infrared image fusion method based on mathematical morphology
CN103235937A (en) Pulse-coupled neural network-based traffic sign identification method
Wang et al. A simplified pulse-coupled neural network for cucumber image segmentation
Wang et al. Pseudo color image fusion based on rattlesnake's visual receptive field model

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant