CN103679670B - PCNN multi-source image fusion method based on an improved model - Google Patents

PCNN multi-source image fusion method based on an improved model

Info

Publication number
CN103679670B
Authority
CN
China
Prior art keywords
value
pcnn
image
pixel
width
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210362080.6A
Other languages
Chinese (zh)
Other versions
CN103679670A (en)
Inventor
宋亚军 (Song Yajun)
朱振福 (Zhu Zhenfu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
No207 Institute Second Academy Of China Aerospace Science & Industry Group
Original Assignee
No207 Institute Second Academy Of China Aerospace Science & Industry Group
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by No207 Institute Second Academy Of China Aerospace Science & Industry Group filed Critical No207 Institute Second Academy Of China Aerospace Science & Industry Group
Priority to CN201210362080.6A priority Critical patent/CN103679670B/en
Publication of CN103679670A publication Critical patent/CN103679670A/en
Application granted granted Critical
Publication of CN103679670B publication Critical patent/CN103679670B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)

Abstract

The present invention relates to a PCNN multi-source image fusion method based on an improved model. The improvements are: in the PCNN, the feeding input of each neuron receives only the external stimulus; in the linking field, the parameter values are identical for all neurons; in the variable-threshold function, the parameter values are identical for all neurons; and a threshold lookup table and an index map are introduced. The threshold lookup table records the threshold corresponding to each network run; these thresholds can be precomputed before the network runs, which avoids exponential operations during the run and speeds up the network. The index map records the firing time of every pixel; it integrates spatially adjacent, similar pixels of the input image and embodies the overall visual features of the input image. By introducing the index map recording the firing times of all pixels and the threshold lookup table indexed by the number of network runs, and by adopting a fusion rule based on the index maps, the invention achieves better results than conventional wavelet-transform fusion.

Description

PCNN multi-source image fusion method based on an improved model
Technical field
The present invention relates to a PCNN multi-source image fusion method based on an improved model, and in particular to such a method suitable for the simultaneous fusion of three bands: visible light, medium-wave infrared and long-wave infrared.
Background art
An artificial neural network is a novel computational model that attempts to imitate the information-processing manner of biological nervous systems. A neural network consists of multiple processing units, or nodes, which can be interconnected in various ways. Some scholars have already applied artificial neural networks to multi-source image fusion. At present, the main neural-network approaches applied to image fusion are bimodal neurons (Bimodal Neurons), the multilayer perceptron (Multi-layered Perceptron) and the pulse coupled neural network (Pulse-Coupled Neural Network, PCNN). PCNN is a new type of neural network proposed in recent years and is internationally known as the third-generation artificial neural network.
In 1981, E. A. Newman, P. H. Hartline et al. proposed six different types of bimodal neurons (AND, OR, Visible-Enhanced-Infrared, Visible-Suppressed-Infrared, Infrared-Enhanced-Visible and Infrared-Suppressed-Visible) for the fusion of visible and infrared images. In 1995, Fechner and Godlewski proposed an image fusion method based on the multilayer perceptron neural network, in which a trained multilayer perceptron identifies pixels of interest in a forward-looking infrared image and merges them into the visible image. Since the 1990s, Eckhorn et al., through research on the synchronized oscillation of neural pulse trains in the visual cortex of cats and monkeys, obtained a mammalian neuron model, from which the pulse coupled neural network model was developed. This model can group pixels of a two-dimensional image that are spatially adjacent and similar in gray level, reduce local gray-level differences, and bridge small local discontinuities in the image. In 1999, Broussard R. P. et al. demonstrated the relationship between the firing rate of PCNN neurons and image gray level, confirming the feasibility of PCNN for image fusion. Based on this model, scholars have proposed various improved models and applied them to the fusion of various kinds of images.
At present, research on PCNN-based image fusion methods concentrates mainly on the following aspects:
Automatic selection of network parameters: the PCNN involves many parameters, and different parameter values all affect the final result. Computing the key parameters of the PCNN automatically by auxiliary methods can yield better results.
Improvement of the basic PCNN model: depending on the function to be realized, the object to be processed and the line of thinking, different researchers have successively proposed different improved models.
There is therefore an urgent need for a novel PCNN multi-source image fusion method based on an improved model.
Summary of the invention
The technical problem to be solved by the present invention is to provide a PCNN multi-source image fusion method based on an improved model that improves multi-source image fusion, makes target features in the fused image more distinct, and is more conducive to target recognition.
To solve the above technical problem, the PCNN multi-source image fusion method based on an improved model of the present invention comprises the following steps in sequence:
Step 1: spatially register the three input original images A, B and C at pixel level, ensuring that the three images have the same size X × Y;
Step 2: set the values of the network parameters W, VL, β, Vθ, αθ and Δt;
VL and Vθ are the intrinsic potentials of Lij[n] and θij[n] respectively; θij[n] is the dynamic threshold and Lij[n] is the linking input;
αθ is the decay time constant of θij[n]; Δt is the time sampling interval; β is the linking strength constant between synapses; Yij[n] is the PCNN pulse output; Ykl[n-1] is the PCNN pulse output of the previous run; wijkl in the internal connection matrix W is the weight coefficient of Ykl[n-1] in Lij[n]; n is the run number of the network, n = 1, 2, ..., N-1, N, where N is the maximum number of runs;
Step 3: search each input image for Sij_max and Sij_min, where Sij_max < Vθ and Sij_min > 0;
Step 4: obtain the maximum number of network runs N and the threshold lookup table LT(s), where s is the variable of LT(s);
N = (t2 − t1)/Δt + 1
t1 = (1/αθ)·ln(Vθ/Sij_max)
t2 = (1/αθ)·ln(Vθ/Sij_min)
LT(s) = Vθ·exp(−(t2 − s·Δt)·αθ)
where t1 and t2 are the natural firing times of the pixels with the maximum and minimum gray values in the image, respectively;
Step 5: run the model with the following equations:
Fij[n]=Sij
Lij[n]=VL∑wijklYkl[n-1]
Uij[n]=Fij[n](1+βLij[n])
Iij[n]=N-n
where Uij[n] is the internal activity term, Yij[n] is the PCNN pulse output and Iij[n] is the index value. When n = 1, Lij[1] = 0, so Uij[1] = Fij[1] = Sij and θij[1] = LT(N-1) = Sij_max; the neuron whose feeding input equals Sij_max fires naturally. After a neuron fires, it outputs Yij[1] = 1, θij[2] becomes Vθ, and the index value of the fired neuron is recorded as Iij = N-1;
and so on, until the network runs to n = N, when the threshold θij[N] = LT(0) = Sij_min and the neuron whose feeding input equals Sij_min fires naturally; the index value of that fired neuron is recorded as Iij = 0;
Step 6: obtain the index maps IA, IB and IC of the three original images A, B and C respectively;
when the absolute values of the pairwise differences between the index values of corresponding pixels of IA, IB and IC are all less than or equal to a typical value e, the pixel value of the fused image is the weighted average of the corresponding pixels of the three images;
when any of those pairwise absolute differences exceeds the typical value e, but the index values of the corresponding pixels of two of the images still differ by no more than e, the pixel value of the fused image is the weighted average of the corresponding pixels of those two images;
in all other cases, the pixel value of the fused image is the pixel value of the image whose index value is larger.
In step 2, it is ensured that the feeding input Fij[n] of the dendrite receives only the external input stimulus signal Sij, and that the values of W, VL, β, Vθ, αθ and Δt are identical for all neurons.
e = 2.
The three original images A, B and C are respectively visible-light, medium-wave infrared and long-wave infrared images.
On the basis of an analysis of PCNN image fusion methods, the present invention simplifies and improves the basic model accordingly, yielding a new, improved PCNN image fusion method. The simplifications and improvements are: 1. in the PCNN, the feeding input of each neuron receives only the external stimulus; 2. in the linking field, the parameter values are identical for all neurons; 3. in the variable-threshold function, the parameter values are identical for all neurons; 4. a threshold lookup table and an index map are introduced. The threshold lookup table records the threshold corresponding to each network run; these thresholds can be precomputed before the network runs, which avoids exponential operations during the run and speeds up the network. The index map records the firing time of every pixel; it integrates spatially adjacent, similar pixels of the input image and embodies the overall visual features of the input image.
By introducing into the improved model an index map that records the firing times of all pixels and a threshold lookup table indexed by the number of network runs, the present invention achieves, under identical fusion rules, better results than conventional wavelet-transform fusion methods.
Every evaluation index of the present invention is, to a certain extent, better than the corresponding index of the WT method with the original fusion rule; in particular, indices such as the mean and the standard deviation improve distinctly, which demonstrates the validity of the improved method.
Brief description of the drawings
Fig. 1 is a schematic diagram of the PCNN fusion method for three images.
Fig. 2 shows the original images of a seashore scene and the PCNN-based fusion results.
Detailed description of the invention
The present invention is described in further detail below with reference to the accompanying drawings and an embodiment.
The basic idea of the present invention is: the two or more input original images are each processed by the PCNN model to obtain their corresponding index maps; the index maps and the original images are then used in the corresponding fusion decision to obtain the fused image. The schematic diagram of the PCNN fusion method for three images is shown in Fig. 1.
Specifically, the present invention comprises the following steps in sequence:
Step 1: spatially register the three input original images A, B and C at pixel level, ensuring that the three images have the same size X × Y; the three original images A, B and C are respectively visible-light, medium-wave infrared and long-wave infrared images;
Step 2: set the values of the network parameters W, VL, β, Vθ, αθ and Δt; ensure that the feeding input Fij[n] of the dendrite receives only the external input stimulus signal Sij; ensure that the values of the linking-field parameters W, VL and β are identical for all neurons; ensure that the values of the variable-threshold parameters Vθ, αθ and Δt are identical for all neurons;
VL and Vθ are the intrinsic potentials (amplification coefficients) of Lij[n] and θij[n] respectively; θij[n] is the dynamic threshold and Lij[n] is the linking input;
αθ is the decay time constant of θij[n]; Δt is the time sampling interval; β is the linking strength constant between synapses; Yij[n] is the PCNN pulse output; Ykl[n-1] is the PCNN pulse output of the previous run; wijkl in the internal connection matrix W is the weight coefficient of Ykl[n-1] in Lij[n];
n is the run number of the network, n = 1, 2, ..., N-1, N, where N is the maximum number of runs;
Step 3: search each input image for Sij_max and Sij_min, with Vθ > Sij_max; if Sij_min is less than or equal to 0, adjust Sij_max, Sij_min and the input image accordingly so that Sij_min is greater than 0;
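As a non-binding illustration of this conditioning step, the NumPy sketch below shifts an image so that its minimum gray value becomes positive and checks that its maximum stays below Vθ. The function name condition_input, the offset eps and the error handling are assumptions made for the example, not requirements of the patent.

```python
import numpy as np

def condition_input(S, V_theta, eps=1.0):
    """Shift an image so that Sij_min > 0 and check that Sij_max < V_theta.

    Strictly positive gray values keep t2 finite, and a maximum below
    V_theta lets the brightest pixel fire naturally during the run.
    The offset eps is an illustrative choice, not a patent requirement.
    """
    S = S.astype(np.float64)
    if S.min() <= 0.0:
        S = S + (eps - S.min())      # shift the whole image upward
    if S.max() >= V_theta:
        raise ValueError("V_theta must exceed the maximum gray value")
    return S, S.max(), S.min()
```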
Step 4: obtain the maximum number of network runs N and the threshold lookup table LT(s), where s is the variable of LT(s);
N = (t2 − t1)/Δt + 1
t1 = (1/αθ)·ln(Vθ/Sij_max)
t2 = (1/αθ)·ln(Vθ/Sij_min)
LT(s) = Vθ·exp(−(t2 − s·Δt)·αθ)
where t1 and t2 are the natural firing times of the pixels with the maximum and minimum gray values in the image, respectively;
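As one possible realization of this step, the NumPy sketch below precomputes N and the lookup table LT from the formulas above, so that no exponential has to be evaluated while the network runs. The parameter values in the usage comment and the decision to round N up to an integer are illustrative assumptions rather than values prescribed by the patent.

```python
import numpy as np

def threshold_lookup(s_max, s_min, V_theta, alpha_theta, dt):
    """Precompute the run count N and the threshold lookup table LT(s).

    t1, t2 : natural firing times of the brightest and darkest pixels.
    N      : number of runs needed to sweep the threshold from Sij_max
             down to Sij_min in steps of dt.
    LT[s]  = V_theta * exp(-alpha_theta * (t2 - s * dt)), so LT[0] equals
             Sij_min and LT[N-1] sits at (or, with the rounding used here,
             just above) Sij_max.
    """
    t1 = np.log(V_theta / s_max) / alpha_theta
    t2 = np.log(V_theta / s_min) / alpha_theta
    N = int(np.ceil((t2 - t1) / dt)) + 1   # rounding up is an assumption
    LT = V_theta * np.exp(-alpha_theta * (t2 - np.arange(N) * dt))
    return N, LT

# Illustrative usage with assumed values (not prescribed by the patent):
# N, LT = threshold_lookup(s_max=220.0, s_min=5.0,
#                          V_theta=256.0, alpha_theta=0.1, dt=0.5)
```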
Step 5: run the model with the following equations:
Fij[n]=Sij
Lij[n]=VL∑wijklYkl[n-1]
Uij[n]=Fij[n](1+βLij[n])
Iij[n]=N-n
where Uij[n] is the internal activity term, Yij[n] is the PCNN pulse output and Iij[n] is the index value;
When n = 1, Lij[1] = 0, so Uij[1] = Fij[1] = Sij and θij[1] = LT(N-1) = Sij_max; the neuron whose feeding input equals Sij_max fires naturally. After the neuron fires, it outputs Yij[1] = 1 and θij[2] becomes Vθ, so a neuron that has fired cannot fire again. At the same time, the index value of the fired neuron is recorded as Iij = N-1 and no longer changes. As the run number n increases, the threshold θij[n] becomes smaller and smaller, and fired neurons excite adjacent neurons through the linking field. When the network runs to n = N, the threshold θij[N] = LT(0) = Sij_min; the neuron whose feeding input equals Sij_min fires naturally even without excitation from adjacent neurons, and the index value of the fired neuron is recorded as Iij = 0. That is, after at most N runs of the network, every neuron will have fired, and only once.
It can be seen that if the neuron with stimulus Sij fires on the n-th run, its index value Iij[n] is fixed at N-n and no longer changes as the network continues to run. The index map Iij[n] records the firing times of all neurons and is the result of spatio-temporal integration of the input image. In addition, because the variable threshold during the network run is obtained from the lookup table, no complicated exponential operations are needed, which shortens the run time of the network. The automatic determination of the maximum number of runs not only guarantees that the network run is deterministic, but also ensures that each neuron fires exactly once;
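A self-contained NumPy sketch of steps 4 and 5 for a single, already conditioned source image is given below; it returns the index map I and the run count N. The 3×3 linking kernel, the firing test U ≥ θ, the default parameter values and the use of scipy.ndimage.convolve for the linking sum are illustrative assumptions, not choices fixed by the patent.

```python
import numpy as np
from scipy.ndimage import convolve

def compute_index_map(S, V_theta=256.0, V_L=1.0, beta=0.2,
                      alpha_theta=0.1, dt=0.5, W=None):
    """Run the simplified PCNN on one conditioned image and return (I, N).

    Feeding input F = S, linking input L = V_L * sum(w * Y_prev) over the
    previous run's pulses, internal activity U = F * (1 + beta * L).
    A neuron fires when U >= theta(n); theta(n) = LT[N - n] comes from the
    precomputed lookup table, and a fired neuron's threshold is pinned to
    V_theta, so it fires only once; its index value is recorded as N - n.
    All parameter defaults and the 3x3 kernel are illustrative assumptions.
    """
    if W is None:                                    # assumed linking kernel
        W = np.array([[0.5, 1.0, 0.5],
                      [1.0, 0.0, 1.0],
                      [0.5, 1.0, 0.5]])
    S = S.astype(np.float64)
    s_max, s_min = S.max(), S.min()
    assert s_min > 0.0 and s_max < V_theta, "condition the input first"

    # Step 4: threshold lookup table (no exponentials inside the loop).
    t1 = np.log(V_theta / s_max) / alpha_theta
    t2 = np.log(V_theta / s_min) / alpha_theta
    N = int(np.ceil((t2 - t1) / dt)) + 1             # rounding up is assumed
    LT = V_theta * np.exp(-alpha_theta * (t2 - np.arange(N) * dt))

    Y = np.zeros_like(S)                             # pulses of the last run
    fired = np.zeros(S.shape, dtype=bool)            # each neuron fires once
    I = np.zeros(S.shape, dtype=np.int64)            # index map

    # Step 5: run the network N times.
    for n in range(1, N + 1):
        L = V_L * convolve(Y, W, mode='constant')    # linking input L_ij[n]
        U = S * (1.0 + beta * L)                     # internal activity U_ij[n]
        theta = LT[N - n]                            # theta_ij[n] from the LUT
        new_fire = (~fired) & (U >= theta)
        I[new_fire] = N - n                          # record the firing time
        fired |= new_fire
        Y = new_fire.astype(np.float64)
    return I, N
```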
Step 6: apply the corresponding fusion strategy to obtain the fused image. The fusion strategy used by this method is as follows: because the index map of each original image represents the overall visual features of that image, the fusion strategy is set jointly from the index maps of the three original images.
Obtain the index maps IA, IB and IC of the three original images A, B and C;
when the absolute values of the pairwise differences between the index values of corresponding pixels of IA, IB and IC are all less than or equal to a typical value e, the pixel value of the fused image is the weighted average of the corresponding pixels of the three images;
when any of those pairwise absolute differences exceeds the typical value e, but the index values of the corresponding pixels of two of the images still differ by no more than e, the pixel value of the fused image is the weighted average of the corresponding pixels of those two images;
in all other cases, the pixel value of the fused image is the pixel value of the image whose index value is larger.
Preferably e=2.
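The fusion strategy of step 6 can be sketched as follows. Equal weights are assumed for the weighted averages, and when two different pairs of index values agree within e but all three do not, the sketch arbitrarily prefers the A-B pair and then the A-C pair; both choices are assumptions, since the patent leaves them open.

```python
import numpy as np

def fuse_by_index(A, B, C, IA, IB, IC, e=2):
    """Pixel-wise fusion of three registered images using their index maps.

    Per pixel:
      - all three index values within e of each other -> mean of A, B, C
      - otherwise, if some pair is within e           -> mean of that pair
        (if two pairs qualify, A-B is preferred, then A-C: an assumption)
      - otherwise                                     -> pixel of the image
                                                         with the largest index
    Equal weights are assumed; the patent only specifies a weighted average.
    """
    A, B, C = (x.astype(np.float64) for x in (A, B, C))
    dab = np.abs(IA - IB) <= e
    dac = np.abs(IA - IC) <= e
    dbc = np.abs(IB - IC) <= e

    F = np.empty_like(A)

    all_close = dab & dac & dbc
    F[all_close] = (A + B + C)[all_close] / 3.0

    pair_ab = dab & ~all_close
    pair_ac = dac & ~all_close & ~pair_ab
    pair_bc = dbc & ~all_close & ~pair_ab & ~pair_ac
    F[pair_ab] = (A + B)[pair_ab] / 2.0
    F[pair_ac] = (A + C)[pair_ac] / 2.0
    F[pair_bc] = (B + C)[pair_bc] / 2.0

    # No pair agrees: take the pixel whose index value is largest.
    none_close = ~(all_close | pair_ab | pair_ac | pair_bc)
    stacked = np.stack([A, B, C])                     # shape (3, H, W)
    best = np.argmax(np.stack([IA, IB, IC]), axis=0)  # winning image per pixel
    F[none_close] = np.take_along_axis(stacked, best[None], axis=0)[0][none_close]
    return F
```

Together with the compute_index_map sketch above, the flow of Fig. 1 then reduces to computing the three index maps and calling fuse_by_index once for each registered image triple.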
To verify the performance of the proposed image fusion method based on the improved PCNN model, three registered images of a seashore scene (image size 320 × 256), namely visible-light, long-wave infrared and medium-wave infrared images, were selected as the images to be fused. Fig. 2 (a), (b) and (c) are respectively the original visible-light, long-wave infrared and medium-wave infrared images of the seashore scene to be fused; Fig. 2 (d) is the fusion result obtained with a 4-level wavelet transform (WT), averaging the low-frequency coefficients and fusing the high-frequency coefficients with a region-energy-operator rule; Fig. 2 (e) is the fusion result obtained with the improved PCNN model. To compare and analyze the fusion results, objective evaluation criteria were computed, including the mean, the standard deviation, the entropy, and the product of structural information and mutual information; the results are shown in Table 1. It can be seen that the fusion result based on the improved PCNN model is better than the fusion result based on the WT method with the original fusion rule in local detail, contours (the person at the edge of the seashore) and overall brightness. From the objective evaluation in Table 1 it can be seen that every index of the improved PCNN model is, to a certain extent, better than the corresponding index of the WT method with the original fusion rule; in particular, indices such as the mean and the standard deviation improve distinctly, which demonstrates the validity of the improved method.
Table 1: Objective evaluation of the PCNN-based fusion results for the seashore scene

Claims (4)

1. A PCNN multi-source image fusion method based on an improved model, comprising the following steps in sequence:
Step 1: spatially register the three input original images A, B and C at pixel level, ensuring that the three images have the same size X × Y;
Step 2: set the values of the network parameters W, VL, β, Vθ, αθ and Δt;
VL and Vθ are the intrinsic potentials of Lij[n] and θij[n] respectively; θij[n] is the dynamic threshold and Lij[n] is the linking input;
αθ is the decay time constant of θij[n]; Δt is the time sampling interval; β is the linking strength constant between synapses; Yij[n] is the PCNN pulse output; Ykl[n-1] is the PCNN pulse output of the previous run; wijkl in the internal connection matrix W is the weight coefficient of Ykl[n-1] in Lij[n];
n is the run number of the network, n = 1, 2, ..., N-1, N, where N is the maximum number of runs;
Step 3: search each input image for Sij_max and Sij_min, where Sij_max < Vθ and Sij_min > 0;
Step 4: obtain the maximum number of network runs N and the threshold lookup table LT(s), where s is the variable of LT(s);
N = (t2 − t1)/Δt + 1
t1 = (1/αθ)·ln(Vθ/Sij_max)
t2 = (1/αθ)·ln(Vθ/Sij_min)
LT(s) = Vθ·exp(−(t2 − s·Δt)·αθ)
where t1 and t2 are the natural firing times of the pixels with the maximum and minimum gray values in the image, respectively;
Step 5: run the model with the following equations:
Fij[n]=Sij
Lij[n]=VL∑wijklYkl[n-1]
Uij[n]=Fij[n](1+βLij[n])
Iij[n]=N-n
where Uij[n] is the internal activity term, Yij[n] is the PCNN pulse output and Iij[n] is the index value;
When n = 1, Lij[1] = 0, so Uij[1] = Fij[1] = Sij and θij[1] = LT(N-1) = Sij_max; the neuron whose feeding input equals Sij_max fires naturally; after the neuron fires, it outputs Yij[1] = 1, θij[2] becomes Vθ, and the index value of the fired neuron is recorded as Iij = N-1;
and so on, until the network runs to n = N, when the threshold θij[N] = LT(0) = Sij_min and the neuron whose feeding input equals Sij_min fires naturally; the index value of that fired neuron is recorded as Iij = 0;
Step 6: obtain the index maps IA, IB and IC of the three original images A, B and C respectively;
when the absolute values of the pairwise differences between the index values of corresponding pixels of IA, IB and IC are all less than or equal to a typical value e, the pixel value of the fused image is the weighted average of the corresponding pixels of the three images;
when any of those pairwise absolute differences exceeds the typical value e, but the index values of the corresponding pixels of two of the images still differ by no more than e, the pixel value of the fused image is the weighted average of the corresponding pixels of those two images;
in all other cases, the pixel value of the fused image is the pixel value of the image whose index value is larger.
2. The PCNN multi-source image fusion method based on an improved model according to claim 1, characterized in that: in said step 2, the feeding input Fij[n] of the dendrite receives only the external input stimulus signal Sij, and the values of W, VL, β, Vθ, αθ and Δt are identical for all neurons.
3. The PCNN multi-source image fusion method based on an improved model according to claim 1, characterized in that: e = 2.
4. The PCNN multi-source image fusion method based on an improved model according to claim 1, characterized in that: the three original images A, B and C are respectively visible-light, medium-wave infrared and long-wave infrared images.
CN201210362080.6A 2012-09-25 2012-09-25 PCNN multi-source image fusion method based on an improved model Active CN103679670B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210362080.6A CN103679670B (en) 2012-09-25 2012-09-25 PCNN multi-source image fusion method based on an improved model


Publications (2)

Publication Number Publication Date
CN103679670A CN103679670A (en) 2014-03-26
CN103679670B true CN103679670B (en) 2016-08-31

Family

ID=50317125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210362080.6A Active CN103679670B (en) 2012-09-25 2012-09-25 PCNN multi-source image fusion method based on an improved model

Country Status (1)

Country Link
CN (1) CN103679670B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376546A (en) * 2014-10-27 2015-02-25 北京环境特性研究所 Method for achieving three-path image pyramid fusion algorithm based on DM642
CN104463821A (en) * 2014-11-28 2015-03-25 中国航空无线电电子研究所 Method for fusing infrared image and visible light image
CN107292883B (en) * 2017-08-02 2019-10-25 国网电力科学研究院武汉南瑞有限责任公司 A PCNN Power Fault Area Detection Method Based on Local Features
CN108537790B (en) * 2018-04-13 2021-09-03 西安电子科技大学 Different-source image change detection method based on coupling translation network
EP3871147B1 (en) 2018-12-27 2024-03-13 Zhejiang Dahua Technology Co., Ltd. Systems and methods for image fusion
CN111161203A (en) * 2019-12-30 2020-05-15 国网北京市电力公司 Multi-focus image fusion method based on memristor pulse coupling neural network
CN111932440B (en) * 2020-07-09 2025-01-17 中国科学院微电子研究所 Image processing method and device


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1110168A1 (en) * 1999-07-07 2001-06-27 Renishaw plc Neural networks

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101697231A (en) * 2009-10-29 2010-04-21 西北工业大学 Wavelet transformation and multi-channel PCNN-based hyperspectral image fusion method
CN101877125A (en) * 2009-12-25 2010-11-03 北京航空航天大学 An Image Fusion Processing Method Based on Statistical Signals in Wavelet Domain
CN101968882A (en) * 2010-09-21 2011-02-09 重庆大学 Multi-source image fusion method
CN102063713A (en) * 2010-11-11 2011-05-18 西北工业大学 Neighborhood normalized gradient and neighborhood standard deviation-based multi-focus image fusion method
CN103810682A (en) * 2012-11-06 2014-05-21 西安元朔科技有限公司 Novel image fusion method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Na Liu, Kun Gao, Yajun Song, Guoqiang Ni. "A Novel Super-resolution Image Fusion Algorithm based on Improved PCNN and Wavelet Transform." MIPPR 2009: Pattern Recognition and Computer Vision, Vol. 7496, pp. 1-8, 2009-10-30. *

Also Published As

Publication number Publication date
CN103679670A (en) 2014-03-26

Similar Documents

Publication Publication Date Title
CN103679670B (en) PCNN multi-source image fusion method based on an improved model
CN111709902B (en) Infrared and visible light image fusion method based on self-attention mechanism
CN103824054B (en) Face attribute recognition method based on a cascaded deep neural network
Hou et al. Fruit recognition based on convolution neural network
CN104361363B (en) Depth deconvolution feature learning network, generation method and image classification method
CN108665005B (en) Method for improving CNN-based image recognition performance by using DCGAN
CN108564611A (en) Monocular image depth estimation method based on a conditional generative adversarial network
CN106503654A (en) Facial emotion recognition method based on a deep sparse autoencoder network
CN1873693B (en) Image Fusion Method Based on Contourlet Transform and Improved Pulse-Coupled Neural Network
CN108764298A (en) Method for recognizing environmental influences in electric power images based on a single classifier
CN109242928A (en) Lightweight near-infrared image colorization deep learning model with fusion layers
CN103455990B (en) Image fusion method combining a visual attention mechanism with PCNN
CN108791302A (en) Driving behavior modeling
Ni et al. Intelligent defect detection method of photovoltaic modules based on deep learning
CN117952845A (en) Robust infrared and visible light image fusion optimization method
CN108073978A (en) Construction method of an ultra-deep artificial-intelligence learning model
Wang et al. MDD-ShipNet: Math-data integrated defogging for fog-occlusion ship detection
CN104036242A (en) Object recognition method based on convolutional restricted Boltzmann machine combining Centering Trick
CN103235943A (en) Principal component analysis-based (PCA-based) three-dimensional (3D) face recognition system
CN114647760B (en) An intelligent video image retrieval method based on neural network self-review and knowledge transmission mechanism
CN110647905A (en) A Terrorist-related Scene Recognition Method Based on Pseudo-Brain Network Model
Yang et al. Recognizing image semantic information through multi-feature fusion and SSAE-based deep network
Zhu et al. Emotion Recognition in Learning Scenes Supported by Smart Classroom and Its Application.
CN108073985A (en) Artificial-intelligence speech recognition method introducing ultra-deep learning
CN112819143B (en) Working memory computing system and method based on graph neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant