WO2021012752A1 - Short-range tracking method and system based on a spiking neural network - Google Patents
Short-range tracking method and system based on a spiking neural network
- Publication number
- WO2021012752A1 (application PCT/CN2020/089907, CN2020089907W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- neural network
- pulse
- layer
- convolutional
- convolutional neural
- Prior art date
Links
- 238000013528 artificial neural network Methods 0.000 title claims abstract description 143
- 238000012421 spiking Methods 0.000 title claims abstract description 76
- 238000000034 method Methods 0.000 title claims abstract description 60
- 238000013527 convolutional neural network Methods 0.000 claims abstract description 94
- 238000004364 calculation method Methods 0.000 claims abstract description 46
- 230000007246 mechanism Effects 0.000 claims abstract description 15
- 238000011176 pooling Methods 0.000 claims description 86
- 230000004913 activation Effects 0.000 claims description 63
- 238000010304 firing Methods 0.000 claims description 26
- 230000006870 function Effects 0.000 claims description 26
- 230000008569 process Effects 0.000 claims description 25
- 230000001186 cumulative effect Effects 0.000 claims description 24
- 210000004205 output neuron Anatomy 0.000 claims description 24
- 238000012546 transfer Methods 0.000 claims description 21
- 230000000284 resting effect Effects 0.000 claims description 18
- 238000012549 training Methods 0.000 claims description 16
- 238000010276 construction Methods 0.000 claims description 15
- 210000002569 neuron Anatomy 0.000 claims description 14
- 239000012528 membrane Substances 0.000 claims description 12
- 230000004044 response Effects 0.000 claims description 11
- 230000005012 migration Effects 0.000 claims description 9
- 238000013508 migration Methods 0.000 claims description 9
- 230000004048 modification Effects 0.000 claims description 9
- 238000012986 modification Methods 0.000 claims description 9
- 238000007667 floating Methods 0.000 claims description 6
- 238000012163 sequencing technique Methods 0.000 claims description 5
- 238000012545 processing Methods 0.000 claims description 4
- 239000011159 matrix material Substances 0.000 claims description 3
- 238000000605 extraction Methods 0.000 abstract description 6
- 238000013473 artificial intelligence Methods 0.000 abstract description 5
- 238000011156 evaluation Methods 0.000 description 8
- 238000011160 research Methods 0.000 description 6
- 238000013136 deep learning model Methods 0.000 description 4
- 210000004556 brain Anatomy 0.000 description 3
- 238000004422 calculation algorithm Methods 0.000 description 3
- 238000006243 chemical reaction Methods 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 238000003062 neural network model Methods 0.000 description 3
- 238000010606 normalization Methods 0.000 description 3
- 238000001208 nuclear magnetic resonance pulse sequence Methods 0.000 description 3
- 238000011161 development Methods 0.000 description 2
- 230000018109 developmental process Effects 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 230000006872 improvement Effects 0.000 description 2
- 230000001133 acceleration Effects 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005265 energy consumption Methods 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 238000003709 image segmentation Methods 0.000 description 1
- 230000010365 information processing Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 235000001968 nicotinic acid Nutrition 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/061—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
Definitions
- the invention relates to the technical field of artificial intelligence, and in particular to a short-range tracking method and system based on a spiking neural network.
- Spiking neural networks, known as the "third generation of neural networks", have become a research focus for pattern recognition problems such as image classification.
- The spiking neural network is a cutting-edge research topic in the field of artificial intelligence; it offers high computational efficiency, low energy consumption and low resource usage, and is easy to implement in hardware, making it an ideal choice for studying brain-like computing and coding strategies.
- Theoretical and applied research on spiking neural networks is of great significance for advancing artificial neural networks, and can also promote research on edge devices such as new artificial-intelligence chips that are not based on the von Neumann computing architecture.
- Tracking is an important research direction in the field of computer vision, with concrete applications in many fields such as autonomous driving, security, behavior recognition, and human-computer interaction.
- Deep learning models based on convolutional neural networks and autoencoders have made considerable progress in tracking, owing to the strong feature extraction capability of deep learning models.
- However, such deep learning models involve a large amount of computation, occupy substantial resources, and rely on high-end graphics cards for acceleration, so they cannot be applied to edge devices.
- Spiking neural network models are mostly used for classification problems, where no specific processing of the output pulses is required; fields such as tracking, which require additional operations after the output, have not yet been attempted.
- The purpose of the present invention is to provide a short-range tracking method and system based on a spiking neural network.
- The reconstructed spiking neural network effectively combines the strong feature extraction capability of the convolutional neural network with the high-efficiency computation of the spiking neural network.
- the invention provides a short-range tracking method based on a spiking neural network, which includes the following steps:
- the reconstructed spiking neural network is used to track the target in the input image.
- Pulse-encoding the input image based on the attention mechanism specifically includes the following steps:
- where p_max is the maximum pixel value in the feature map, p_min is the minimum pixel value in the feature map, p_i,j is the gray value of pixel (i, j) in the feature map, S is the total number of pulses of the feature map, and T is the total pulse time of the feature map.
- the input image is normalized
- For the activation layer, wherever an activation function was originally used, the activation function is replaced with the relu() activation function;
- For the pooling layer, if the network uses single-pulse output neurons, the original Max-Pooling or Average-Pooling layer is kept; if the network uses multi-pulse output neurons, the Max-Pooling layer of the pooling layer is changed to an Average-Pooling layer;
- the parameters of the convolutional neural network are transferred to the spiking neural network, and the spiking neural network is reconstructed.
- the specific process is:
- For the convolutional layers, construct convolution kernels of the same number and size as those of the convolutional layers of the convolutional neural network, and then directly transfer the weight parameters of the convolutional neural network to construct the convolutional layers of the spiking neural network;
- For the pooling layers, if the network uses single-pulse output neurons, the Max-Pooling layer of the convolutional neural network corresponds to the earliest pulse firing time within the 2×2 input region of the spiking neural network pooling layer, and the Average-Pooling layer of the convolutional neural network corresponds to the average pulse time of the spiking neural network pooling layer; if the network uses multi-pulse output neurons, the Average-Pooling layer of the pooling layer is computed by convolution;
- For the activation layers, the activation layers of the convolutional neural network are migrated to form the activation layers of the spiking neural network;
- Wherever the relu() activation function was used in the migrated activation layers, the linear activation method of the spiking neural network is used to compute the cumulative voltage: when the cumulative voltage reaches the firing threshold, an output pulse is emitted and the membrane voltage is reset to the resting potential;
- When the cumulative voltage is below the firing threshold, the current voltage value is recorded, and when the cumulative voltage falls below the resting potential, the membrane voltage is reset to the resting potential;
- For the fully connected layers, construct the same number of neurons as in the fully connected layers of the convolutional neural network, and directly transfer the weights of the fully connected layers of the convolutional neural network to form the fully connected layers of the spiking neural network.
- where, at the current time t, the quantities appearing in the formulas are: the firing time of the next pulse in the nth pulse-coded sequence, the firing time of the previous pulse in the nth pulse-coded sequence, and the distance between the current time t and the firing time of the next pulse in the pulse-coded sequence.
- the reconstructed spiking neural network is used to track the target in the input image, and the specific steps include:
- the invention provides a short-range tracking system based on a spiking neural network, including:
- an encoding module, which is used to pulse-encode the input image based on the attention mechanism;
- a construction module, which is used to modify the structure of the convolutional neural network so as to transfer the convolutional neural network parameters into the spiking neural network and reconstruct the spiking neural network;
- a calculation module, which is used to calculate the pulse similarity between corresponding feature points in adjacent image frames of the input image to obtain the regional similarity;
- a tracking module, which is used to track the target in the input image using the reconstructed spiking neural network.
- the encoding module performs pulse encoding on the input image based on the attention mechanism.
- the specific process is:
- where p_max is the maximum pixel value in the feature map, p_min is the minimum pixel value in the feature map, p_i,j is the gray value of pixel (i, j) in the feature map, S is the total number of pulses of the feature map, and T is the total pulse time of the feature map.
- the building module modifies the structure of the convolutional neural network
- the specific modification process for the structure of the convolutional neural network is as follows:
- the input image is normalized
- For the activation layer, wherever an activation function was originally used, the activation function is replaced with the relu() activation function;
- For the pooling layer, if the network uses single-pulse output neurons, the original Max-Pooling or Average-Pooling layer is kept; if the network uses multi-pulse output neurons, the Max-Pooling layer of the pooling layer is changed to an Average-Pooling layer;
- The construction module migrates the convolutional neural network parameters into the spiking neural network and reconstructs the spiking neural network.
- the specific process is:
- For the convolutional layers, construct convolution kernels of the same number and size as those of the convolutional layers of the convolutional neural network, and then directly transfer the weight parameters of the convolutional neural network to construct the convolutional layers of the spiking neural network;
- For the pooling layers, if the network uses single-pulse output neurons, the Max-Pooling layer of the convolutional neural network corresponds to the earliest pulse firing time within the 2×2 input region of the spiking neural network pooling layer, and the Average-Pooling layer of the convolutional neural network corresponds to the average pulse time of the spiking neural network pooling layer; if the network uses multi-pulse output neurons, the Average-Pooling layer of the pooling layer is computed by convolution;
- For the activation layers, the activation layers of the convolutional neural network are migrated to form the activation layers of the spiking neural network;
- Wherever the relu() activation function was used in the migrated activation layers, the linear activation method of the spiking neural network is used to compute the cumulative voltage: when the cumulative voltage reaches the firing threshold, an output pulse is emitted and the membrane voltage is reset to the resting potential;
- When the cumulative voltage is below the firing threshold, the current voltage value is recorded, and when the cumulative voltage falls below the resting potential, the membrane voltage is reset to the resting potential;
- For the fully connected layers, construct the same number of neurons as in the fully connected layers of the convolutional neural network, and directly transfer the weights of the fully connected layers of the convolutional neural network to form the fully connected layers of the spiking neural network.
- The present invention has the advantage that the structure of the convolutional neural network is modified so that the convolutional neural network parameters can be migrated into the spiking neural network and the spiking neural network can be reconstructed; the reconstructed spiking neural network combines the strong feature extraction capability of convolutional neural networks with the high-efficiency computation of spiking neural networks, achieves good tracking accuracy, and reduces resource occupation and hardware dependence in the tracking computation.
- Figure 1 is a flowchart of a short-range tracking method based on a spiking neural network in an embodiment of the present invention
- Figure 2 is a structural diagram of the SiamFC network
- Figure 3 is a structural diagram of the reconstructed spiking neural network.
- the embodiment of the present invention provides a short-range tracking method based on a spiking neural network.
- the reconstructed spiking neural network combines the strong feature extraction characteristics of the convolutional neural network and the high-efficiency calculation characteristics of the spiking neural network, and has good tracking accuracy. And it can reduce resource occupation in the tracking calculation process.
- the embodiment of the present invention also correspondingly provides a short-range tracking system based on the spiking neural network.
- an embodiment of the present invention provides a short-range tracking method based on a spiking neural network, including:
- The encoding method in the embodiment of the present invention is a spiking neural network encoding method, namely an encoding scheme based on the attention mechanism and the pulse firing rate. The input image is pulse-encoded based on the attention mechanism; the specific steps include:
- S101: Use a 3×3 receptive field region operator to perform a convolution operation on the input image to obtain a feature map. In a preferred embodiment a fixed receptive field region operator is used; in specific applications, the size and values of the receptive field region operator can of course be adjusted according to the observed effect.
- S102: Sort the pixels in the feature map in descending order of feature value, take a preset number of pixels according to the order, and set the feature values of the taken pixels to the feature value of the top-ranked pixel.
- The preset number can specifically be the top 20% of pixels; by setting the feature values of the top 20% of pixels to the maximum feature value, a sufficiently high maximum pulse firing rate is ensured.
- The number of pulses s_i,j emitted by each pixel of the feature map and its firing frequency f_i,j = T / s_i,j are then computed, and the pulse-coded sequences are generated from them, where p_max is the maximum pixel value in the feature map, p_min is the minimum pixel value in the feature map, p_i,j is the gray value of pixel (i, j) in the feature map, S is the total number of pulses of the feature map, and T is the total pulse time of the feature map.
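- As a concrete illustration of the encoding steps above, the following is a minimal sketch, not the patented implementation. The linear min-max mapping from pixel value to spike count, the evenly spaced spike times, and the helper name `attention_pulse_encode` are assumptions of the sketch; the 3×3 operator convolution, the top-20% attention step and the relation f_i,j = T / s_i,j follow the description.

```python
import numpy as np

def attention_pulse_encode(image, operator, S=120, T=200.0, top_ratio=0.2):
    """Hedged sketch of the attention-based pulse encoding described above."""
    h, w = image.shape
    kh, kw = operator.shape
    # Step 1: 3x3 receptive-field convolution over the input image (valid region only)
    feat = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(feat.shape[0]):
        for j in range(feat.shape[1]):
            feat[i, j] = np.sum(image[i:i + kh, j:j + kw] * operator)

    # Step 2: attention - raise the top 20% of feature values to the maximum value
    flat = np.sort(feat.ravel())[::-1]
    k = max(1, int(top_ratio * flat.size))
    feat = np.where(feat >= flat[k - 1], flat[0], feat)

    # Step 3: spike count per pixel (assumed min-max scaling onto [1, S]) and f = T / s
    p_min, p_max = feat.min(), feat.max()
    s = np.clip(np.round(S * (feat - p_min) / (p_max - p_min + 1e-12)), 1, S)
    f = T / s

    # Step 4: generate evenly spaced spike times within [0, T] for every pixel
    return {(i, j): np.arange(1, int(s[i, j]) + 1) * f[i, j]
            for i in range(feat.shape[0]) for j in range(feat.shape[1])}
```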
- S2: Modify the structure of the convolutional neural network so as to migrate the convolutional neural network parameters into the spiking neural network, and reconstruct the spiking neural network.
- the structure of the convolutional neural network is modified, and the specific modification process for the structure of the convolutional neural network is as follows:
- the input image is normalized.
- In addition, an abs() layer needs to be added to ensure that the input values are positive.
- For the activation layers, wherever an activation function was originally used, the activation function is replaced with the relu() activation function, so as to avoid introducing negative numbers later and thereby reduce the loss of accuracy after conversion.
- For the pooling layers, if the network uses single-pulse output neurons, the original Max-Pooling or Average-Pooling layer is kept; if the network uses multi-pulse output neurons, the Max-Pooling layer of the pooling layer is changed to an Average-Pooling layer;
- For the fully connected layers, the weights all use the L2 regularization strategy during the training phase, so as to speed up convergence of the weights into a relatively small range;
- Layers that cannot be directly represented are deleted, and the type of all weights in the convolutional neural network is set to 16-bit floating point, which improves computational efficiency after conversion and reduces resource occupation.
- Layers that cannot be directly represented include, for example, the LRN layer and the BN layer.
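- The structural modifications listed above can be expressed, for instance, as a pre-conversion pass over a model; the sketch below assumes a plain PyTorch `nn.Sequential` network and multi-pulse output neurons (so Max-Pooling becomes Average-Pooling), and is not the patented implementation. The L2 regularization of the fully connected weights would be applied at training time via the optimizer.

```python
import torch.nn as nn

def prepare_cnn_for_conversion(cnn: nn.Sequential) -> nn.Sequential:
    """Sketch of the pre-conversion modifications described above (assumed PyTorch model)."""
    layers = []
    for layer in cnn:
        if isinstance(layer, (nn.BatchNorm2d, nn.LocalResponseNorm)):
            continue                          # delete layers the SNN cannot represent (BN, LRN)
        if isinstance(layer, nn.MaxPool2d):   # multi-pulse neurons assumed:
            layer = nn.AvgPool2d(layer.kernel_size, layer.stride)   # Max-Pooling -> Average-Pooling
        if isinstance(layer, (nn.Sigmoid, nn.Tanh)):
            layer = nn.ReLU()                 # every activation replaced with relu()
        if isinstance(layer, (nn.Conv2d, nn.Linear)) and layer.bias is not None:
            nn.init.zeros_(layer.bias)        # all biases set to 0
        layers.append(layer)
    return nn.Sequential(*layers).half()      # weights stored as 16-bit floating point
```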
- For the convolutional layers, construct convolution kernels of the same number and size as those of the convolutional layers of the convolutional neural network, and then directly transfer the weight parameters of the convolutional neural network to construct the convolutional layers of the spiking neural network;
- For the pooling layers, if the network uses single-pulse output neurons, the Max-Pooling layer of the convolutional neural network corresponds to the earliest pulse firing time within the 2×2 input region of the spiking neural network pooling layer, and the Average-Pooling layer of the convolutional neural network corresponds to the average pulse time of the spiking neural network pooling layer; if the network uses multi-pulse output neurons, the Average-Pooling layer of the pooling layer is computed by convolution.
- The specific process of computing by convolution is: when the pooling region is 2×2, the average pooling operation is realized by a convolution operation with a stride of 2, with the convolution kernel size and parameters set accordingly, so that the calculation process is equivalent to that of the spiking convolutional layer.
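- A minimal check of this equivalence is sketched below; the 2×2 kernel with every weight equal to 1/4 is the value implied by, but not spelled out in, the text, since it is what makes a stride-2 convolution reproduce 2×2 average pooling.

```python
import torch
import torch.nn as nn

def avg_pool_as_conv(channels: int) -> nn.Conv2d:
    # One 2x2 kernel per channel (grouped convolution, no bias), stride 2,
    # with all weights set to 1/4 so the convolution averages each 2x2 block.
    conv = nn.Conv2d(channels, channels, kernel_size=2, stride=2,
                     groups=channels, bias=False)
    with torch.no_grad():
        conv.weight.fill_(0.25)
    return conv

x = torch.rand(1, 3, 8, 8)
assert torch.allclose(avg_pool_as_conv(3)(x), nn.AvgPool2d(2, 2)(x), atol=1e-6)
```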
- For the activation layers, the activation layers of the convolutional neural network are migrated to form the activation layers of the spiking neural network.
- Wherever the relu() activation function was used in the migrated activation layers, the linear activation method of the spiking neural network is used to compute the cumulative voltage: when the cumulative voltage reaches the firing threshold, an output pulse is emitted and the membrane voltage is reset to the resting potential;
- When the cumulative voltage is below the firing threshold, the current voltage value is recorded, and when the cumulative voltage falls below the resting potential, the membrane voltage is reset to the resting potential.
- The layers migrated in the embodiment of the present invention are the layers of the convolutional neural network after the modifications described above.
- For the fully connected layers, construct the same number of neurons as in the fully connected layers of the convolutional neural network, and directly transfer the weights of the fully connected layers of the convolutional neural network to form the fully connected layers of the spiking neural network.
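- The linear activation behaviour described above amounts to an integrate-and-fire rule; a minimal time-stepped sketch is given below, with the firing threshold of 1 and a resting potential of 0 taken as assumptions consistent with the settings mentioned later in this embodiment.

```python
import numpy as np

def if_neuron(weighted_input, threshold=1.0, v_rest=0.0):
    """Sketch of the linear-activation spiking neuron: weighted_input is the net
    input per time step; returns a 0/1 output spike train of the same length."""
    v = v_rest
    spikes = np.zeros_like(weighted_input, dtype=float)
    for t, x in enumerate(weighted_input):
        v += x                  # accumulate the membrane voltage linearly
        if v >= threshold:      # firing threshold reached: emit an output pulse
            spikes[t] = 1.0
            v = v_rest          # reset the membrane voltage to the resting potential
        elif v < v_rest:        # voltage below rest is clamped back to rest
            v = v_rest
        # otherwise the current sub-threshold voltage is simply recorded and kept
    return spikes
```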
- a template construction technique is proposed to modify the convolutional neural network
- a migration-based template construction technique is proposed to reconstruct the spiking neural network and perform weight normalization operations.
- where, at the current time t, the quantities appearing in the formulas are: the firing time of the next pulse in the nth pulse-coded sequence, the firing time of the previous pulse in the nth pulse-coded sequence, and the distance between the current time t and the firing time of the next pulse in the pulse-coded sequence.
- The pulse code in the present invention is defined as the AAP pulse code, which defines the WISI distance and the ISI distance.
- For two sequences to match, a condition must be met: the intervals between the pulses before and after the latest pulse are the same and the times of the previous pulses of the two sequences are the same; in this way the precise timing of the pulses themselves is taken into account, which meets the requirements of the evaluation method proposed above.
- The similarity of the two pulse sequences is finally obtained.
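- Because the WISI formulas themselves are not reproduced in this text, the sketch below only computes the intermediate quantities named above (previous and next firing times around the current time t for each sequence) and combines them with an assumed rule (mean absolute difference, mapped to a similarity); it is not the patented definition.

```python
import numpy as np

def wisi_distance(train_a, train_b, t_grid):
    """Rough WISI-style distance between two sorted arrays of spike times."""
    def prev_next(train, t):
        prev, nxt = train[train <= t], train[train > t]
        return (prev[-1] if prev.size else train[0],
                nxt[0] if nxt.size else train[-1])

    diffs = []
    for t in t_grid:
        pa, na = prev_next(train_a, t)
        pb, nb = prev_next(train_b, t)
        # compare the next-spike and previous-spike times of the two trains at t
        diffs.append(abs(na - nb) + abs(pa - pb))
    return float(np.mean(diffs))

def wisi_similarity(train_a, train_b, t_grid):
    return 1.0 / (1.0 + wisi_distance(train_a, train_b, t_grid))   # assumed mapping

# usage: spike trains from the encoding step, evaluated on a 1 ms grid over 200 ms
a, b = np.array([10.0, 50.0, 90.0]), np.array([12.0, 49.0, 95.0])
print(wisi_similarity(a, b, np.arange(0.0, 200.0, 1.0)))
```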
- The reconstructed spiking neural network is obtained by fusing the above pulse coding method, the modified convolutional neural network, and the WISI distance evaluation method.
- The reconstructed network is shown in Figure 3. It is based on SiamFC (a fully convolutional Siamese network used as the basic tracking algorithm); the structure of SiamFC is shown in Figure 2.
- The reconstructed spiking neural network is implemented with the TensorFlow deep learning framework; the SiamFC network is reproduced according to the convolutional structure in Table 1 below, and the spiking neural network structure is constructed according to Figure 2.
- the reconstructed spiking neural network is used to track the target in the input image, and the specific steps include:
- S402: Select the first frame of the input images as the template frame, and at the same time select the target box region on the input image; when selecting the target box region, the input image needs to be padded if the region exceeds the image boundary, and the region is finally resized to 127×127.
- S403: When processing the current image frame, select 3 regions around the region where the target was located in the previous image frame as sub-candidate boxes, each of size 255×255.
- S404: Use the trained spiking neural network to predict and recognize the template frame and the sub-candidate boxes to obtain three score response matrices; select the score response matrix with the largest response value and interpolate it to a size of 272×272 by the bicubic interpolation method; determine the offset of the peak response value from the central region of the input image, obtain the position of the target, and complete the tracking of the target in the input image.
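- Steps S402-S404 can be summarised by the sketch below. `snn_forward` (the trained spiking network's response computation) and `crop_and_resize` (cropping/padding a square region to a given size, optionally shifted) are hypothetical helpers assumed for illustration; the 127/255/272 sizes, the three sub-candidates and the bicubic upsampling follow the description.

```python
import numpy as np
import cv2  # used only for bicubic interpolation of the score map

def track_sequence(frames, init_box, snn_forward, crop_and_resize):
    """Sketch of the SiamFC-style short-range tracking loop described above."""
    template = crop_and_resize(frames[0], init_box, 127)          # 127x127 template frame
    box = init_box                                                # (x, y, w, h)
    for frame in frames[1:]:
        # three 255x255 sub-candidate regions around the previous target position
        candidates = [crop_and_resize(frame, box, 255, shift=s) for s in (-1, 0, 1)]
        responses = [snn_forward(template, c) for c in candidates]
        best = max(responses, key=lambda r: r.max())              # largest response value
        up = cv2.resize(best, (272, 272), interpolation=cv2.INTER_CUBIC)
        dy, dx = np.unravel_index(np.argmax(up), up.shape)
        # offset of the peak from the centre of the 272x272 map gives the displacement
        box = (box[0] + dx - 136, box[1] + dy - 136, box[2], box[3])
        yield box
```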
- The reconstructed spiking neural network is trained with the ILSVRC15 data set as the training set and the OTB100 data set as the test set.
- Training parameter settings: the batch size is 8 images; an exponentially decaying learning rate is used, with an initial value of 0.01 and a decay coefficient of 0.86; the Momentum method is chosen as the training algorithm, with a momentum coefficient of 0.9; for faster convergence, L2 regularization constrains the weights; training runs for up to 50 epochs, with an early stopping strategy added.
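- In TensorFlow (the framework named above), these settings could look roughly like the sketch below. The tiny stand-in model, the random data, the L2 strength, the decay step count and the early-stopping patience are placeholders or assumptions; only batch size 8, initial learning rate 0.01, decay factor 0.86, momentum 0.9, L2 regularization, 50 epochs and early stopping come from the description.

```python
import tensorflow as tf

l2 = tf.keras.regularizers.l2(1e-4)                     # L2 weight regularization (strength assumed)
model = tf.keras.Sequential([                           # placeholder stand-in, not SiamFC itself
    tf.keras.layers.Conv2D(16, 3, activation="relu",
                           kernel_regularizer=l2, input_shape=(127, 127, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, kernel_regularizer=l2),
])

lr = tf.keras.optimizers.schedules.ExponentialDecay(    # exponentially decaying learning rate
    initial_learning_rate=0.01, decay_steps=1000, decay_rate=0.86)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr, momentum=0.9),
              loss="mse")                               # placeholder loss

x, y = tf.random.uniform((32, 127, 127, 3)), tf.random.uniform((32, 1))   # placeholder data
model.fit(x, y, batch_size=8, epochs=50,
          callbacks=[tf.keras.callbacks.EarlyStopping(monitor="loss", patience=5)])
```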
- the encoding simulation time is 200ms, the maximum pulse rate is 0.6, that is, 120 pulses can be generated at most.
- the weight is normalized.
- the weight normalization parameter is 99.9%
- the layer-by-layer voltage threshold is set to 1
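- Percentile-based weight normalization of this kind is usually implemented by recording each layer's activations on sample data and rescaling the transferred weights by the ratio of successive layers' 99.9th-percentile activations, so that a firing threshold of 1 is rarely exceeded in a single step; the sketch below assumes this standard data-based scheme rather than reproducing the patent's exact procedure.

```python
import numpy as np

def normalize_weights(weights, activations, percentile=99.9):
    """weights: per-layer weight arrays to transfer; activations: per-layer
    activation samples recorded on training data for the corresponding layers."""
    scaled, prev_scale = [], 1.0
    for w, act in zip(weights, activations):
        scale = np.percentile(act, percentile)   # robust 99.9% normalization parameter
        scaled.append(w * prev_scale / scale)    # rescale relative to the previous layer
        prev_scale = scale
    return scaled
```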
- The BN layer is used in SiamFC, whereas Norm-SiamFC, obtained after normalizing the convolutional layers used in the middle, does not use the BN layer.
- The short-range tracking method based on the spiking neural network in the embodiment of the present invention modifies the structure of the convolutional neural network so that the parameters of the convolutional neural network can be transferred to the spiking neural network, and reconstructs the spiking neural network.
- The reconstructed network combines the strong feature extraction capability of the convolutional neural network with the high-efficiency computation of the spiking neural network; it achieves good tracking accuracy, reduces resource occupation and hardware dependence in the tracking computation, and further promotes the application of spiking neural networks.
- The invention provides a short-range tracking system based on a spiking neural network, including:
- an encoding module, which is used to pulse-encode the input image based on the attention mechanism;
- a construction module, which is used to modify the structure of the convolutional neural network so as to transfer the convolutional neural network parameters into the spiking neural network and reconstruct the spiking neural network;
- a calculation module, which is used to calculate the pulse similarity between corresponding feature points in adjacent image frames of the input image to obtain the regional similarity;
- a tracking module, which is used to track the target in the input image using the reconstructed spiking neural network.
- the encoding module pulse-encodes the input image based on the attention mechanism.
- the specific process is:
- where p_max is the maximum pixel value in the feature map, p_min is the minimum pixel value in the feature map, p_i,j is the gray value of pixel (i, j) in the feature map, S is the total number of pulses of the feature map, and T is the total pulse time of the feature map.
- the construction module modifies the structure of the convolutional neural network.
- the specific modification process of the convolutional neural network structure is:
- the input image is normalized
- For the activation layer, wherever an activation function was originally used, the activation function is replaced with the relu() activation function;
- For the pooling layer, if the network uses single-pulse output neurons, the original Max-Pooling or Average-Pooling layer is kept; if the network uses multi-pulse output neurons, the Max-Pooling layer of the pooling layer is changed to an Average-Pooling layer;
- The construction module transfers the convolutional neural network parameters into the spiking neural network and reconstructs the spiking neural network.
- the specific process is:
- For the convolutional layers, construct convolution kernels of the same number and size as those of the convolutional layers of the convolutional neural network, and then directly transfer the weight parameters of the convolutional neural network to construct the convolutional layers of the spiking neural network;
- For the pooling layers, if the network uses single-pulse output neurons, the Max-Pooling layer of the convolutional neural network corresponds to the earliest pulse firing time within the 2×2 input region of the spiking neural network pooling layer, and the Average-Pooling layer of the convolutional neural network corresponds to the average pulse time of the spiking neural network pooling layer; if the network uses multi-pulse output neurons, the Average-Pooling layer of the pooling layer is computed by convolution;
- For the activation layers, the activation layers of the convolutional neural network are migrated to form the activation layers of the spiking neural network;
- Wherever the relu() activation function was used in the migrated activation layers, the linear activation method of the spiking neural network is used to compute the cumulative voltage: when the cumulative voltage reaches the firing threshold, an output pulse is emitted and the membrane voltage is reset to the resting potential;
- When the cumulative voltage is below the firing threshold, the current voltage value is recorded, and when the cumulative voltage falls below the resting potential, the membrane voltage is reset to the resting potential;
- For the fully connected layers, construct the same number of neurons as in the fully connected layers of the convolutional neural network, and directly transfer the weights of the fully connected layers of the convolutional neural network to form the fully connected layers of the spiking neural network.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computing Systems (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Neurology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Microelectronics & Electronic Packaging (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (10)
- A short-range tracking method based on a spiking neural network, characterized in that it comprises the following steps: pulse-encoding an input image based on an attention mechanism; modifying the structure of a convolutional neural network so as to migrate the convolutional neural network parameters into a spiking neural network and reconstruct the spiking neural network; calculating the pulse similarity between corresponding feature points in adjacent image frames of the input image to obtain a regional similarity; and tracking a target in the input image using the reconstructed spiking neural network.
- The short-range tracking method based on a spiking neural network according to claim 1, wherein the pulse-encoding of the input image based on the attention mechanism specifically comprises: performing a convolution operation on the input image with a 3×3 receptive field region operator to obtain a feature map; sorting the pixels in the feature map in descending order of feature value, taking a preset number of pixels according to the order, and setting the feature values of the taken pixels to the feature value of the top-ranked pixel; calculating the number of pulses s_i,j emitted by each pixel in the feature map, wherein p_max is the maximum pixel value of the pixels in the feature map, p_min is the minimum pixel value of the pixels in the feature map, p_i,j is the gray value of the pixel in the feature map, and S is the number of pulses of the feature map; and calculating the frequency f_i,j of each pixel in the feature map and generating the pulse-coded sequence based on the calculated number of pulses s_i,j of each pixel, the frequency of each pixel in the feature map being calculated as f_i,j = T / s_i,j, where T is the total pulse time of the feature map.
- The short-range tracking method based on a spiking neural network according to claim 1, wherein the modification of the structure of the convolutional neural network specifically comprises: for the input layer, normalizing the input image; for the convolutional layers, setting all biases in the convolutional layers to 0 while leaving the original kernel sizes and initialization settings unchanged; for the activation layers, replacing the activation function with the relu() activation function wherever an activation function was originally used; for the pooling layers, if the network uses single-pulse output neurons, keeping the original Max-Pooling or Average-Pooling layer in the pooling layers, and if the network uses multi-pulse output neurons, changing the Max-Pooling layer of the pooling layers to an Average-Pooling layer; for the fully connected layers, setting all biases in the fully connected layers to 0 while leaving the original number of neurons and the initialization of the fully connected layers unchanged, the weights of the fully connected layers all using an L2 regularization strategy during the training phase; and deleting the layers that cannot be directly represented and setting the type of all weights in the convolutional neural network to 16-bit floating point.
- The short-range tracking method based on a spiking neural network according to claim 3, wherein the migration of the convolutional neural network parameters into the spiking neural network and the reconstruction of the spiking neural network, for the construction of the spiking neural network structure, specifically comprise: for the convolutional layers, constructing convolution kernels of the same number and the same size as those of the convolutional layers of the convolutional neural network, and then directly transferring the weight parameters of the convolutional neural network to construct the convolutional layers of the spiking neural network; for the pooling layers, if the network uses single-pulse output neurons, the Max-Pooling layer of the convolutional neural network corresponds to the earliest pulse firing time within the 2×2 input region of the spiking neural network pooling layer, and the Average-Pooling layer of the convolutional neural network corresponds to the average pulse time of the spiking neural network pooling layer, and if the network uses multi-pulse output neurons, the Average-Pooling layer of the pooling layer is computed by convolution; for the activation layers, migrating the activation layers of the convolutional neural network to form the activation layers of the spiking neural network, and, wherever the relu() activation function is used in the migrated activation layers, using the linear activation method of the spiking neural network to compute the cumulative voltage, wherein when the cumulative voltage reaches the firing threshold an output pulse is emitted and the membrane voltage is reset to the resting potential, when the cumulative voltage is below the firing threshold the current voltage value is recorded, and when the cumulative voltage falls below the resting potential the membrane voltage is reset to the resting potential; and for the fully connected layers, constructing the same number of neurons as in the fully connected layers of the convolutional neural network and directly transferring the weights of the fully connected layers of the convolutional neural network to form the fully connected layers of the spiking neural network.
- The short-range tracking method based on a spiking neural network according to claim 1, wherein, in calculating the pulse similarity between corresponding feature points in adjacent image frames of the input image to obtain the regional similarity, the pulse similarity between two feature points is calculated by: calculating the distance Δt_P(t) between the current time t and the firing time of the next pulse in the pulse-coded sequence; calculating the difference Δt_F(t) between the firing times of the pulses that follow the current time t in the two pulse-coded sequences; and calculating the distance s_WISI between the two pulse-coded sequences at the current time t.
- The short-range tracking method based on a spiking neural network according to claim 1, wherein tracking the target in the input image using the reconstructed spiking neural network specifically comprises: training the reconstructed spiking neural network with a training set to obtain a trained spiking neural network; selecting the first frame of the input images as the template frame, and at the same time selecting a target box region on the input image; when processing the current image frame, selecting 3 regions around the region where the target is located in the previous image frame as sub-candidate boxes; and using the trained spiking neural network to predict and recognize the template frame and the sub-candidate boxes to obtain three score response matrices, selecting the score response matrix with the largest response value, interpolating it by the bicubic interpolation method, determining the offset of the response value from the central region of the input image, obtaining the position of the target, and completing the tracking of the target in the input image.
- A short-range tracking system based on a spiking neural network, characterized in that it comprises: an encoding module for pulse-encoding an input image based on an attention mechanism; a construction module for modifying the structure of a convolutional neural network so as to migrate the convolutional neural network parameters into a spiking neural network and reconstruct the spiking neural network; a calculation module for calculating the pulse similarity between corresponding feature points in adjacent image frames of the input image to obtain a regional similarity; and a tracking module for tracking a target in the input image using the reconstructed spiking neural network.
- The short-range tracking system based on a spiking neural network according to claim 7, wherein the encoding module pulse-encodes the input image based on the attention mechanism by: performing a convolution operation on the input image with a 3×3 receptive field region operator to obtain a feature map; sorting the pixels in the feature map in descending order of feature value, taking a preset number of pixels according to the order, and setting the feature values of the taken pixels to the feature value of the top-ranked pixel; calculating the number of pulses s_i,j emitted by each pixel in the feature map, wherein p_max is the maximum pixel value of the pixels in the feature map, p_min is the minimum pixel value of the pixels in the feature map, p_i,j is the gray value of the pixel in the feature map, and S is the number of pulses of the feature map; and calculating the frequency f_i,j of each pixel in the feature map and generating the pulse-coded sequence based on the calculated number of pulses s_i,j of each pixel, the frequency being calculated as f_i,j = T / s_i,j, where T is the total pulse time of the feature map.
- The short-range tracking system based on a spiking neural network according to claim 7, wherein the construction module modifies the structure of the convolutional neural network by: for the input layer, normalizing the input image; for the convolutional layers, setting all biases in the convolutional layers to 0 while leaving the original kernel sizes and initialization settings unchanged; for the activation layers, replacing the activation function with the relu() activation function wherever an activation function was originally used; for the pooling layers, if the network uses single-pulse output neurons, keeping the original Max-Pooling or Average-Pooling layer in the pooling layers, and if the network uses multi-pulse output neurons, changing the Max-Pooling layer of the pooling layers to an Average-Pooling layer; for the fully connected layers, setting all biases in the fully connected layers to 0 while leaving the original number of neurons and the initialization of the fully connected layers unchanged, the weights of the fully connected layers all using an L2 regularization strategy during the training phase; and deleting the layers that cannot be directly represented and setting the type of all weights in the convolutional neural network to 16-bit floating point.
- The short-range tracking system based on a spiking neural network according to claim 9, wherein the construction module migrates the convolutional neural network parameters into the spiking neural network and reconstructs the spiking neural network, the construction of the spiking neural network structure specifically comprising: for the convolutional layers, constructing convolution kernels of the same number and the same size as those of the convolutional layers of the convolutional neural network, and then directly transferring the weight parameters of the convolutional neural network to construct the convolutional layers of the spiking neural network; for the pooling layers, if the network uses single-pulse output neurons, the Max-Pooling layer of the convolutional neural network corresponds to the earliest pulse firing time within the 2×2 input region of the spiking neural network pooling layer, and the Average-Pooling layer of the convolutional neural network corresponds to the average pulse time of the spiking neural network pooling layer, and if the network uses multi-pulse output neurons, the Average-Pooling layer of the pooling layer is computed by convolution; for the activation layers, migrating the activation layers of the convolutional neural network to form the activation layers of the spiking neural network, and, wherever the relu() activation function is used in the migrated activation layers, using the linear activation method of the spiking neural network to compute the cumulative voltage, wherein when the cumulative voltage reaches the firing threshold an output pulse is emitted and the membrane voltage is reset to the resting potential, when the cumulative voltage is below the firing threshold the current voltage value is recorded, and when the cumulative voltage falls below the resting potential the membrane voltage is reset to the resting potential; and for the fully connected layers, constructing the same number of neurons as in the fully connected layers of the convolutional neural network and directly transferring the weights of the fully connected layers of the convolutional neural network to form the fully connected layers of the spiking neural network.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910668450.0 | 2019-07-23 | ||
CN201910668450.0A CN110555523B (zh) | 2019-07-23 | 2019-07-23 | 一种基于脉冲神经网络的短程跟踪方法及系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021012752A1 true WO2021012752A1 (zh) | 2021-01-28 |
Family
ID=68735812
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/089907 WO2021012752A1 (zh) | 2019-07-23 | 2020-05-13 | 一种基于脉冲神经网络的短程跟踪方法及系统 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110555523B (zh) |
WO (1) | WO2021012752A1 (zh) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110555523B (zh) * | 2019-07-23 | 2022-03-29 | 中建三局智能技术有限公司 | 一种基于脉冲神经网络的短程跟踪方法及系统 |
CN111444936A (zh) * | 2020-01-14 | 2020-07-24 | 中南大学 | 基于脉冲神经网络的高光谱遥感影像分类方法 |
CN111460906B (zh) * | 2020-03-05 | 2023-05-26 | 重庆大学 | 一种基于集成学习的脉冲神经网络模式识别方法及系统 |
CN111858989B (zh) * | 2020-06-09 | 2023-11-10 | 西安工程大学 | 一种基于注意力机制的脉冲卷积神经网络的图像分类方法 |
CN112116010B (zh) * | 2020-09-21 | 2023-12-12 | 中国科学院自动化研究所 | 基于膜电势预处理的ann-snn转换的分类方法 |
CN112381857A (zh) * | 2020-11-12 | 2021-02-19 | 天津大学 | 一种基于脉冲神经网络的类脑目标跟踪方法 |
CN112464807A (zh) * | 2020-11-26 | 2021-03-09 | 北京灵汐科技有限公司 | 视频动作识别方法、装置、电子设备和存储介质 |
CN112633497B (zh) * | 2020-12-21 | 2023-08-18 | 中山大学 | 一种基于重加权膜电压的卷积脉冲神经网络的训练方法 |
CN112906884B (zh) * | 2021-02-05 | 2023-04-18 | 鹏城实验室 | 一种基于脉冲连续吸引子网络的类脑预测跟踪方法 |
CN113159276B (zh) * | 2021-03-09 | 2024-04-16 | 北京大学 | 模型优化部署方法、系统、设备及存储介质 |
CN112953972A (zh) * | 2021-04-08 | 2021-06-11 | 周士博 | 一种单脉冲神经网络时域编码神经元的网络入侵检测方法 |
CN113641292B (zh) * | 2021-07-09 | 2022-08-12 | 荣耀终端有限公司 | 在触摸屏上进行操作的方法和电子设备 |
CN113313119B (zh) * | 2021-07-30 | 2021-11-09 | 深圳市海清视讯科技有限公司 | 图像识别方法、装置、设备、介质及产品 |
CN114549852B (zh) * | 2022-02-24 | 2023-04-18 | 四川大学 | 基于颜色拮抗与注意力机制的脉冲神经网络训练方法 |
CN114429491B (zh) * | 2022-04-07 | 2022-07-08 | 之江实验室 | 一种基于事件相机的脉冲神经网络目标跟踪方法和系统 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102346489A (zh) * | 2010-07-28 | 2012-02-08 | 中国科学院自动化研究所 | 基于脉冲神经网络的机器人跟踪目标的控制方法 |
US10248675B2 (en) * | 2013-10-16 | 2019-04-02 | University Of Tennessee Research Foundation | Method and apparatus for providing real-time monitoring of an artifical neural network |
CN106250981B (zh) * | 2015-06-10 | 2022-04-01 | 三星电子株式会社 | 减少存储器访问和网络内带宽消耗的脉冲神经网络 |
CN107333040B (zh) * | 2017-07-13 | 2020-02-21 | 中国科学院半导体研究所 | 仿生视觉成像与处理装置 |
CN108830157B (zh) * | 2018-05-15 | 2021-01-22 | 华北电力大学(保定) | 基于注意力机制和3d卷积神经网络的人体行为识别方法 |
WO2019246487A1 (en) * | 2018-06-21 | 2019-12-26 | Trustees Of Boston University | Auditory signal processor using spiking neural network and stimulus reconstruction with top-down attention control |
CN109214395A (zh) * | 2018-08-21 | 2019-01-15 | 电子科技大学 | 一种基于脉冲神经网络的图像特征描述方法 |
CN113111758B (zh) * | 2021-04-06 | 2024-01-12 | 中山大学 | 一种基于脉冲神经网络的sar图像舰船目标识别方法 |
-
2019
- 2019-07-23 CN CN201910668450.0A patent/CN110555523B/zh active Active
-
2020
- 2020-05-13 WO PCT/CN2020/089907 patent/WO2021012752A1/zh active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106407990A (zh) * | 2016-09-10 | 2017-02-15 | 天津大学 | 基于事件驱动的仿生目标识别系统 |
WO2018052496A1 (en) * | 2016-09-19 | 2018-03-22 | Hrl Laboratories, Llc | Method for object detection in digital image and video using spiking neural networks |
CN106845541A (zh) * | 2017-01-17 | 2017-06-13 | 杭州电子科技大学 | 一种基于生物视觉与精确脉冲驱动神经网络的图像识别方法 |
CN107292915A (zh) * | 2017-06-15 | 2017-10-24 | 国家新闻出版广电总局广播科学研究院 | 基于卷积神经网络的目标跟踪方法 |
CN109816026A (zh) * | 2019-01-29 | 2019-05-28 | 清华大学 | 卷积神经网络和脉冲神经网络的融合结构及方法 |
CN110555523A (zh) * | 2019-07-23 | 2019-12-10 | 中建三局智能技术有限公司 | 一种基于脉冲神经网络的短程跟踪方法及系统 |
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112835844A (zh) * | 2021-03-03 | 2021-05-25 | 苏州蓝甲虫机器人科技有限公司 | 一种脉冲神经网络计算负载的通信稀疏化方法 |
CN112835844B (zh) * | 2021-03-03 | 2024-03-19 | 苏州蓝甲虫机器人科技有限公司 | 一种脉冲神经网络计算负载的通信稀疏化方法 |
CN113034542A (zh) * | 2021-03-09 | 2021-06-25 | 北京大学 | 一种运动目标检测跟踪方法 |
CN113034542B (zh) * | 2021-03-09 | 2023-10-10 | 北京大学 | 一种运动目标检测跟踪方法 |
CN113435246A (zh) * | 2021-05-18 | 2021-09-24 | 西安电子科技大学 | 一种辐射源个体智能识别方法、系统及终端 |
CN113435246B (zh) * | 2021-05-18 | 2024-04-05 | 西安电子科技大学 | 一种辐射源个体智能识别方法、系统及终端 |
CN113077017A (zh) * | 2021-05-24 | 2021-07-06 | 河南大学 | 基于脉冲神经网络的合成孔径图像分类方法 |
CN113077017B (zh) * | 2021-05-24 | 2022-12-13 | 河南大学 | 基于脉冲神经网络的合成孔径图像分类方法 |
CN113673310B (zh) * | 2021-07-05 | 2024-06-11 | 西安电子科技大学 | 一种基于增强孪生网络的舰船追踪方法 |
CN113673310A (zh) * | 2021-07-05 | 2021-11-19 | 西安电子科技大学 | 一种基于增强孪生网络的舰船追踪方法 |
CN113807421B (zh) * | 2021-09-07 | 2024-03-19 | 华中科技大学 | 基于脉冲发送皮层模型的注意力模块的特征图处理方法 |
CN113807421A (zh) * | 2021-09-07 | 2021-12-17 | 华中科技大学 | 基于脉冲发送皮层模型的注意力模块的特征图处理方法 |
CN113887645B (zh) * | 2021-10-13 | 2024-02-13 | 西北工业大学 | 一种基于联合注意力孪生网络的遥感图像融合分类方法 |
CN113887645A (zh) * | 2021-10-13 | 2022-01-04 | 西北工业大学 | 一种基于联合注意力孪生网络的遥感图像融合分类方法 |
CN114037050A (zh) * | 2021-10-21 | 2022-02-11 | 大连理工大学 | 一种基于脉冲神经网络内在可塑性的机器人退化环境避障方法 |
CN114037050B (zh) * | 2021-10-21 | 2022-08-16 | 大连理工大学 | 一种基于脉冲神经网络内在可塑性的机器人退化环境避障方法 |
CN114118168A (zh) * | 2021-12-08 | 2022-03-01 | 中国人民解放军96901部队26分队 | 多站联合的电磁脉冲事件识别方法、系统和设备 |
CN114282647A (zh) * | 2021-12-09 | 2022-04-05 | 上海应用技术大学 | 基于脉冲神经网络的神经形态视觉传感器目标检测方法 |
CN114282647B (zh) * | 2021-12-09 | 2024-02-02 | 上海应用技术大学 | 基于脉冲神经网络的神经形态视觉传感器目标检测方法 |
CN114489095B (zh) * | 2021-12-11 | 2023-12-26 | 西北工业大学 | 一种应用于变体飞行器的类脑脉冲神经网络控制方法 |
CN114489095A (zh) * | 2021-12-11 | 2022-05-13 | 西北工业大学 | 一种应用于变体飞行器的类脑脉冲神经网络控制方法 |
CN114359200A (zh) * | 2021-12-28 | 2022-04-15 | 中国科学院西安光学精密机械研究所 | 基于脉冲耦合神经网络的图像清晰度评估方法及终端设备 |
CN114359200B (zh) * | 2021-12-28 | 2023-04-18 | 中国科学院西安光学精密机械研究所 | 基于脉冲耦合神经网络的图像清晰度评估方法及终端设备 |
CN114386578A (zh) * | 2022-01-12 | 2022-04-22 | 西安石油大学 | 一种海思无npu硬件上实现的卷积神经网络方法 |
CN114519847A (zh) * | 2022-01-13 | 2022-05-20 | 东南大学 | 一种适用于车路协同感知系统的目标一致性判别方法 |
CN114549973A (zh) * | 2022-01-25 | 2022-05-27 | 河南大学 | 面向软件定义卫星的高光谱图像类脑分类方法 |
CN114627154A (zh) * | 2022-03-18 | 2022-06-14 | 中国电子科技集团公司第十研究所 | 一种在频域部署的目标跟踪方法、电子设备及存储介质 |
CN114708639B (zh) * | 2022-04-07 | 2024-05-14 | 重庆大学 | 一种基于异构脉冲神经网络的人脸识别的fpga芯片 |
CN114708639A (zh) * | 2022-04-07 | 2022-07-05 | 重庆大学 | 一种基于异构脉冲神经网络的人脸识别的fpga芯片 |
CN114970829A (zh) * | 2022-06-08 | 2022-08-30 | 中国电信股份有限公司 | 脉冲信号处理方法、装置、设备及存储 |
CN114970829B (zh) * | 2022-06-08 | 2023-11-17 | 中国电信股份有限公司 | 脉冲信号处理方法、装置、设备及存储 |
CN114972435A (zh) * | 2022-06-10 | 2022-08-30 | 东南大学 | 基于长短时集成外观更新机制的目标跟踪方法 |
CN115586254B (zh) * | 2022-09-30 | 2024-05-03 | 陕西师范大学 | 一种基于卷积神经网络识别金属材料的方法及系统 |
CN115586254A (zh) * | 2022-09-30 | 2023-01-10 | 陕西师范大学 | 一种基于卷积神经网络识别金属材料的方法及系统 |
CN115723280A (zh) * | 2022-12-07 | 2023-03-03 | 河北科技大学 | 厚度可调节的聚酰亚胺薄膜的生产设备 |
CN117237604A (zh) * | 2023-09-14 | 2023-12-15 | 电子科技大学重庆微电子产业技术研究院 | 一种目标跟踪方法、装置、计算机设备及存储介质 |
CN117314972B (zh) * | 2023-11-21 | 2024-02-13 | 安徽大学 | 一种基于多类注意力机制的脉冲神经网络的目标跟踪方法 |
CN117314972A (zh) * | 2023-11-21 | 2023-12-29 | 安徽大学 | 一种基于多类注意力机制的脉冲神经网络的目标跟踪方法 |
CN118072079A (zh) * | 2024-01-29 | 2024-05-24 | 中国科学院自动化研究所 | 基于脉冲神经网络的小目标物体识别方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
CN110555523A (zh) | 2019-12-10 |
CN110555523B (zh) | 2022-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021012752A1 (zh) | 一种基于脉冲神经网络的短程跟踪方法及系统 | |
WO2021244079A1 (zh) | 智能家居环境中图像目标检测方法 | |
CN110427875B (zh) | 基于深度迁移学习和极限学习机的红外图像目标检测方法 | |
CN109741318B (zh) | 基于有效感受野的单阶段多尺度特定目标的实时检测方法 | |
EP4080416A1 (en) | Adaptive search method and apparatus for neural network | |
CN109447034A (zh) | 基于YOLOv3网络的自动驾驶中交通标识检测方法 | |
CN110647991B (zh) | 一种基于无监督领域自适应的三维人体姿态估计方法 | |
CN108427921A (zh) | 一种基于卷积神经网络的人脸识别方法 | |
CN112418330A (zh) | 一种基于改进型ssd的小目标物体高精度检测方法 | |
CN110543906B (zh) | 基于Mask R-CNN模型的肤质自动识别方法 | |
CN111612136B (zh) | 一种神经形态视觉目标分类方法及系统 | |
CN109086653A (zh) | 手写模型训练方法、手写字识别方法、装置、设备及介质 | |
CN108446676A (zh) | 基于有序编码及多层随机投影的人脸图像年龄判别方法 | |
CN111275171A (zh) | 一种基于参数共享的多尺度超分重建的小目标检测方法 | |
CN114612660A (zh) | 一种基于多特征融合点云分割的三维建模方法 | |
CN117454124A (zh) | 一种基于深度学习的船舶运动预测方法及系统 | |
WO2024016739A1 (zh) | 训练神经网络模型的方法、电子设备、云端、集群及介质 | |
CN110111365A (zh) | 基于深度学习的训练方法和装置以及目标跟踪方法和装置 | |
CN107633196A (zh) | 一种基于卷积神经网络的眼球移动预测方案 | |
CN114529949A (zh) | 一种基于深度学习的轻量级手势识别方法 | |
CN118628736A (zh) | 基于聚类思想的弱监督室内点云语义分割方法、装置及介质 | |
CN110334747A (zh) | 基于改进卷积神经网络的图像识别方法及应用 | |
WO2021258482A1 (zh) | 基于迁移与弱监督的美丽预测方法、装置及存储介质 | |
Mallet et al. | Hybrid Deepfake Detection Utilizing MLP and LSTM | |
CN116188870A (zh) | 一种基于脉冲卷积神经网络的钢材表面缺陷图像分类方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20843998 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20843998 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 250523) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20843998 Country of ref document: EP Kind code of ref document: A1 |