CN112580737A - Pulse neural network feature extraction method based on multi-scale feature fusion - Google Patents

Pulse neural network feature extraction method based on multi-scale feature fusion

Info

Publication number
CN112580737A
CN112580737A
Authority
CN
China
Prior art keywords
image
neural network
pulse
feature fusion
fusion
Prior art date
Legal status
Pending
Application number
CN202011573605.1A
Other languages
Chinese (zh)
Inventor
刘佳雯
崔向阳
牛慧博
王楠
孟庆磊
Current Assignee
Aerospace Science And Technology Network Information Development Co ltd
Original Assignee
Aerospace Science And Technology Network Information Development Co ltd
Priority date
Filing date
Publication date
Application filed by Aerospace Science And Technology Network Information Development Co ltd filed Critical Aerospace Science And Technology Network Information Development Co ltd
Priority to CN202011573605.1A
Publication of CN112580737A
Legal status: Pending

Classifications

    • G06F18/253: Pattern recognition; analysing; fusion techniques of extracted features
    • G06N3/049: Computing arrangements based on biological models; neural networks; architecture; temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T5/20: Image enhancement or restoration using local operators
    • G06T2207/20081: Indexing scheme for image analysis or image enhancement; training; learning
    • G06T2207/20084: Indexing scheme for image analysis or image enhancement; artificial neural networks [ANN]

Abstract

The invention relates to a pulse neural network feature extraction method based on multi-scale feature fusion, which comprises the following steps: an input binary image is preprocessed with a multi-scale feature fusion method; the pixels of the input image are then converted into an oscillating cosine function of the membrane potential by a biologically grounded spike-coding method, phase coding; the spike firing time corresponding to the current pixel is obtained from the relation satisfied between firing time and phase; and the image pixel information is finally encoded as timing information along the time dimension and input into the spiking neural network. Preprocessing the input binary image with the multi-scale feature fusion method comprises: applying a Gaussian filtering operation to the image, sliding Gaussian filters of different sizes over it, where a high filter response in an image region indicates a high correlation between that region and the feature the filter detects, and then fusing the image features obtained by the differently sized filters.

Description

Pulse neural network feature extraction method based on multi-scale feature fusion
Technical Field
The invention relates to image recognition technology, and in particular to a pulse neural network feature extraction method based on multi-scale feature fusion.
Background
When a spiking neural network is used for image retrieval and for image recognition and classification, a key step is the neural information encoding process. This process serves two purposes: feature extraction, i.e., extracting the features contained in an image, and spike-train generation, i.e., converting the visual stimulus signal into a temporal spike signal that the neural network subsequently receives; the finally encoded temporal sequence affects the performance of the network. Biologically, feature extraction manifests as neurons extracting and sampling the key features of the input information in some manner, the process that deep learning calls feature extraction of images; the detailed principles and algorithms of this computation in the brain are still not mature, so there is no unified standard for extracting image features.
The widely used spiking-network coding methods still leave a great deal of redundant information when extracting features from the input. In particular, when spiking neural networks are applied to image retrieval, existing methods merely use phase coding to encode the input image pixels into time series as the network input, and the insufficient feature extraction in this process limits the performance of the algorithm. Therefore, to improve the efficiency of neural information coding and fully extract the key features of the information, the invention does not directly use the result of neural coding when encoding the information; instead, it proposes a feature extraction method, extracting the key features of the input information before encoding.
In solving the image retrieval problem with spiking neural networks, existing methods simply encode the input image information into a spike time sequence with a spike-coding method; this produces a large amount of redundant information during feature extraction and neglects the importance of extracting key features. The insufficient feature extraction from the input in turn limits the precision of the spiking-network retrieval algorithm.
Disclosure of Invention
The invention aims to provide a pulse neural network feature extraction method based on multi-scale feature fusion that solves the problems in the prior art.
The invention discloses a pulse neural network feature extraction method based on multi-scale feature fusion, which comprises the following steps: an input binary image is preprocessed with a multi-scale feature fusion method; the pixels of the input image are then converted into an oscillating cosine function of the membrane potential by a biologically grounded spike-coding method, phase coding; the spike firing time corresponding to the current pixel is obtained from the relation satisfied between firing time and phase; and the image pixel information is finally encoded as timing information along the time dimension and input into the spiking neural network. Preprocessing the input binary image with the multi-scale feature fusion method comprises: applying a Gaussian filtering operation to the image; when Gaussian filters of different sizes slide over the image, a high filter response in a region indicates a high correlation between that region and the feature detected by the filter, while a low response indicates dissimilarity; the image features obtained by the differently sized filters are then fused.
According to an embodiment of the multi-scale feature fusion-based pulse neural network feature extraction method, three filters of different sizes, namely 3 × 3, 5 × 5 and 7 × 7, are used for multi-scale feature fusion to extract the features.
According to an embodiment of the multi-scale feature fusion-based pulse neural network feature extraction method, the input grayscale image is first passed through the three differently sized filters for feature extraction, and the three filtered images thus obtained are then feature-fused.
According to an embodiment of the multi-scale feature fusion-based pulse neural network feature extraction method, the three filtered pictures are each converted into binary images, and a voting mechanism then operates on corresponding pixel positions: when more than half of the three feature maps (Xi, Yi, Zi) have a white pixel at the same pixel position, the corresponding position of the fused feature map is also a white pixel; otherwise the fused position is a black pixel. The binary feature map corresponding to the grayscale image is thus obtained and used as the input of the final spiking neural network.
According to an embodiment of the multi-scale feature fusion-based pulse neural network feature extraction method, the fusion rule is

xi = 1, if Xi + Yi + Zi ≥ 2; xi = 0, otherwise

where Xi, Yi, Zi denote the ith pixel of the feature maps obtained by the differently sized filters after conversion to binary images, and xi is the ith pixel of the input to the final spiking neural network.
According to an embodiment of the multi-scale feature fusion-based pulse neural network feature extraction method, the spiking neurons use the widely adopted LIF (leaky integrate-and-fire) model, and the network is trained with the PSD supervised learning algorithm, thereby obtaining the spike trains corresponding to different numbers of output neurons.
According to an embodiment of the multi-scale feature fusion-based pulse neural network feature extraction method, the Euclidean distance is used as the similarity measure on the output spike time series to determine the retrieved similar images.
According to an embodiment of the multi-scale feature fusion-based pulse neural network feature extraction method, the input image pixels are converted, according to the formula

ui(t) = A·cos(ωt + φi),

into the oscillating cosine function of the membrane potential, and the spike firing time ti corresponding to the current pixel is then obtained from the relation satisfied between firing time and phase, where ui(t) represents the oscillation function of the ith encoding neuron, A the amplitude, ω the angular velocity of the oscillation, and φi the phase offset of the ith neuron of the image in the membrane-potential oscillation model.
According to an embodiment of the multi-scale feature fusion-based pulse neural network feature extraction method, for a white pixel the phase ωti + φi equals 0, and otherwise it equals π.
The invention provides a multi-scale feature fusion method that fully extracts the features of the input image information, thereby addressing the loss of accuracy caused by insufficient feature extraction and excessive redundant information when image retrieval is performed with a spiking neural network; it is robust under heavy noise, improves the effectiveness of spiking-network information coding, and improves the recognition accuracy of the spiking neural network.
Drawings
FIG. 1 is an exemplary multi-scale feature fusion diagram;
FIG. 2 is a graph showing the effect of adding multi-scale feature fusion in the feature extraction stage on the mean average precision of the algorithm.
Detailed Description
In order to make the objects, contents, and advantages of the present invention clearer, the following detailed description of the embodiments of the present invention will be made in conjunction with the accompanying drawings and examples.
The invention builds on the biological visual system: humans can see things because the numerous photoreceptor organelles on the retina play a key role. The photoreceptors convert the incoming light signals into neural electrical spike signals, stimulating the vision-related nerve cells of the cerebral cortex, whereupon the human brain forms an image of the thing seen.
The pulse neural network feature extraction method based on multi-scale feature fusion of the invention comprises the following steps: the input binary image is first preprocessed with a multi-scale feature fusion method, and the image pixel information is then encoded, with a biologically grounded phase-coding method, as timing information along the time dimension and input into the spiking neural network. The coding method follows from the relation between action potentials and subthreshold membrane potential in the human brain, and is implemented as follows: each input pixel value of the image is converted into an oscillating cosine signal by an encoding neuron, a conversion that can be described by the following formula:
ui(t) = A·cos(ωt + φi), with ω = 2π/Tmax

where ui(t) represents the oscillation function of the ith encoding neuron, Tmax represents the oscillation period (set to Tmax = 200 ms), A stands for the amplitude, ω for the oscillation angular velocity, and φi represents the phase offset of the ith neuron of the image in the membrane-potential oscillation model, calculated as follows:
φi = φ0 + (i-1)·Δφ
where φ0 represents the initial offset and

Δφ = 2π/n
is the minimum offset, n being the number of pixels in the picture, i.e., the total number of encoding neurons. Depending on the binary value of each input pixel, the membrane oscillation is shifted up or down, and a spike is generated when the membrane potential exceeds the threshold. By fine-tuning the amplitude A, the offset and the threshold, the spike is placed at the oscillation peak. The encoding unit thus outputs a spike at one phase for a white pixel and at a phase shifted by 180 degrees for a black pixel, whereby the image pixels are finally encoded as temporal information.
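As an illustration of the encoding just described, the following minimal Python sketch maps a binary image to one spike time per pixel within a single oscillation window. It is a sketch under stated assumptions, not the patent's implementation: the zero initial offset φ0, the uniform minimum offset Δφ = 2π/n, and the convention that the spike falls exactly at the oscillation peak are illustrative choices.

```python
import numpy as np

T_MAX = 200.0  # oscillation period in ms, as set in the text

def phase_encode(binary_pixels, phi0=0.0):
    """Map each binary pixel to one spike time in [0, T_MAX).

    Each encoding neuron i oscillates as u_i(t) = A*cos(w*t + phi_i)
    with w = 2*pi/T_MAX. A white pixel (1) fires at the peak phase 0;
    a black pixel (0) fires at the phase shifted by 180 degrees (pi).
    """
    pixels = np.asarray(binary_pixels).ravel()
    n = pixels.size
    w = 2.0 * np.pi / T_MAX                    # angular velocity omega
    dphi = 2.0 * np.pi / n                     # minimum offset (assumed uniform)
    phi = phi0 + np.arange(n) * dphi           # phi_i = phi_0 + (i-1)*dphi, 0-based
    target = np.where(pixels > 0, 0.0, np.pi)  # firing phase per pixel value
    # solve w*t + phi_i = target (mod 2*pi) for the first t >= 0
    return ((target - phi) % (2.0 * np.pi)) / w

# e.g. a 2x2 binary patch yields 4 spike times, one per encoding neuron
print(phase_encode([[1, 0], [1, 1]]))
```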
The encoded input spike trains are then learned with the PSD supervised learning algorithm: during learning, the input spike train is processed with a convolution function to obtain a continuous function, and the synaptic weights are adjusted according to the error between the target output spike train and the actual output spike train, a depressing effect being produced when the error is below 0 and a potentiating effect when it is above 0. Finally, a similarity measure over the spike time sequences learned by the network determines the finally retrieved similar images. The most common measure is chosen, namely the Euclidean distance computed directly between each pair of spike times xi, yi in the two firing sequences X and Y, as follows:

D(X, Y) = √( Σi (xi - yi)² )
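A minimal sketch of this similarity step, assuming equal-length spike-time vectors (one spike per output neuron); the retrieval helper and its parameters are illustrative, not part of the patent:

```python
import numpy as np

def spike_train_distance(x, y):
    """Euclidean distance between two equal-length spike-time sequences X and Y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.sqrt(np.sum((x - y) ** 2)))

def retrieve_similar(query_train, database_trains, k=5):
    """Rank stored output spike trains by distance to the query's train."""
    d = [spike_train_distance(query_train, t) for t in database_trains]
    return np.argsort(d)[:k]  # indices of the k most similar images

# e.g. two firing sequences with three spike times each (in ms)
print(spike_train_distance([10.0, 55.0, 120.0], [12.0, 50.0, 118.0]))
```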
when the multi-scale feature fusion method operation is carried out on the binary image, the most common filtering is considered, and Gaussian filtering operation with different sizes is firstly carried out on the image. When the Gaussian filters with different sizes are used for performing sliding filtering on the image, when the filtering result obtained by a certain region in the image is higher, the region is indicated to have higher correlation with the feature detected by the filter, otherwise, the region is not similar, and the process is equivalent to a simple cell in a visual cell. And then, carrying out fusion operation on the image features obtained by the filters with different sizes. In view of the uncertainty of the size and number of the filters, it is found that the size of the filters used in most neural networks is usually 1 × 1, 3 × 3, 5 × 5, 7 × 7, so that the present invention preferably uses three filters of different sizes, 3 × 3, 5 × 5, and 7 × 7, to perform multi-scale feature fusion to extract features, and the process is shown in fig. 1.
The specific operation flow is as follows: the input grayscale image is first passed through the three differently sized filters to extract its features, and the three filtered images are then feature-fused. Concretely, the three filtered images are each converted into binary images, and a voting mechanism then operates on corresponding pixel positions: when more than half of the three feature maps (Xi, Yi, Zi) have a white pixel at the same pixel position, the corresponding position of the fused feature map is also a white pixel (1); otherwise the fused position is a black pixel. The binary feature map corresponding to the grayscale image is finally obtained as the input of the final spiking neural network. This can be expressed by the following formula:

xi = 1, if Xi + Yi + Zi ≥ 2; xi = 0, otherwise

where Xi, Yi, Zi denote the ith pixel of the feature maps obtained by the differently sized filters after conversion to binary images, and xi is the ith pixel of the input to the final spiking neural network.
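A minimal sketch of this fusion pipeline, assuming a grayscale input normalized to [0, 1]; the binarization threshold and the Gaussian sigma heuristic are illustrative assumptions, since the patent specifies only the kernel sizes and the majority vote:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size, sigma=None):
    """Odd-sized, normalized 2-D Gaussian kernel."""
    if sigma is None:
        sigma = 0.3 * ((size - 1) * 0.5 - 1) + 0.8  # size-based heuristic (assumption)
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def multiscale_fuse(gray, sizes=(3, 5, 7), thresh=0.5):
    """Filter at each scale, binarize each response, then fuse by majority vote."""
    binaries = []
    for s in sizes:
        resp = convolve2d(gray, gaussian_kernel(s), mode="same", boundary="symm")
        binaries.append((resp >= thresh).astype(np.uint8))  # X_i, Y_i, Z_i
    votes = np.sum(binaries, axis=0)        # number of white votes per pixel (0..3)
    return (votes >= 2).astype(np.uint8)    # white where at least 2 of 3 maps agree

# e.g. fuse a 28x28 grayscale patch into the binary feature map fed to the SNN
fused = multiscale_fuse(np.random.rand(28, 28))
```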
The key points of the invention are as follows: for the problem of image retrieval with a spiking neural network, on the feature extraction side, a method of extracting features of the information according to the differently sized receptive fields found in biology is proposed, namely a multi-scale feature fusion method for binary images, so that the key features of the image are extracted more effectively and the efficiency of neural information coding is improved.
The invention was verified experimentally on the MNIST dataset; the accuracy is shown in FIG. 2. The multi-scale feature fusion method introduced in the feature extraction stage is found to chiefly affect the image retrieval accuracy of the algorithm. With an output hash length of 5, the algorithm with the optimized feature extraction reaches a precision of 73.77%, an improvement of about 6% over the 67.81% of the existing algorithm; with an output hash length of 35, it reaches 92.75%, an improvement of 2.52% over the existing 90.23%, effectively strengthening the expressiveness of the features and improving the retrieval precision.
To solve the problem of image retrieval with a spiking neural network, the invention proposes a multi-scale feature fusion method aimed at optimizing the feature extraction part of the network's information coding process. It takes full account of the fact that different nerve cells have receptive fields of different sizes, that receptive fields of different sizes cover image regions of different extents, and that the image features they detect therefore differ. Because the size of any specific receptive field in a neural network is limited, a single feature cannot effectively characterize the image; a method of multi-scale fusion of the different features extracted by differently sized receptive fields is therefore proposed, so that the differently sized filters complement one another in detecting the image and the image features are extracted effectively.
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as falling within the protection scope of the present invention.

Claims (9)

1. A pulse neural network feature extraction method based on multi-scale feature fusion, characterized by comprising the following steps:
preprocessing an input binary image with a multi-scale feature fusion method; converting the pixels of the input image into an oscillating cosine function of the membrane potential by a biologically grounded spike-coding method, phase coding; obtaining the spike firing time corresponding to the current pixel from the relation satisfied between firing time and phase; and finally encoding the image pixel information as timing information along the time dimension and inputting it into the spiking neural network;
wherein preprocessing the input binary image with the multi-scale feature fusion method comprises: applying a Gaussian filtering operation to the image; when Gaussian filters of different sizes slide over the image, a high filter response in a region indicates a high correlation between that region and the feature detected by the filter, while a low response indicates dissimilarity; and then fusing the image features obtained by the differently sized filters.
2. The pulse neural network feature extraction method based on multi-scale feature fusion as claimed in claim 1, wherein three filters of different sizes, namely 3 × 3, 5 × 5 and 7 × 7, are used for multi-scale feature fusion to extract the features.
3. The pulse neural network feature extraction method based on multi-scale feature fusion as claimed in claim 1 or 2, wherein the input grayscale image is first passed through the three differently sized filters to extract its features, and the three filtered images thus obtained are then feature-fused.
4. The pulse neural network feature extraction method based on multi-scale feature fusion as claimed in claim 3, wherein the three filtered pictures are each converted into binary images, and a voting mechanism then operates on corresponding pixel positions of the binary images: when more than half of the three feature maps (Xi, Yi, Zi) have a white pixel at the same pixel position, the corresponding position of the fused feature map is also a white pixel; otherwise the fused position is a black pixel; the binary feature map corresponding to the grayscale image is obtained and used as the input of the final spiking neural network.
5. The pulse neural network feature extraction method based on multi-scale feature fusion as claimed in claim 4, wherein the fusion rule is

xi = 1, if Xi + Yi + Zi ≥ 2; xi = 0, otherwise

where Xi, Yi, Zi denote the ith pixel of the feature maps obtained by the differently sized filters after conversion to binary images, and xi is the ith pixel of the input to the final spiking neural network.
6. The pulse neural network feature extraction method based on multi-scale feature fusion as claimed in claim 1, wherein the spiking neurons use the widely adopted LIF (leaky integrate-and-fire) model, and the network is trained with the PSD supervised learning algorithm, thereby obtaining the spike trains corresponding to different numbers of output neurons.
7. The pulse neural network feature extraction method based on multi-scale feature fusion as claimed in claim 1, wherein similarity between the output spike time series is measured with the Euclidean distance to determine the retrieved similar images.
8. The pulse neural network feature extraction method based on multi-scale feature fusion as claimed in claim 1, wherein the input image pixels are converted, according to the formula

ui(t) = A·cos(ωt + φi),

into the oscillating cosine function of the membrane potential, and the spike firing time ti corresponding to the current pixel is then obtained from the relation satisfied between firing time and phase, where ui(t) represents the oscillation function of the ith encoding neuron, A the amplitude, ω the angular velocity of the oscillation, and φi the phase offset of the ith neuron of the image in the membrane-potential oscillation model.
9. The pulse neural network feature extraction method based on multi-scale feature fusion as claimed in claim 8, wherein for a white pixel the phase ωti + φi equals 0, and otherwise it equals π.
Application CN202011573605.1A (priority date 2020-12-25, filing date 2020-12-25), published as CN112580737A (en), status Pending: Pulse neural network feature extraction method based on multi-scale feature fusion

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011573605.1A | 2020-12-25 | 2020-12-25 | Pulse neural network feature extraction method based on multi-scale feature fusion

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202011573605.1A | 2020-12-25 | 2020-12-25 | Pulse neural network feature extraction method based on multi-scale feature fusion

Publications (1)

Publication Number | Publication Date
CN112580737A (en) | 2021-03-30

Family

ID=75139981

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN202011573605.1A | Pending | CN112580737A (en) | 2020-12-25 | 2020-12-25 | Pulse neural network feature extraction method based on multi-scale feature fusion

Country Status (1)

Country | Link
CN | CN112580737A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106845541A * | 2017-01-17 | 2017-06-13 | Hangzhou Dianzi University | Image recognition method based on biological vision and a precise-spike-driven neural network
CN110097090A * | 2019-04-10 | 2019-08-06 | Southeast University | Fine-grained image recognition method based on multi-scale feature fusion
CN110298266A * | 2019-06-10 | 2019-10-01 | Tianjin University | Deep neural network object detection method based on multi-scale receptive-field feature fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU XUESONG: "Multi-scale fusion stereo matching algorithm", Pattern Recognition and Artificial Intelligence, vol. 33, no. 2 *


Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination