CN111401226B - Rapid identification method for radiation source - Google Patents

Rapid identification method for radiation source

Info

Publication number
CN111401226B
CN111401226B
Authority
CN
China
Prior art keywords
radiation source
learning rate
neural network
network model
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010174283.7A
Other languages
Chinese (zh)
Other versions
CN111401226A (en)
Inventor
苟嫣
邵怀宗
王沙飞
林静然
利强
潘晔
胡全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Peng Cheng Laboratory
Original Assignee
University of Electronic Science and Technology of China
Peng Cheng Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China and Peng Cheng Laboratory
Priority to CN202010174283.7A
Publication of CN111401226A
Application granted
Publication of CN111401226B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for rapid identification of a radiation source, which comprises the following steps: S1, reading a labelled radiation source signal; S2, applying a short-time Fourier transform to the labelled radiation source signal to convert the one-dimensional signal into a two-channel two-dimensional time-frequency image; S3, constructing a deep convolutional neural network model; S4, inputting the two-channel two-dimensional time-frequency images into the deep convolutional neural network model and training with an adaptive learning rate algorithm to obtain a trained model; and S5, identifying the target to be identified with the trained model to complete the rapid identification of the radiation source. By incorporating the loss function, the method makes the learning rate change adaptively, which greatly improves the convergence rate and recognition accuracy of the neural network model compared with existing learning rate strategies, optimizes radiation source identification performance, and requires no manual parameter tuning.

Description

Rapid identification method for radiation source
Technical Field
The invention relates to the field of radiation source identification, in particular to a rapid identification method of a radiation source.
Background
When neural networks are applied to radiation source identification, the learning rate plays an extremely important role in training the neural network model and significantly affects its convergence performance.
The existing learning rate strategies fall mainly into two categories: decaying learning rates and adaptive learning rates. Decaying learning rates include piecewise constant decay, exponential decay, cosine decay, and the like. Piecewise constant decay divides the training schedule into intervals and sets a different learning rate constant on each interval according to the number of training iterations; this strategy requires the practitioner to set different learning rates for different tasks and to understand the model and data set in depth, a well-matched learning rate is hard to find in a short time by manual tuning, and the tuning burden is therefore high. Exponential decay, cosine decay, and similar methods gradually reduce the learning rate as the number of training iterations increases, so that the iteration count becomes the main basis for adjusting the learning rate; as a result, these methods often suffer from slow convergence, long training times, and overshooting of the optimal value.
The adaptive learning rates mainly include the AdaGrad, RMSProp, and Adam algorithms. Their basic idea is to assign a separate learning rate to each parameter of the neural network and to let these rates adapt automatically to the model parameters through dynamic changes. The AdaGrad algorithm adjusts the learning rate by accumulating the squared gradients from the start of training, but for deep neural network models the effective learning rate shrinks too early and too much, causing training to stall; AdaGrad therefore works well only on some neural network models and lacks robustness. The Adam algorithm combines the advantages of AdaGrad and RMSProp and uses a default initial learning rate of 0.001, but it needs to be combined with a decaying learning rate to train the model well; with only the default learning rate, the model does not converge to the optimal solution, so the learning rate still has to be adjusted to the actual model when Adam is used.
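For concreteness, a minimal NumPy sketch of the AdaGrad update mentioned above (the function name, constants, and toy values are illustrative, not taken from the patent) shows why the effective step size can shrink too early: the accumulated squared gradient in the denominator only ever grows.

```python
import numpy as np

def adagrad_step(param, grad, accum, base_lr=0.01, eps=1e-8):
    """One AdaGrad update. The accumulated squared gradient only grows,
    so the effective step base_lr / sqrt(accum) decays monotonically."""
    accum = accum + grad ** 2
    param = param - base_lr * grad / (np.sqrt(accum) + eps)
    return param, accum

# Toy run: the same gradient magnitude produces ever-smaller steps.
param, accum = np.array([1.0]), np.zeros(1)
for step in range(5):
    param, accum = adagrad_step(param, np.array([0.5]), accum)
    print(step, float(param[0]), float(accum[0]))
```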
In summary, the learning rate algorithms currently used in the field of radiation source identification mainly suffer from the need for frequent manual parameter tuning, which is costly in time and labor; some adaptive learning rates avoid manual tuning, but the neural network model then converges slowly, rarely reaches the optimal solution, and has poor robustness.
Disclosure of Invention
Aiming at the above defects in the prior art, the rapid radiation source identification method provided by the invention solves the problem that existing methods either require manual parameter tuning or have poor robustness.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that:
a method for quickly identifying a radiation source is provided, which comprises the following steps:
s1, reading a radiation source signal with a label;
s2, carrying out short-time Fourier transform on the radiation source signal with the label, and converting the one-dimensional signal into a two-dimensional time-frequency image of two channels;
s3, constructing a deep convolutional neural network model;
s4, inputting two-channel two-dimensional time-frequency images into a deep convolution neural network model, and training by adopting a self-adaptive learning rate algorithm to obtain a trained model; the self-adaptive learning rate algorithm comprises the following steps:
lr(t) = p^t·β0·sigmoid(L(t)L(t−1) − d);
where t is the training iteration index of the deep convolutional neural network; lr(t) is the learning rate at the t-th iteration; p is the attenuation factor; β0 is the initial amplitude; L(·) is the loss function; sigmoid(·) is the sigmoid function; and d is the number of units by which the sigmoid function is shifted to the right;
and S5, identifying the target to be identified by adopting the trained model to finish the rapid identification of the radiation source.
Further, the specific method of step S1 includes:
and detecting the frame head and the frame tail of the radiation source signal by an energy detection method, and further reading the radiation source signal.
Further, the specific method of step S2 includes:
and setting the number of points of the short-time Fourier transform to 512, and carrying out short-time Fourier transform on the radiation source signal with the label to obtain two-dimensional time-frequency images of two channels.
Further, the specific method of step S3 includes:
and setting the sizes of convolution kernels of the deep convolution neural network model to be 3*3, and constructing 12 layers of convolution and pooling layers.
Further, the specific method for inputting the two-dimensional time-frequency image into the deep convolution neural network model in step S4 includes:
and inputting the two-channel two-dimensional time-frequency images into a deep convolution neural network model after batch normalization operation.
Further, in step S4 the attenuation factor p takes the value 0.999, the initial amplitude β0 takes the value 1, and the loss function is
L = −Σ_x p(x)·log q(x)
where p(x) is the probability distribution of the reference (label) result, q(x) is the probability distribution predicted by the current network model, and x is the corresponding input to the current network model; the value of the parameter d is 8.
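A short Python sketch of the loss and of the adaptive schedule of step S4, under the assumption that L(t)L(t−1) denotes the product of the current and previous loss values, with the constants p = 0.999, β0 = 1, and d = 8 given above; the function names are illustrative.

```python
import math
import numpy as np

def cross_entropy(p_true, q_pred, eps=1e-12):
    """L = -sum_x p(x) * log(q(x)), averaged over the batch."""
    return float(-np.sum(p_true * np.log(q_pred + eps)) / p_true.shape[0])

def adaptive_lr(t, loss_t, loss_prev, p=0.999, beta0=1.0, d=8.0):
    """lr(t) = p^t * beta0 * sigmoid(L(t)L(t-1) - d).
    Large loss values early in training keep the sigmoid near 1 (large steps);
    as the loss shrinks, the argument falls below d and the rate decays."""
    return (p ** t) * beta0 / (1.0 + math.exp(-(loss_t * loss_prev - d)))

# Early training: loss is large, so the learning rate stays close to beta0.
print(adaptive_lr(t=1, loss_t=5.0, loss_prev=5.2))
# Late training: loss is small, so the rate is pushed towards zero.
print(adaptive_lr(t=500, loss_t=0.3, loss_prev=0.35))
```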
Further, the training with the adaptive learning rate algorithm in step S4 to obtain the trained model comprises the following substeps:
s4-1, in the convolutional layer, according to the formula:
Δk_ij^l = −lr(t)·Σ_{u,v} (δ_j^l ∘ P_i^{l−1})_{uv}
Δb_a = −lr(t)·Σ_{u,v} (δ_j^l)_{uv}
respectively acquiring the variation Δk of the convolution kernel parameters and the variation Δb_a of the additive bias, and then updating the convolution kernel parameters and the additive bias; where i denotes an input neuron, j denotes an output neuron, l is the network layer index, u and v are the position coordinates of the convolution or pooling operation in the input feature map, δ is the partial derivative of the loss function with respect to the net input, lr(t) is the current learning rate, P denotes the local region of the input feature map involved in each convolution operation, and (δ_j^l ∘ P_i^{l−1}) is the dot product of the partial derivative of the loss function with respect to the net input of the j-th output neuron of layer l and the feature map input by the i-th input neuron of layer l−1;
s4-2, in the pooling layer, according to a formula:
Δβ = −lr(t)·Σ_{u,v} (δ_j^l ∘ down(X_j^{l−1}))_{uv}
Δb_b = −lr(t)·Σ_{u,v} (δ_j^l)_{uv}
respectively obtaining the variation Δβ of the multiplicative bias and the variation Δb_b of the additive bias of the pooling layer, and then updating the multiplicative and additive biases of the pooling layer; where X is a feature map, down(·) is the down-sampling function, and the symbol ∘ denotes the dot product;
s4-3, in the full connection layer, according to a formula:
Δw = −lr(t)·δ^l·x^{l−1}
Δb_c = −lr(t)·δ^l
respectively acquiring the variation Δw of the weights and the variation Δb_c of the bias of the fully connected layer, and then updating the weights and bias of the fully connected layer;
S4-4, judging whether the convolution kernel parameters and additive biases of the convolutional layers, the multiplicative and additive biases of the pooling layers, and the weights and biases of the fully connected layers of the deep convolutional neural network model have reached the convergence target; if so, the training is finished and the trained model is obtained; otherwise, the learning rate is updated by the adaptive learning rate algorithm and the process returns to step S4-1.
The invention has the following beneficial effects: the method solves the problems of slow convergence, long training times, and overshooting of the optimal value that affect existing learning rate algorithms in the field of radiation source identification. The invention adjusts the learning rate according to the change of the loss function value, which adapts the learning rate effectively, improves the convergence rate of the neural network model, reduces training time, improves recognition accuracy, and optimizes radiation source identification performance, all without manual parameter tuning.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The following description of the embodiments of the invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments. For those of ordinary skill in the art, various changes are possible without departing from the spirit and scope of the invention as defined by the appended claims, and all inventions and creations that make use of the inventive concept fall within the scope of protection.
As shown in FIG. 1, the rapid radiation source identification method comprises the following steps:
s1, reading a radiation source signal with a label;
s2, carrying out short-time Fourier transform on the radiation source signal with the label, and converting the one-dimensional signal into a two-dimensional time-frequency image of two channels;
s3, constructing a deep convolution neural network model;
s4, inputting two-channel two-dimensional time-frequency images into a deep convolution neural network model, and training by adopting a self-adaptive learning rate algorithm to obtain a trained model; the self-adaptive learning rate algorithm comprises the following steps:
lr(t) = p^t·β0·sigmoid(L(t)L(t−1) − d);
where t is the training iteration index of the deep convolutional neural network; lr(t) is the learning rate at the t-th iteration; p is the attenuation factor; β0 is the initial amplitude; L(·) is the loss function; sigmoid(·) is the sigmoid function; and d is the number of units by which the sigmoid function is shifted to the right;
and S5, identifying the target to be identified by adopting the trained model to finish the rapid identification of the radiation source.
The specific method of step S1 is as follows: the frame head and frame tail of the radiation source signal are detected by an energy detection method, and the radiation source signal is then read.
The specific method of step S2 is as follows: the number of points of the short-time Fourier transform is set to 512, and the short-time Fourier transform is applied to the labelled radiation source signal to obtain the two-channel two-dimensional time-frequency images.
The specific method of step S3 is as follows: the convolution kernel size of the deep convolutional neural network model is set to 3×3, and 12 convolutional and pooling layers are constructed.
The specific method for inputting the two-dimensional time-frequency images into the deep convolutional neural network model in step S4 is as follows: the two-channel two-dimensional time-frequency images are input into the deep convolutional neural network model after a batch normalization operation.
In step S4 the attenuation factor p takes the value 0.999, the initial amplitude β0 takes the value 1, and the loss function is
L = −Σ_x p(x)·log q(x)
where p(x) is the probability distribution of the reference (label) result, q(x) is the probability distribution predicted by the current network model, and x is the corresponding input to the current network model; the value of the parameter d is 8. In step S4, training with the adaptive learning rate algorithm to obtain the trained model comprises the following substeps:
s4-1, in the convolutional layer, according to the formula:
Δk_ij^l = −lr(t)·Σ_{u,v} (δ_j^l ∘ P_i^{l−1})_{uv}
Δb_a = −lr(t)·Σ_{u,v} (δ_j^l)_{uv}
respectively acquiring the variation Δk of the convolution kernel parameters and the variation Δb_a of the additive bias, and then updating the convolution kernel parameters and the additive bias; where i denotes an input neuron, j denotes an output neuron, l is the network layer index, u and v are the position coordinates of the convolution or pooling operation in the input feature map, δ is the partial derivative of the loss function with respect to the net input, lr(t) is the current learning rate, P denotes the local region of the input feature map involved in each convolution operation, and (δ_j^l ∘ P_i^{l−1}) is the dot product of the partial derivative of the loss function with respect to the net input of the j-th output neuron of layer l and the feature map input by the i-th input neuron of layer l−1;
s4-2, in the pooling layer, according to a formula:
Δβ = −lr(t)·Σ_{u,v} (δ_j^l ∘ down(X_j^{l−1}))_{uv}
Δb_b = −lr(t)·Σ_{u,v} (δ_j^l)_{uv}
respectively obtaining the variation Δβ of the multiplicative bias and the variation Δb_b of the additive bias of the pooling layer, and then updating the multiplicative and additive biases of the pooling layer; where X is a feature map, down(·) is the down-sampling function, and the symbol ∘ denotes the dot product;
s4-3, in the full connection layer, according to a formula:
Δw = −lr(t)·δ^l·x^{l−1}
Δb_c = −lr(t)·δ^l
respectively acquiring the variation Δw of the weights and the variation Δb_c of the bias of the fully connected layer, and then updating the weights and bias of the fully connected layer;
S4-4, judging whether the convolution kernel parameters and additive biases of the convolutional layers, the multiplicative and additive biases of the pooling layers, and the weights and biases of the fully connected layers of the deep convolutional neural network model have reached the convergence target; if so, the training is finished and the trained model is obtained; otherwise, the learning rate is updated by the adaptive learning rate algorithm and the process returns to step S4-1.
In summary, the invention introduces the loss function on top of an exponential decay and adjusts the learning rate according to the loss function value. In the initial stage of training, parameters such as the network weights are far from their optimal values and the loss function value is large, so the learning rate is high and the algorithm converges quickly; when the loss function value falls close to the optimum, the learning rate gradually decreases, so the model can converge to the optimal solution. By incorporating the loss function, the learning rate changes adaptively, which greatly improves the convergence rate and recognition accuracy of the neural network model compared with existing learning rate strategies, optimizes radiation source identification performance, and requires no manual parameter tuning.
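A simplified end-to-end training loop combining the adaptive schedule with ordinary gradient descent; it treats all parameters uniformly through a PyTorch SGD optimizer instead of writing out the per-layer convolution, pooling, and fully connected updates of steps S4-1 to S4-3, and it assumes data-loader and model objects like the illustrative ones sketched earlier.

```python
import math
import torch
import torch.nn as nn

def train(model, loader, epochs=50, p=0.999, beta0=1.0, d=8.0):
    """Gradient descent whose step size follows
    lr(t) = p^t * beta0 * sigmoid(L(t)L(t-1) - d)."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=beta0)
    prev_loss, t = None, 0
    for _ in range(epochs):
        for images, labels in loader:
            t += 1
            loss = criterion(model(images), labels)
            if prev_loss is not None:
                lr = (p ** t) * beta0 / (1.0 + math.exp(-(loss.item() * prev_loss - d)))
                for group in optimizer.param_groups:   # apply the adaptive rate
                    group["lr"] = lr
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            prev_loss = loss.item()
    return model
```

Writing the optimizer's learning rate on every step, rather than attaching a fixed decay scheduler, mirrors the schedule's direct dependence on the loss values themselves.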

Claims (7)

1. A method for quickly identifying a radiation source is characterized by comprising the following steps:
s1, reading a radiation source signal with a label;
s2, carrying out short-time Fourier transform on the radiation source signal with the label, and converting the one-dimensional signal into a two-dimensional time-frequency image of two channels;
s3, constructing a deep convolutional neural network model;
s4, inputting two-channel two-dimensional time-frequency images into a deep convolution neural network model, and training by adopting a self-adaptive learning rate algorithm to obtain a trained model; the self-adaptive learning rate algorithm comprises the following steps:
lr(t) = p^t·β0·sigmoid(L(t)L(t−1) − d);
where t is the training iteration index of the deep convolutional neural network; lr(t) is the learning rate at the t-th iteration; p is the attenuation factor; β0 is the initial amplitude; L(·) is the loss function; sigmoid(·) is the sigmoid function; and d is the number of units by which the sigmoid function is shifted to the right;
and S5, identifying the target to be identified by adopting the trained model to finish the rapid identification of the radiation source.
2. The method for quickly identifying a radiation source according to claim 1, wherein the specific method of the step S1 comprises:
and detecting the frame head and the frame tail of the radiation source signal by an energy detection method so as to read the radiation source signal.
3. The method for rapidly identifying a radiation source according to claim 1, wherein the specific method of the step S2 comprises:
and setting the number of points of the short-time Fourier transform to 512, and carrying out short-time Fourier transform on the radiation source signal with the label to obtain two-dimensional time-frequency images of two channels.
4. The method for rapidly identifying a radiation source according to claim 1, wherein the specific method of the step S3 comprises:
and setting the sizes of convolution kernels of the deep convolution neural network model to be 3*3, and constructing 12 layers of convolution and pooling layers.
5. The method for rapidly identifying a radiation source according to claim 1, wherein the specific method for inputting the two-dimensional time-frequency image into the deep convolutional neural network model in step S4 comprises:
and inputting the two-channel two-dimensional time-frequency images into a deep convolution neural network model after batch normalization operation.
6. The method for rapidly identifying a radiation source according to claim 1, wherein in step S4 the attenuation factor p takes the value 0.999, the initial amplitude β0 takes the value 1, and the loss function is
L = −Σ_x p(x)·log q(x)
where p(x) is the probability distribution of the reference (label) result, q(x) is the probability distribution predicted by the current network model, and x is the corresponding input to the current network model; the value of the parameter d is 8.
7. The method for rapidly identifying a radiation source according to claim 1, wherein the step S4 is performed by using an adaptive learning rate algorithm, and the specific method for obtaining the trained model comprises the following substeps:
s4-1, in the convolutional layer, according to the formula:
Δk_ij^l = −lr(t)·Σ_{u,v} (δ_j^l ∘ P_i^{l−1})_{uv}
Δb_a = −lr(t)·Σ_{u,v} (δ_j^l)_{uv}
respectively acquiring the variation Δk of the convolution kernel parameters and the variation Δb_a of the additive bias, and then updating the convolution kernel parameters and the additive bias; where i denotes an input neuron, j denotes an output neuron, l is the network layer index, u and v are the position coordinates of the convolution or pooling operation in the input feature map, δ is the partial derivative of the loss function with respect to the net input, lr(t) is the current learning rate, P denotes the local region of the input feature map involved in each convolution operation, and (δ_j^l ∘ P_i^{l−1}) is the dot product of the partial derivative of the loss function with respect to the net input of the j-th output neuron of layer l and the feature map input by the i-th input neuron of layer l−1;
s4-2, in the pooling layer, according to a formula:
Δβ = −lr(t)·Σ_{u,v} (δ_j^l ∘ down(X_j^{l−1}))_{uv}
Δb_b = −lr(t)·Σ_{u,v} (δ_j^l)_{uv}
respectively obtaining the variation Δβ of the multiplicative bias and the variation Δb_b of the additive bias of the pooling layer, and then updating the multiplicative and additive biases of the pooling layer; where X is a feature map, down(·) is the down-sampling function, and the symbol ∘ denotes the dot product;
s4-3, in the full connection layer, according to a formula:
Δw = −lr(t)·δ^l·X^{l−1}
Δb_c = −lr(t)·δ^l
respectively acquiring the variation Δw of the weights and the variation Δb_c of the bias of the fully connected layer, and then updating the weights and bias of the fully connected layer;
S4-4, judging whether the convolution kernel parameters and additive biases of the convolutional layers, the multiplicative and additive biases of the pooling layers, and the weights and biases of the fully connected layers of the deep convolutional neural network model have reached the convergence target; if so, the training is finished and the trained model is obtained; otherwise, the learning rate is updated by the adaptive learning rate algorithm and the process returns to step S4-1.
CN202010174283.7A 2020-03-13 2020-03-13 Rapid identification method for radiation source Active CN111401226B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010174283.7A CN111401226B (en) 2020-03-13 2020-03-13 Rapid identification method for radiation source

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010174283.7A CN111401226B (en) 2020-03-13 2020-03-13 Rapid identification method for radiation source

Publications (2)

Publication Number Publication Date
CN111401226A (en) 2020-07-10
CN111401226B (en) 2022-11-01

Family

ID=71430812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010174283.7A Active CN111401226B (en) 2020-03-13 2020-03-13 Rapid identification method for radiation source

Country Status (1)

Country Link
CN (1) CN111401226B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183279B (en) * 2020-09-21 2022-06-10 中国人民解放军国防科技大学 Communication radiation source individual identification method based on IQ graph characteristics
CN112381176B (en) * 2020-12-03 2022-06-10 天津大学 Image classification method based on binocular feature fusion network
CN112801003B (en) * 2021-02-05 2022-06-17 江苏方天电力技术有限公司 Unmanned aerial vehicle radiation source modulation pattern recognition method
CN114626418A (en) * 2022-03-18 2022-06-14 中国人民解放军32802部队 Radiation source identification method and device based on multi-center complex residual error network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10963783B2 (en) * 2017-02-19 2021-03-30 Intel Corporation Technologies for optimized machine learning training

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657243A (en) * 2017-10-11 2018-02-02 电子科技大学 Neutral net Radar range profile's target identification method based on genetic algorithm optimization
CN108090412A (en) * 2017-11-17 2018-05-29 西北工业大学 A kind of radar emission source category recognition methods based on deep learning
CN108171318A (en) * 2017-11-30 2018-06-15 河南大学 One kind is based on the convolutional neural networks integrated approach of simulated annealing-Gaussian function
CN109086700A (en) * 2018-07-20 2018-12-25 杭州电子科技大学 Radar range profile's target identification method based on depth convolutional neural networks
CN109190476A (en) * 2018-08-02 2019-01-11 福建工程学院 A kind of method and device of vegetables identification
CN109271926A (en) * 2018-09-14 2019-01-25 西安电子科技大学 Intelligent Radiation source discrimination based on GRU depth convolutional network
CN109740154A (en) * 2018-12-26 2019-05-10 西安电子科技大学 A kind of online comment fine granularity sentiment analysis method based on multi-task learning
US10510002B1 (en) * 2019-02-14 2019-12-17 Capital One Services, Llc Stochastic gradient boosting for deep neural networks
CN110378205A (en) * 2019-06-06 2019-10-25 西安电子科技大学 A kind of Complex Radar Radar recognition algorithm based on modified CNN network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Lauren J. Wong et al., "Specific Emitter Identification Using Convolutional Neural Network-Based IQ Imbalance Estimators," IEEE Access, vol. 7, pp. 33544-33555, 2019. *
Cui Mingguang, "Research on Wood Defect Recognition Methods Based on Convolutional Neural Networks," China Master's Theses Full-text Database, Engineering Science and Technology I, No. 09, 2019-09-15, pp. B024-345. *
Li Cuifang, "Research on Modulation Signal Recognition Based on Multilayer Perceptrons," China Master's Theses Full-text Database, Information Science and Technology, No. 11, 2006-11-15, pp. I136-72. *

Also Published As

Publication number Publication date
CN111401226A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
CN111401226B (en) Rapid identification method for radiation source
CN106485251B (en) Egg embryo classification based on deep learning
CN106228185B (en) A kind of general image classifying and identifying system neural network based and method
CN111354017A (en) Target tracking method based on twin neural network and parallel attention module
CN116052064B (en) Method and device for identifying feeding strength of fish shoal, electronic equipment and bait casting machine
CN108764470A (en) A kind of processing method of artificial neural network operation
CN112818873A (en) Lane line detection method and system and electronic equipment
CN109151332A (en) Camera coding based on fitness function exposes optimal code word sequence search method
CN113349111A (en) Dynamic feeding method, system and storage medium for aquaculture
CN110991554B (en) Improved PCA (principal component analysis) -based deep network image classification method
CN115564983A (en) Target detection method and device, electronic equipment, storage medium and application thereof
CN116206194A (en) Method, device, system and storage medium for shoal feeding
CN110084834B (en) Target tracking method based on rapid tensor singular value decomposition feature dimension reduction
CN113205102B (en) Vehicle mark identification method based on memristor neural network
CN115239760A (en) Target tracking method, system, equipment and storage medium
CN117349622A (en) Wind power plant wind speed prediction method based on hybrid deep learning mechanism
CN117079095A (en) Deep learning-based high-altitude parabolic detection method, system, medium and equipment
CN115601301B (en) Fish phenotype characteristic measurement method, system, electronic equipment and storage medium
CN114581470B (en) Image edge detection method based on plant community behaviors
CN114140495A (en) Single target tracking method based on multi-scale Transformer
Ruan et al. Aquatic image segmentation method based on hs-PCNN for automatic operation boat in crab farming
CN111210009A (en) Information entropy-based multi-model adaptive deep neural network filter grafting method, device and system and storage medium
CN116992944B (en) Image processing method and device based on leavable importance judging standard pruning
CN117934798B (en) Child behavior online identification system based on computer vision
CN117649635B (en) Method, system and storage medium for detecting shadow eliminating point of narrow water channel scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant