CN113537452A - Automatic model compression method for communication signal modulation recognition

Automatic model compression method for communication signal modulation recognition

Info

Publication number
CN113537452A
Authority
CN
China
Prior art keywords
model
network
communication signal
layer
signal modulation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110214138.1A
Other languages
Chinese (zh)
Inventor
方宇强
张喜涛
宋万均
陈维高
殷智勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peoples Liberation Army Strategic Support Force Aerospace Engineering University
Original Assignee
Peoples Liberation Army Strategic Support Force Aerospace Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peoples Liberation Army Strategic Support Force Aerospace Engineering University
Priority to CN202110214138.1A
Publication of CN113537452A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods
    • G06N3/082 - Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L27/00 - Modulated-carrier systems
    • H04L27/0012 - Modulated-carrier systems arrangements for identifying the type of modulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

An automatic model compression method for communication signal modulation recognition: data are accumulated until they satisfy the requirements of model learning, and a feature representation is mined from the sufficient data; a deep deterministic policy gradient algorithm then computes an optimal pruning ratio for each convolutional layer, which accelerates inference while minimizing the reconstruction error, facilitates automatic compression of the deep learning network model used in communication signal modulation recognition, and replaces manual parameter tuning.

Description

Automatic model compression method for communication signal modulation recognition
Technical Field
The invention relates to an automatic model compression method for communication signal modulation recognition, and belongs to the technical field of communication signal modulation recognition.
Background
Communication signal modulation recognition applies a series of signal-processing operations to a received signal to obtain its modulation mode, providing a basis for selecting a suitable signal demodulator; automatic modulation recognition therefore has clear significance and value in both the military and civil fields. When a deep learning approach is used for communication signal modulation recognition, a residual-network modulation recognition model can complete the recognition task well, but its high computational complexity and large number of parameters limit deployment on portable devices. In practical applications, compressing and pruning an existing model yields a faster and smaller model while preserving accuracy.
Disclosure of Invention
The technical problem solved by the invention is as follows: in the prior art, the traditional residual-network modulation recognition model has a large number of parameters, which easily limits its application on portable devices; the invention therefore provides an automatic model compression method for communication signal modulation recognition.
The technical scheme for solving the technical problems is as follows:
an automatic model compression method for communication signal modulation recognition comprises the following steps:
(1) arranging the received communication signal into symbol images, a specified number of symbols at a time in sequence, converting the signal into a modulation-pattern texture image for recognition;
(2) constructing a residual network structure model;
(3) setting an optimal sparsity ratio parameter, and performing channel pruning on each convolutional layer of the residual network model according to the reconstruction error;
(4) fine-tuning the pruned model obtained in step (3) to obtain the final model weights.
In step (3), the optimal sparsity ratio parameter is obtained as follows:
defining a deep deterministic policy gradient state space according to the parameters of each convolutional layer of the residual network model and the weights of the current filters, constructing a reward function from the product of the target error rate and the number of floating point operations of a single forward pass, determining the deterministic policy that maximizes the reward through the DDPG algorithm, and obtaining the optimal sparsity ratio parameter a_t with value range [0, 1].
In step (3), the channel pruning of each convolutional layer of the residual network model specifically comprises:
according to the optimal sparsity ratio parameter, clipping channels are selected for each convolutional layer in sequence; the feature layer to be pruned is designated and its number of channels reduced, with an objective function defined by the minimum reconstruction error used for the selection, keeping the change between the original output and the compressed output as small as possible.
The feature layer comprises c channels and is acted on by n filters of size c × k_h × k_w; N convolution-filtered samples are generated, and the convolutional layer produces N × n output results.
The objective function defined by the minimum reconstruction error is specifically:

min_{α,W} (1/2N) · ‖ Y − Σ_{i=1..c} α_i · X_i · W_i^T ‖_F^2,  s.t. ‖α‖_0 ≤ c′

where α is the channel weighting coefficient vector, W is the full filter weight matrix, N is the total number of samples, Y is the output response matrix, X is the input matrix, ‖·‖_F is the Frobenius norm, c is the number of channels, and c′ is the expected number of channels.
Compared with the prior art, the invention has the advantages that:
the automatic model compression method for communication signal modulation recognition provided by the invention uses a deep deterministic policy gradient algorithm to compute an optimal pruning ratio for each convolutional layer, accelerating inference while minimizing the reconstruction error. It facilitates automatic compression of the deep learning network model in the communication signal modulation recognition process and replaces manual parameter tuning. The method can be used for electromagnetic characteristic analysis with deep learning modulation recognition models on portable devices; model pruning preserves the same or similar expressive capability while reducing model size and computation as much as possible, meeting lightweight deployment requirements.
Drawings
FIG. 1 is a schematic diagram of the automatic network pruning process based on DDPG provided by the invention;
FIG. 2 is a schematic diagram of simulation results of different modulation pattern signals of the AWGN channel provided by the present invention;
figure 3 is a schematic diagram of symbol images of different modulation pattern signal streams of the AWGN channel provided by the present invention;
FIG. 4 is a schematic diagram of a residual neural network modulation identification network provided by the present invention;
FIG. 5 is a schematic diagram of the ResNet-20 residual network structure provided by the invention;
FIG. 6 is a schematic diagram of the residual network modulation recognition results provided by the invention;
FIG. 7 is a schematic diagram of a structured pruning scheme provided by the present invention;
FIG. 8 is a schematic diagram of the process of DDPG adjusting the validation-set recognition accuracy, provided by the invention;
FIG. 9 is a schematic diagram of the learning process of DDPG adjusting the FLOPs, provided by the invention.
Detailed Description
An automatic model compression method for communication signal modulation recognition facilitates automatic compression of a deep learning network model in the communication signal modulation recognition process and replaces manual parameter tuning. It can be used for electromagnetic characteristic analysis with deep learning models on portable devices, and it ensures that the model retains the same or similar expressive capability while reducing its size and computation as much as possible, meeting lightweight deployment requirements. The specific flow of the compression method is as follows:
(1) arranging the received communication signal into symbol images, a specified number of symbols at a time in sequence, converting the signal into a modulation-pattern texture image for recognition;
(2) constructing a residual network structure model;
(3) setting an optimal sparsity ratio parameter, and performing channel pruning on each convolutional layer of the residual network model according to the reconstruction error;
wherein a deep deterministic policy gradient state space is defined according to the parameters of each convolutional layer of the residual network model and the weights of the current filters; a reward function is constructed from the product of the target error rate and the number of floating point operations of a single forward pass; the deterministic policy that maximizes the reward is determined by the DDPG algorithm, and the optimal sparsity ratio parameter a_t is obtained, with value range [0, 1];
The channel pruning of each convolutional layer of the residual network model specifically comprises the following steps:
according to the optimal sparsity ratio parameter, clipping channels are selected for each convolutional layer in sequence; the feature layer to be pruned is designated and its number of channels reduced, with an objective function defined by the minimum reconstruction error used for the selection, keeping the change between the original output and the compressed output as small as possible;
the feature layer comprises c channels and is acted on by n filters of size c × k_h × k_w; N convolution-filtered samples are generated, and the convolutional layer produces N × n output results;
(4) fine-tuning the pruned model obtained in step (3) to obtain the final model weights.
The following is further illustrated according to specific examples:
in this embodiment, data are accumulated to satisfy the requirements of model learning, and the feature representation is mined from sufficient data. First, a corresponding test database is collected and organized by simulation. Single-signal modulation recognition under the additive white Gaussian noise (AWGN) channel is the basis of modulation recognition research; therefore, data sets of different modulation modes under non-ideal signal-to-noise-ratio conditions are established for the application scenario of single-signal modulation recognition under the AWGN channel.
The mathematical model of the received signal under the AWGN channel is: y(t) = s(t) + n(t),
where n(t) is white Gaussian noise with zero mean and variance σ_n^2, independent of the transmitted signal, and s(t) is the transmitted modulated signal, using a sine wave as the carrier. The baseband waveforms considered are amplitude shift keying (ASK), frequency shift keying (FSK) and phase shift keying (PSK). To cover the common modulation modes, the typical digital communication signals {2ASK, 4ASK, 8ASK, BPSK, QPSK, 8PSK, 2FSK, 4FSK, 8FSK} are taken as the objects of study. The simulation parameters of the generated signals are specified according to the ETSI (European Telecommunications Standards Institute) GSM/EDGE base-station technical standards: the radio-frequency carrier is 900 MHz (Europe) or 2000 MHz (America), the intermediate frequency is 150-190 MHz, and the sampling frequency is 50-70 MHz. Without loss of generality, the simulation parameters are set as shown in Table 1; the FSK frequency set is {f_c ± k·f_i, k = 1, 3, 5, 7}.
Table 1 simulation parameter set-up situation
[Table 1 is reproduced only as an image in the original publication; its contents are not recoverable from the text.]
The resulting signal is shown in fig. 2.
Training and testing data sets of different modulation patterns are constructed through simulation signals, and training data support is provided for end-to-end neural network model training.
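By way of illustration, the following NumPy sketch generates one such AWGN training sample in the spirit of the description above. Only BPSK is shown, and the numeric parameters are placeholders rather than the Table 1 settings (which survive only as an image):

```python
import numpy as np

def modulate_bpsk(bits, fc, fs, sps):
    """BPSK: map bits to +/-1 rectangular pulses on a sine carrier."""
    baseband = np.repeat(2 * bits - 1, sps)        # 0/1 -> -1/+1, pulse shaping
    t = np.arange(baseband.size) / fs
    return baseband * np.sin(2 * np.pi * fc * t)

def awgn(signal, snr_db):
    """Add white Gaussian noise n(t) at the requested SNR, per y(t) = s(t) + n(t)."""
    p_noise = np.mean(signal ** 2) / (10 ** (snr_db / 10))
    return signal + np.sqrt(p_noise) * np.random.randn(signal.size)

# Illustrative parameters (assumed, not the Table 1 values).
fs, fc, sps = 60e6, 15e6, 8                        # sample rate, carrier, samples/symbol
bits = np.random.randint(0, 2, 1024)
y = awgn(modulate_bpsk(bits, fc, fs, sps), snr_db=10)
```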
The automatic model compression method comprises the following specific steps:
step one, generating the modulation-pattern texture map. Modulation mode recognition is generally performed after a fixed number of symbols has been read. When this number is large, the excessive length of the sequence signal increases the difficulty of neural network training and greatly raises its time and space complexity. Likewise, with a 1D convolutional neural network, establishing a longer context requires larger 1D convolution kernels or a deeper network, which works against a lightweight model. The received signal is therefore arranged into a symbol image, a fixed number of symbol sequences at a time, as shown in fig. 3. Spatial convolution over the image effectively enlarges the neighborhood the network relates and improves feature representation; 2D spatial convolution captures variation in both the horizontal and vertical directions and naturally describes longer-range modulation characteristics than 1D convolution, so high-performance recognition can be completed with fewer layers. Based on this conversion, the sequence signal is transformed into a "texture" image related to the modulation mode for recognition.
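As one plausible reading of step one (the exact image layout is not specified in the text), a sketch that stacks consecutive symbols as rows of a 2D array, continuing the variables from the previous sketch:

```python
def symbol_image(signal, sps, n_symbols):
    """Stack n_symbols consecutive symbols (sps samples each) as rows of a
    2-D 'texture' image, so 2-D convolutions can see cross-symbol structure."""
    rows = signal[: sps * n_symbols].reshape(n_symbols, sps)
    # normalise to [0, 1] so the array behaves like ordinary image input
    return (rows - rows.min()) / (rows.max() - rows.min() + 1e-12)

img = symbol_image(y, sps=8, n_symbols=128)        # y from the sketch above
```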
And step two, training the reference model. A residual network structure model, e.g. ResNet-20, is designed for modulation mode recognition. As shown in fig. 4, the model contains three BLOCKs composed of 1 × 8, 1 × 5 and 1 × 3 convolution kernels. Because of the increased number of layers, each BLOCK includes a Skip-Connect module that realizes residual transfer and avoids vanishing gradients during training; the final layer uses average pooling and a fully connected layer to produce the output.
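A hedged PyTorch sketch of such a structure is given below; the channel width, stem convolution, and batch normalisation are assumptions added so the model runs, while the three kernel sizes, skip connections, pooling, and 9-class head follow the description:

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Residual block with a Skip-Connect path, as described for fig. 4."""
    def __init__(self, ch, k):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, (1, k), padding="same")
        self.bn1 = nn.BatchNorm2d(ch)
        self.conv2 = nn.Conv2d(ch, ch, (1, k), padding="same")
        self.bn2 = nn.BatchNorm2d(ch)
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.act(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act(out + x)              # residual transfer via the skip path

class ModRecNet(nn.Module):
    """Three BLOCKs with 1x8, 1x5, 1x3 kernels, then average pooling
    and a fully connected head for the 9 modulation classes."""
    def __init__(self, n_classes=9, ch=32):
        super().__init__()
        self.stem = nn.Conv2d(1, ch, (1, 8), padding="same")   # assumed stem
        self.blocks = nn.Sequential(Block(ch, 8), Block(ch, 5), Block(ch, 3))
        self.head = nn.Linear(ch, n_classes)

    def forward(self, x):
        x = self.blocks(self.stem(x))
        x = x.mean(dim=(2, 3))                # global average pooling
        return self.head(x)

logits = ModRecNet()(torch.randn(4, 1, 128, 8))   # batch of symbol images
```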
Example 1: as shown in fig. 5, ResNet-20 is a convolutional network structure composed of 20 layers of residual modules, using Skip-Connect to realize residual transfer and improve the effectiveness of backpropagation. The network recognition results are shown in fig. 6. Thanks to the residual modules, the training curve converges rapidly, the validation error decreases continuously, and the average recognition rate over the 9 classes reaches 96%. Although ResNet-20 has more layers, its FLOPs are only 81.5M owing to the convolutional layers; the recognition rate is high, reaching 98.8%, and the inference speed reaches about 1400 inferences per second on a GTX960M graphics card. The ResNet-20 recognition accuracy results are shown in the following table:
[ResNet-20 recognition accuracy table reproduced only as an image in the original; contents not recoverable.]
and step three, automatic sparse pruning. To realize automatic setting of the pruning parameters, the state space of the deep deterministic policy gradient (DDPG) algorithm must first be defined. For each convolutional layer, the state is defined as:

s_t = (t, n, c, h, w, stride, k, FLOPs[t], reduced, rest, a_{t-1})

where t is the layer index and FLOPs[t] is the number of floating point operations of one forward pass through the current layer, used to measure computation. To realize reinforcement-learning actions over a continuous domain, the sparsity ratio parameter is expressed as an action a_t with value range [0, 1]. Meanwhile, a truncated (clipped) normal distribution is used in action generation to represent the noise process:
a′_t ∼ TN( μ(s_t | θ^μ), σ^2, 0, 1 )
the reward function is defined as R = −Error · log(FLOPs): the product of the error rate and the logarithm of the single-pass floating point operation count, negated so that maximizing the reward ensures a certain accuracy while reducing computation.
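As a small sketch, the state tuple and reward translate directly into code. The dictionary keys for the layer description are hypothetical names, and the minus sign in the reward is an assumption chosen so that maximizing the reward simultaneously lowers the error rate and the FLOPs (the original formula survives only as an unreadable image):

```python
import math

def layer_state(t, layer, flops_t, reduced, rest, a_prev):
    """State s_t = (t, n, c, h, w, stride, k, FLOPs[t], reduced, rest, a_{t-1})."""
    return (t, layer["n"], layer["c"], layer["h"], layer["w"],
            layer["stride"], layer["k"], flops_t, reduced, rest, a_prev)

def reward(error_rate, flops):
    """R = -Error * log(FLOPs); sign assumed so that larger reward is better."""
    return -error_rate * math.log(flops)
```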
In the DDPG algorithm, the agent's action at each time step is computed by a_t = μ(s_t); the function μ is called the deterministic action policy. To approximate the policy function effectively, a network parameterized by θ^μ, called the policy network, is used. The expected value of taking action a_t in state s_t and thereafter acting according to the deterministic policy μ is defined with the Bellman equation as:

Q^μ(s_t, a_t) = E[ r(s_t, a_t) + γ · Q^μ(s_{t+1}, μ(s_{t+1})) ]

where r is the reward function. In the DDPG algorithm, a policy network (actor) with parameters θ^μ represents the deterministic policy a = μ(s | θ^μ), and a value network (critic) with parameters θ^Q represents the action-value function Q(s, a | θ^Q), used to solve the Bellman equation. The objective function of the DDPG algorithm is defined as the expected discounted cumulative reward:

J_β(μ) = E_μ[ r_0 + γ·r_1 + γ^2·r_2 + … + γ^n·r_n ]

Finding the optimal deterministic policy μ* is equivalent to selecting the policy that maximizes this objective:

μ* = argmax_μ J_β(μ)
Meanwhile, to overcome the instability caused by frequent gradient updates of the value network in the DDPG algorithm, two neural networks are established for each of the policy network and the value network: an online network and a target network. The deep deterministic policy gradient algorithm proceeds as follows:

Initialization: randomly initialize the weights of the online networks Q(s, a | θ^Q) and μ(s | θ^μ); initialize the target networks Q′ and μ′ with the same weights as the online networks; initialize the experience replay pool.

For t = 1 to T:

compute the action of the current time step a_t = μ(s_t; θ^μ) + N_t, where N_t is a noise model;

execute action a_t, and record the reward r_t and the new state s_{t+1};

store the experience (s_t, a_t, r_t, s_{t+1}) in the experience pool;

randomly sample a small batch of samples {s_i, a_i, r_i, s_{i+1}} from the experience pool and compute

y_i = r_i + γ · Q′(s_{i+1}, μ′(s_{i+1} | θ^{μ′}) | θ^{Q′})

update the value network by minimizing the loss

L = (1/M) · Σ_i ( y_i − Q(s_i, a_i | θ^Q) )^2

update the policy network using the deterministic policy gradient

∇_{θ^μ} J ≈ (1/M) · Σ_i ∇_a Q(s, a | θ^Q)|_{s=s_i, a=μ(s_i)} · ∇_{θ^μ} μ(s | θ^μ)|_{s=s_i}

update the target networks:

θ^{Q′} ← τ·θ^Q + (1 − τ)·θ^{Q′},  θ^{μ′} ← τ·θ^μ + (1 − τ)·θ^{μ′}

End

(M denotes the minibatch size.)
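The loop above maps almost line-for-line onto PyTorch. The following sketch assumes hypothetical `actor(s)` and `critic(s, a)` modules with exactly those call signatures and a minibatch already drawn from the replay pool; it illustrates the standard DDPG update, not the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, actor_t, critic_t, batch, opt_a, opt_c,
                gamma=0.99, tau=0.01):
    """One DDPG step over a replay minibatch (s, a, r, s_next)."""
    s, a, r, s_next = batch
    with torch.no_grad():
        y = r + gamma * critic_t(s_next, actor_t(s_next))   # target y_i
    critic_loss = F.mse_loss(critic(s, a), y)               # value-network loss L
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    actor_loss = -critic(s, actor(s)).mean()                # deterministic policy gradient
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    # soft target updates: theta' <- tau*theta + (1 - tau)*theta'
    for p_t, p in zip(critic_t.parameters(), critic.parameters()):
        p_t.data.mul_(1 - tau).add_(tau * p.data)
    for p_t, p in zip(actor_t.parameters(), actor.parameters()):
        p_t.data.mul_(1 - tau).add_(tau * p.data)
```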
and after initialization is finished, the clipping-channel search is performed on each layer in sequence according to the sparsity-constrained pruning algorithm.
Channels can be sorted by computing filter norms, or selected by solving a sparse regression problem. Assume the feature layer to be clipped contains c channels, with n filters of size c × k_h × k_w acting on the layer. The layer produces a total of N convolution-filtered samples, and the layer's convolution produces N × n outputs. The goal is to reduce the number of channels to c′ (0 ≤ c′ ≤ c) while keeping the change between the original and compressed outputs as small as possible, so the following objective function is defined by minimizing the reconstruction error:
min_{α,W} (1/2N) · ‖ Y − Σ_{i=1..c} α_i · X_i · W_i^T ‖_F^2,  s.t. ‖α‖_0 ≤ c′
where Y is the N × n output matrix, X_i is the N × (k_h·k_w) input sample matrix of the i-th channel, and W_i is the n × (k_h·k_w) filter matrix of the i-th channel. As shown in fig. 7, the parameter vector α is the sparsity-constraint vector, constrained so that its 0-norm (the number of non-zero entries) does not exceed c′. The sparsity constraint thus effectively reduces the number of channels taking part in the convolution, while minimizing the Frobenius norm between the original and compressed outputs ensures that the originally important filters are not changed.
To solve the above problem, the ℓ0-constrained optimization is relaxed and converted into an ℓ1-regularized optimization problem:

min_{α,W} (1/2N) · ‖ Y − Σ_{i=1..c} α_i · X_i · W_i^T ‖_F^2 + λ·‖α‖_1

s.t. ‖α‖_0 ≤ c′
the relaxed problem contains two variables, so an iterative optimization strategy can be adopted for its solution. First, the variable W is fixed and the variable α is solved for; the problem then degenerates to the classical Lasso problem:

α̂ = argmin_α (1/2N) · ‖ Y − Σ_{i=1..c} α_i · Z_i ‖_F^2 + λ·‖α‖_1,  s.t. ‖α‖_0 ≤ c′,  where Z_i = X_i · W_i^T
Channels whose entries in the optimal solution α̂ are zero are clipped; this sub-problem can be solved with a first-order approximate gradient descent method such as FISTA. Second, α is fixed and the parameter matrix W is updated. The optimization then becomes a least-squares problem:
min_{W′} ‖ Y − X′ · (W′)^T ‖_F^2,  where X′ = [α_1·X_1, α_2·X_2, …, α_c·X_c]
and the final clipped channels and updated weights are obtained by iteratively optimizing the two formulas above.
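The alternating scheme can be sketched compactly with scikit-learn's coordinate-descent Lasso standing in for FISTA (a substitution, not the patent's solver) and a direct least-squares solve for W. Here a larger λ prunes more channels, whereas the patent drives the ratio through DDPG:

```python
import numpy as np
from sklearn.linear_model import Lasso

def prune_layer(X, W, Y, lam=1e-3, n_iter=3):
    """Alternating optimisation for one layer.
    X: (c, N, d) per-channel input patches, W: (c, n, d) filters,
    Y: (N, n) original responses, with d = kh * kw."""
    c, N, d = X.shape
    alpha = np.ones(c)
    for _ in range(n_iter):
        # (1) fix W, solve the Lasso sub-problem for the channel scales alpha
        D = np.stack([(X[i] @ W[i].T).ravel() for i in range(c)], axis=1)
        alpha = Lasso(alpha=lam, fit_intercept=False).fit(D, Y.ravel()).coef_
        keep = np.flatnonzero(alpha)       # channels with alpha_i != 0 survive
        # (2) fix alpha, solve the least-squares sub-problem for the kept W_i
        Xs = np.concatenate([alpha[i] * X[i] for i in keep], axis=1)  # (N, c'*d)
        Wf, *_ = np.linalg.lstsq(Xs, Y, rcond=None)                   # (c'*d, n)
        for j, i in enumerate(keep):
            W[i] = Wf[j * d:(j + 1) * d].T
    return np.flatnonzero(alpha), W
```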
And step four, outputting the pruned model. After pruning, the whole model is fine-tuned to obtain the final weights W′.
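An illustrative fine-tuning loop for step four; the optimizer choice and hyperparameters are assumptions, as the patent does not specify them:

```python
import torch
import torch.nn.functional as F

def fine_tune(model, loader, epochs=5, lr=1e-4):
    """Briefly retrain the pruned model so accuracy recovers after clipping."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            loss = F.cross_entropy(model(images), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```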
Example 2: the expected pruning ratios in the algorithm are set to 25%, 50% and 75% respectively, and the per-layer pruning ratios are allocated automatically by the proposed algorithm so that the overall pruning ratio of the model reaches the expected value. In the DDPG algorithm, the Actor's variable size is set to 64, the learning rate to 0.01, and the maximum number of iterations to 200 steps.
As shown in fig. 8, the automatic pruning method continuously adjusts the pruning ratio of each layer under the control of the DDPG algorithm, and the recognition accuracy on the validation set keeps improving as the number of iterations increases. Meanwhile, under the expected-pruning-ratio constraint, the average ratio across layers satisfies the expected value during the dynamic adjustment of DDPG, and the model FLOPs gradually and stably reach the expected pruning target, as shown in fig. 9. The recognition accuracy results of the automatic pruning method are as follows:
[Automatic pruning accuracy table reproduced only as an image in the original; contents not recoverable.]
it can be seen that the DDPG-based automatic pruning method can automatically select the optimal per-layer pruning ratio under different expected overall ratios and obtain good recognition accuracy. At low pruning ratios (25% and 50%), the test accuracy of the pruned model drops only slightly: FLOPs are reduced by more than 50%, inference speed is doubled, and accuracy drops by no more than 0.2%. At the larger pruning ratio (75%), model FLOPs are reduced by 78.2%, inference speed increases 4×, and test accuracy drops by no more than 2%, reaching 96.87%.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. Those skilled in the art will appreciate that those matters not described in detail in the present specification are well known in the art.

Claims (5)

1. An automatic model compression method for communication signal modulation recognition is characterized by comprising the following steps:
(1) arranging the received communication signal into symbol images, a specified number of symbols at a time in sequence, converting the signal into a modulation-pattern texture image for recognition;
(2) constructing a residual network structure model;
(3) setting an optimal sparsity ratio parameter, and performing channel pruning on each convolutional layer of the residual network model according to the reconstruction error;
(4) fine-tuning the pruned model obtained in step (3) to obtain the final model weights.
2. The method of claim 1, characterized in that:
in step (3), the optimal sparsity ratio parameter is obtained as follows:
defining a deep deterministic policy gradient state space according to the parameters of each convolutional layer of the residual network model and the weights of the current filters, constructing a reward function from the product of the target error rate and the number of floating point operations of a single forward pass, determining the deterministic policy that maximizes the reward through the DDPG algorithm, and obtaining the optimal sparsity ratio parameter a_t with value range [0, 1].
3. The method of claim 1, characterized in that:
in step (3), the channel pruning of each convolutional layer of the residual network model specifically comprises:
according to the optimal sparsity ratio parameter, clipping channels are selected for each convolutional layer in sequence; the feature layer to be pruned is designated and its number of channels reduced, with an objective function defined by the minimum reconstruction error used for the selection, keeping the change between the original output and the compressed output as small as possible.
4. The method of claim 1, characterized in that:
the feature layer comprises c channels and is acted on by n filters of size c × k_h × k_w; N convolution-filtered samples are generated, and the convolutional layer produces N × n output results.
5. The method of claim 1, characterized in that:
the objective function defined by the minimum reconstruction error is specifically:

min_{α,W} (1/2N) · ‖ Y − Σ_{i=1..c} α_i · X_i · W_i^T ‖_F^2,  s.t. ‖α‖_0 ≤ c′

where α is the channel weighting coefficient vector, W is the full filter weight matrix, N is the total number of samples, Y is the output response matrix, X is the input matrix, ‖·‖_F is the Frobenius norm, c is the number of channels, and c′ is the expected number of channels.
CN202110214138.1A 2021-02-25 2021-02-25 Automatic model compression method for communication signal modulation recognition Pending CN113537452A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110214138.1A CN113537452A (en) 2021-02-25 2021-02-25 Automatic model compression method for communication signal modulation recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110214138.1A CN113537452A (en) 2021-02-25 2021-02-25 Automatic model compression method for communication signal modulation recognition

Publications (1)

Publication Number Publication Date
CN113537452A true CN113537452A (en) 2021-10-22

Family

ID=78094418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110214138.1A Pending CN113537452A (en) 2021-02-25 2021-02-25 Automatic model compression method for communication signal modulation recognition

Country Status (1)

Country Link
CN (1) CN113537452A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340225A (en) * 2020-02-28 2020-06-26 中云智慧(北京)科技有限公司 Deep convolution neural network model compression and acceleration method
CN111898591A (en) * 2020-08-28 2020-11-06 电子科技大学 Modulation signal identification method based on pruning residual error network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340225A (en) * 2020-02-28 2020-06-26 中云智慧(北京)科技有限公司 Deep convolution neural network model compression and acceleration method
CN111898591A (en) * 2020-08-28 2020-11-06 电子科技大学 Modulation signal identification method based on pruning residual error network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HUIXIN ZHAN et al.: "Deep Model Compression via Deep Reinforcement Learning", arXiv:1912.02254v1, pages 1-12 *
YIHUI HE et al.: "AMC: AutoML for Model Compression and Acceleration on Mobile Devices", Proceedings of the European Conference on Computer Vision (ECCV), pages 784-800 *
SONG Wanjun et al.: "An automatic neural network architecture search algorithm for communication signal modulation recognition", Video Engineering, vol. 44, no. 09, pages 58-65 *

Similar Documents

Publication Publication Date Title
CN110516596B (en) Octave convolution-based spatial spectrum attention hyperspectral image classification method
CN110971279B (en) Intelligent beam training method and precoding system in millimeter wave communication system
CN108564006B (en) Polarized SAR terrain classification method based on self-learning convolutional neural network
CN103927531B (en) It is a kind of based on local binary and the face identification method of particle group optimizing BP neural network
CN112153616B (en) Power control method in millimeter wave communication system based on deep learning
CN112512069B (en) Network intelligent optimization method and device based on channel beam pattern
Teganya et al. Data-driven spectrum cartography via deep completion autoencoders
CN113115344B (en) Unmanned aerial vehicle base station communication resource allocation strategy prediction method based on noise optimization
CN112149721A (en) Target detection method for reducing labeling requirements based on active learning
CN115641583B (en) Point cloud detection method, system and medium based on self-supervision and active learning
CN111693993B (en) Self-adaptive 1-bit data radar imaging method
CN110689065A (en) Hyperspectral image classification method based on flat mixed convolution neural network
CN112468249A (en) 5G wireless channel multipath clustering algorithm based on adaptive nuclear power density
CN112836822B (en) Federal learning strategy optimization method and device based on width learning
CN114912486A (en) Modulation mode intelligent identification method based on lightweight network
CN115169575A (en) Personalized federal learning method, electronic device and computer readable storage medium
CN114912489A (en) Signal modulation identification method
Girish et al. Lilnetx: Lightweight networks with extreme model compression and structured sparsification
Sotiroudis et al. Ensemble Learning for 5G Flying Base Station Path Loss Modelling
CN113537452A (en) Automatic model compression method for communication signal modulation recognition
CN117295090A (en) Resource allocation method for Unmanned Aerial Vehicle (UAV) through-sense integrated system
CN116112934A (en) End-to-end network slice resource allocation method based on machine learning
CN113033653B (en) Edge-cloud cooperative deep neural network model training method
CN116017280A (en) Rapid indoor path tracking method of target portable-free equipment
CN112966812A (en) Automatic neural network structure searching method for communication signal modulation recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination