CN114692774A - Method for adjusting compression ratio of encoder and method and device for feeding back data - Google Patents


Info

Publication number
CN114692774A
CN114692774A (application CN202210369304.XA)
Authority
CN
China
Prior art keywords
target data
compression ratio
encoder
compression rate
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210369304.XA
Other languages
Chinese (zh)
Inventor
蒋玉骅 (Jiang Yuhua)
叶宏远 (Ye Hongyuan)
高飞飞 (Gao Feifei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Tsinghua University
Priority to CN202210369304.XA
Publication of CN114692774A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods


Abstract

The invention relates to the field of data processing and provides a method for adjusting the compression ratio of an encoder, and a method and a device for feeding back data. The method comprises: inputting target data into a pre-constructed first neural network model to obtain the optimal compression ratio of the target data, the first neural network model having been obtained by training a neural network on target data samples labeled with optimal-compression-ratio labels; and adjusting the compression ratio of the encoder to the optimal compression ratio. Because inputting the target data into the first neural network model yields the optimal compression ratio, and the encoder's compression ratio is then adjusted to that value, adaptive adjustment of the compression ratio is achieved and the data compression process becomes more flexible and reliable, solving the problems of existing data compression schemes, which cannot adapt to dynamic changes of the environment and whose compression process is not flexible or reliable.

Description

Method for adjusting compression ratio of encoder and method and device for feeding back data
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method for adjusting a compression ratio of an encoder, and a method and an apparatus for feeding back data.
Background
In a traditional autoencoder, the encoder can only compress data at a fixed compression ratio that cannot be adjusted, so the data compression process cannot adapt to dynamic changes of the environment and is not flexible or reliable enough.
Disclosure of Invention
The invention provides a method for adjusting the compression ratio of an encoder, and a method and a device for feeding back data, to overcome the defects of the prior art: the compression ratio cannot be adjusted, the data compression process cannot adapt to dynamic changes of the environment, and the compression process is not flexible or reliable.
In a first aspect, the present invention provides a method for adjusting a compression ratio of an encoder, the method comprising:
inputting target data into a first neural network model which is constructed in advance to obtain the optimal compression ratio of the target data; the first neural network model is obtained by training a neural network based on a target data sample marked with an optimal compression rate label;
adjusting a compression rate of an encoder to the optimal compression rate.
According to the method for adjusting the compression ratio of the encoder provided by the invention, the target data is input into a first neural network model which is constructed in advance to obtain the optimal compression ratio of the target data, and the method comprises the following steps:
analyzing the sparsity of the target data to obtain a sparsity analysis result;
and determining the optimal compression rate of the target data based on the sparsity analysis result.
According to the method for adjusting the compression rate of the encoder provided by the invention, the determining the optimal compression rate of the target data based on the sparsity analysis result comprises the following steps:
based on the sparsity analysis result, selecting as the optimal compression ratio, from among a plurality of preset theoretical compression ratios, the one whose feedback accuracy is higher than a preset accuracy threshold and whose feedback bit count is the smallest.
According to the method for adjusting the compression rate of the encoder provided by the invention, the training process of the first neural network model comprises the following steps:
acquiring a plurality of target data samples with different sparsity, and respectively determining an optimal compression rate label corresponding to each target data sample;
and training the pre-constructed neural network through the target data sample marked with the optimal compression rate label to obtain a first neural network model.
According to the method for adjusting the compression rate of the encoder provided by the invention, the training of the pre-constructed neural network is carried out through the target data sample marked with the optimal compression rate label, and the method comprises the following steps:
introducing a weighted term into the loss function of the neural network to obtain a weighted loss function;
and training the neural network through a target data sample marked with an optimal compression rate label based on the weighted loss function.
According to the method for adjusting the compression rate of the encoder provided by the invention, the adjusting the compression rate of the encoder to the optimal compression rate comprises the following steps:
and adjusting the number of nodes of a full connection layer in a second neural network model corresponding to an encoder so as to adjust the compression rate of the encoder to the optimal compression rate.
In a second aspect, the present invention further provides a method for feeding back data, including:
adjusting the compression rate of the encoder to the optimal compression rate according to any one of the above methods for adjusting the compression rate of the encoder;
compressing the target data according to the optimal compression rate;
feeding back the compressed target data and preset identification information corresponding to the optimal compression ratio to a base station;
the base station is configured to determine the optimal compression ratio according to the preset identification information, and decompress the compressed target data according to the optimal compression ratio.
According to the feedback method of the data provided by the invention, the target data is the state information of the downlink channel.
In a third aspect, the present invention further provides an apparatus for adjusting a compression ratio of an encoder, the apparatus comprising:
the first processing module is used for inputting target data into a first neural network model which is constructed in advance to obtain the optimal compression ratio of the target data; the first neural network model is obtained by training a neural network based on a target data sample marked with an optimal compression rate label;
and the adjusting module is used for adjusting the compression rate of the encoder to the optimal compression rate.
In a fourth aspect, the present invention further provides an apparatus for feeding back data, the apparatus comprising:
a second processing module, configured to adjust the compression ratio of the encoder to the optimal compression ratio according to any of the above methods for adjusting the compression ratio of the encoder;
the compression module is used for compressing the target data according to the optimal compression rate;
the feedback module is used for feeding back the compressed target data and preset identification information corresponding to the optimal compression ratio to a base station;
the base station is configured to determine the optimal compression ratio according to the preset identification information, and decompress the compressed target data according to the optimal compression ratio.
According to the method for adjusting the compression ratio of an encoder and the method and device for feeding back data provided by the invention, inputting the target data into the first neural network model yields the optimal compression ratio of the target data; the encoder's compression ratio is then adjusted to that optimal value, realizing adaptive adjustment of the compression ratio and making the data compression process more flexible and reliable.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow chart of a method for adjusting the compression ratio of an encoder according to the present invention;
FIG. 2 is a schematic diagram of the adjustment of the compression ratio of the encoder, data compression, feedback, and decompression;
FIG. 3 is a schematic diagram of the structure of each neural network model;
FIG. 4 is a flow chart of a method for feeding back data provided by the present invention;
FIG. 5 is a schematic structural diagram of an apparatus for adjusting the compression ratio of an encoder according to the present invention;
FIG. 6 is a schematic diagram of a data feedback device according to the present invention;
fig. 7 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
The following describes, with reference to fig. 1 to 7, an encoder compression ratio adjustment method, a data feedback method, an encoder compression ratio adjustment device, a data feedback device, and an electronic device built based on the encoder compression ratio adjustment method according to an embodiment of the present invention.
Fig. 1 illustrates an adjusting method for a compression rate of an encoder according to an embodiment of the present invention, where the method includes:
step 110: inputting target data into a first neural network model which is constructed in advance to obtain the optimal compression ratio of the target data; the first neural network model is obtained by training a neural network based on a target data sample marked with an optimal compression rate label;
step 120: the compression ratio of the encoder is adjusted to the optimal compression ratio.
It can be understood that the target data in this embodiment refers to data to be compressed, and when the above method for adjusting the compression rate of the encoder is applied to a frequency division duplex system, the target data may be downlink channel state information, i.e. a channel matrix.
In an exemplary embodiment, the process of inputting the target data into the first neural network model which is constructed in advance to obtain the optimal compression rate of the target data may specifically include:
analyzing the sparsity of the target data to obtain a sparsity analysis result;
based on the sparsity analysis result, an optimal compression ratio of the target data is determined.
Further, the process of determining the optimal compression ratio of the target data based on the sparsity analysis result may specifically include:
selecting as the optimal compression ratio, from among a plurality of preset theoretical compression ratios and based on the sparsity analysis result, the one whose feedback accuracy is higher than a preset accuracy threshold and whose feedback bit count is the smallest.
In this embodiment, the number of preset theoretical compression ratios, i.e. the number of selectable compression ratios, may be K. The function of the first neural network model is mainly to select, from the K theoretical compression ratios, the one that balances feedback overhead and feedback accuracy; that is, the optimal compression ratio in this embodiment is the one whose feedback bit count is as small as possible on the premise that the feedback accuracy stays above the preset accuracy threshold.
It can be understood that, once the accuracy threshold corresponding to the feedback accuracy is fixed, the sparser the target data, the fewer bits need to be fed back and the lower the feedback overhead; conversely, the less sparse the target data, the more bits must be fed back and the higher the feedback overhead.
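The selection rule described above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation: the function name and the `accuracy`/`bits` lookup tables (estimated feedback accuracy and feedback bit count per candidate ratio) are assumptions introduced here.

```python
def select_optimal_rate(rates, accuracy, bits, threshold):
    """Among the K candidate compression ratios, pick the one whose
    feedback accuracy clears the threshold with the fewest feedback
    bits; fall back to the most accurate ratio if none qualifies."""
    feasible = [r for r in rates if accuracy[r] >= threshold]
    if not feasible:
        # nothing meets the threshold: return the most accurate ratio
        return max(rates, key=lambda r: accuracy[r])
    return min(feasible, key=lambda r: bits[r])
```

A sparser input pushes more candidates over the threshold, so the rule naturally settles on fewer feedback bits, matching the overhead behaviour described above.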
FIG. 2 shows the implementation principle of the method for adjusting the compression ratio of the encoder. In FIG. 2, H denotes the input target data, such as the channel matrix. Passing H through the first neural network model yields the optimal compression ratio, which is input to the adaptive module to adjust the encoder's compression ratio; specifically, the adjustment is realized by changing the number of nodes of the fully connected layer in the second neural network model, so that the second neural network model corresponding to the encoder compresses the target data at the optimal compression ratio. S denotes the codeword obtained after the encoder compresses the target data. The codeword S and the preset identification information corresponding to the optimal compression ratio are input to the decoder, and the third neural network model corresponding to the decoder decompresses S at the optimal compression ratio to obtain the decompressed data Ĥ.
In addition, the dashed box in FIG. 2, in which a fixed compression ratio is input to the adaptive module, represents the conventional compression scheme and is shown mainly for comparison with the adaptive-compression-ratio scheme provided by this embodiment.
In an exemplary embodiment, the training process of the first neural network model specifically includes:
acquiring a plurality of target data samples with different sparsity, and respectively determining an optimal compression rate label corresponding to each target data sample;
and training the pre-constructed neural network through the target data sample marked with the optimal compression rate label to obtain a first neural network model.
Further, the process of training the pre-constructed neural network through the target data sample marked with the optimal compression rate label may specifically include:
introducing a weighted term into a loss function of the neural network to obtain a weighted loss function;
and training the neural network through the target data sample marked with the optimal compression rate label based on the weighted loss function.
Referring to FIG. 3, the first neural network model in this embodiment may also be called a Sparsity Analysis Model (SAM). It consists mainly of 4 fully connected layers; the activation function of the last layer is Softmax, which outputs K numbers in the interval [0, 1], each representing the probability that the corresponding compression ratio is the optimal one. The compression ratio with the highest probability is adopted as the optimal compression ratio.
For this classification network, the number of neurons in the output layer equals the number K of selectable compression ratios, and the output activation is the Softmax function, which can be expressed as:
Softmax(z)_k = exp(z_k) / Σ_{j=1}^{K} exp(z_j)
where z denotes the output vector of the last layer; its length equals the number of selectable compression ratios, i.e. len(z) = K.
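The output stage of the SAM can be sketched as follows; this is a minimal illustration of the Softmax formula above and of picking the highest-probability ratio, with all function names assumed here rather than taken from the patent.

```python
import math

def softmax(z):
    """Softmax over the K output logits of the sparsity-analysis model;
    returns K probabilities in [0, 1] that sum to 1."""
    m = max(z)                              # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def pick_rate(logits, rates):
    """Adopt the compression ratio with the highest predicted probability."""
    probs = softmax(logits)
    return rates[probs.index(max(probs))]
```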
However, the above network inevitably makes errors. To raise the probability that the feedback accuracy is kept above the accuracy threshold, this embodiment weights the loss function during training, i.e. introduces a weighting term into it. Here the loss function is a cross-entropy function; weighting it suppresses the misjudgment in which input target data are misclassified to a smaller compression ratio. After the weighting term is introduced, the cross-entropy function is:
L = − Σ_i Σ_{k=1}^{K} γ_k · y_{ik} · log(ŷ_{ik})
where y_{ik} is the k-th element of the one-hot encoded vector of the compression-ratio label of the i-th target datum, ŷ_{ik} is the neural network's estimate of that element, and γ_k is the weight penalizing misclassification of target data to a smaller compression ratio; a larger penalty weight may be used for a smaller compression ratio.
It is easy to see that properly weighting the loss function when training the first neural network model suppresses, without reducing the network's classification accuracy, the behaviour of mismatching target data to a compression ratio smaller than the optimal one, thereby raising the probability that the feedback accuracy stays above the preset accuracy threshold.
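The weighted cross-entropy above can be written out directly. This is a sketch under the stated assumptions: `y_true` are one-hot labels, `y_pred` are predicted probabilities, and `gamma` holds the per-class penalty weights γ_k (larger for smaller compression ratios); the API is illustrative, not the patent's.

```python
import math

def weighted_cross_entropy(y_true, y_pred, gamma):
    """Weighted cross-entropy L = -Σ_i Σ_k γ_k · y_ik · log(ŷ_ik),
    summed over a batch of one-hot labels and predicted probabilities."""
    loss = 0.0
    for yt, yp in zip(y_true, y_pred):
        for k, (t, p) in enumerate(zip(yt, yp)):
            if t:  # only the true class contributes under one-hot labels
                loss -= gamma[k] * math.log(max(p, 1e-12))  # clamp avoids log(0)
    return loss
```

Raising γ_k for the small-ratio classes makes mistakes toward those classes costlier, which is exactly the suppression effect described above.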
In an exemplary embodiment, the process of adjusting the compression ratio of the encoder to the optimal compression ratio may specifically include:
and adjusting the number of nodes of the full connection layer in the second neural network model corresponding to the encoder so as to adjust the compression ratio of the encoder to the optimal compression ratio.
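The node-count adjustment can be sketched as selecting, per compression ratio, a fully connected head with the matching number of output nodes. The `heads` mapping, function names, and the example input length of 2048 are assumptions made for illustration; the patent only fixes the principle that output length = input length / CR.

```python
def codeword_length(input_dim, compression_ratio):
    """A compression ratio CR maps an input of length N to N // CR
    codeword elements, i.e. fewer fully-connected output nodes."""
    return input_dim // compression_ratio

def encoder_fc(features, heads, rate):
    """Apply the fully connected head whose node count matches the
    selected compression ratio; `heads` maps each selectable ratio to
    its weight matrix (one row per output node). Plain matrix-vector
    product for illustration."""
    return [sum(w * x for w, x in zip(row, features))
            for row in heads[rate]]
```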
Therefore, with the method for adjusting the compression ratio of an encoder provided by this embodiment, the first neural network model analyses the sparsity of the target data and outputs an optimal compression ratio that keeps the feedback accuracy above the accuracy threshold while limiting feedback overhead; adjusting the encoder's compression ratio to this value realizes adaptive, dynamic adjustment of the compression ratio and makes the data compression process more flexible and reliable.
Fig. 4 shows a feedback method for data according to an embodiment of the present invention, where the method may be applied to a user side, and includes:
step 410: adjusting the compression ratio of the encoder to the optimal compression ratio according to the method for adjusting the compression ratio of the encoder;
step 420: compressing the target data according to the optimal compression rate;
step 430: feeding back the compressed target data and preset identification information corresponding to the optimal compression ratio to a base station;
the base station is used for determining the optimal compression ratio according to the preset identification information and decompressing the compressed target data through the optimal compression ratio.
In an exemplary embodiment, the target data may be downlink channel state information.
In a Frequency-Division Duplex (FDD) system, the uplink and downlink channels are not directly reciprocal: the base station cannot acquire the downlink channel directly from the uplink channel, so the user terminal must feed the channel back to the base station. Feedback of Channel State Information (CSI) is therefore key to a MIMO system.
With the development of artificial intelligence, 6G-oriented wireless communication has begun to adopt explicit CSI feedback schemes based on an autoencoder: the terminal compresses the explicit CSI with an encoder to obtain a codeword, the user side feeds the codeword back to the base station, and the base station restores the codeword to the explicit CSI with a decoder. Using the CSI fed back by the user side, the base station can run a better beamforming algorithm to counteract channel effects.
However, a conventional encoder supports only fixed-compression-ratio feedback; since the communication environment changes constantly, relying on a fixed compression ratio wastes feedback bits and resources.
It is contemplated that, in a changing environment, a higher compression ratio should be used when the channel matrix is sparse, and a lower one otherwise. Therefore, in this embodiment the encoder's compression ratio is adjusted according to the sparsity of the downlink channel state information: the information is compressed at the optimal compression ratio, the preset identification information corresponding to that ratio is fed back to the base station together with the compressed codeword, and the base station decompresses the codeword at the same ratio to restore the information, realizing adaptive multi-compression-ratio feedback of the downlink channel state information.
In practical application, taking downlink channel state information as the target data: besides the compressed target data, the user terminal also feeds back the preset identification information corresponding to the optimal compression ratio. For example, the preset identification information may be a category code for the compression ratio: corresponding category codes are assigned in advance to the selectable compression ratios, and the mapping between compression ratios and category codes is pre-configured on both the user terminal and the base station side, so the base station can determine the compression ratio directly from the received category code.
When there are K selectable compression ratios, feeding back the selected one (i.e. the preset identification information corresponding to the optimal compression ratio) needs ⌈log₂ K⌉ bits. Since K is usually less than 10 in this embodiment, the number of bits required to feed back the selected compression ratio is often negligible; the data feedback process therefore does not consume many extra bits on the preset identification information.
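The category-code mechanism and its bit cost can be sketched as below. The concrete ratios {4, 8, 16, 32} and the two-bit codes are illustrative assumptions; the patent only requires a pre-agreed mapping on both sides.

```python
import math

RATE_TO_CODE = {4: 0b00, 8: 0b01, 16: 0b10, 32: 0b11}  # illustrative mapping
CODE_TO_RATE = {c: r for r, c in RATE_TO_CODE.items()}  # same table, BS side

def id_bits(num_rates):
    """Bits needed to feed back which of K ratios was selected: ceil(log2 K)."""
    return math.ceil(math.log2(num_rates))

def base_station_rate(code):
    """BS side: recover the compression ratio from the received category code."""
    return CODE_TO_RATE[code]
```

With K = 4 the identification costs 2 bits, and even K = 10 would cost only 4, consistent with the "negligible overhead" claim above.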
Referring to FIG. 3, on the user equipment (UE) side, the convolutional part of the second neural network model corresponding to the encoder (Encoder CNN in FIG. 3) is built from an Inception block; the leftmost convolution block, ConvBN (Convolution + Batch Norm) in FIG. 3, uses a 3×3 kernel. The Inception block has 3 branches, each processing the input with kernels of a different size; larger kernels are decomposed into two smaller ones by asymmetric factorization to reduce the parameter count.
Of the three branches shown in FIG. 3, the first consists of three convolutions with kernel sizes 1×5, 5×1 and 1×1; the second likewise consists of three convolutions with kernel sizes 1×7, 7×1 and 1×1; the third consists of two convolutions with kernel sizes 3×3 and 1×1.
Finally, the outputs of the branches are concatenated along the channel dimension. Taking downlink channel state information as the target data, the channel matrix has 2 channels, so the concatenated data are compressed back to 2 channels by a 1×1 convolution and then sent to the fully connected layer (Encoder FC in FIG. 3). Several node counts can be configured in this layer, each corresponding to a different selectable compression ratio; FIG. 3 shows the cases CR = 4 and CR = 32. The compression-ratio adjustment inside the encoder can be viewed as the switching principle in FIG. 3: the arrow from the optimal compression ratio to a set of nodes determines which ratio the encoder uses, and compression by the encoder yields the codeword S.
It is easy to see that, in the encoder of this embodiment, all compression ratios share one convolutional neural network, which saves storage of network parameters; the compression ratio is adjusted simply by changing the number of nodes of the fully connected layer in the second neural network model. The encoder can therefore be called an adaptive multi-compression-ratio encoder.
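The parameter saving from the asymmetric factorization described above can be checked with simple counting. The channel count of 16 is an assumed example; only the kernel shapes come from the text.

```python
def conv_params(kh, kw, c_in, c_out):
    """Weight count of a convolution with a kh x kw kernel (bias omitted)."""
    return kh * kw * c_in * c_out

# Asymmetric factorization: replace one 5x5 kernel by a 1x5 followed by a 5x1
full = conv_params(5, 5, 16, 16)                              # 25 * c_in * c_out
factored = conv_params(1, 5, 16, 16) + conv_params(5, 1, 16, 16)  # 10 * c_in * c_out
```

The factored form needs 10 kernel weights per channel pair instead of 25, a 60% reduction regardless of the channel counts.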
On the base station side, decompression is performed by a decoder; considering the larger storage space available there, the decoder part can use different fully connected layers and convolutional neural networks for different compression ratios.
Specifically, in this embodiment the Decoder at the base station (BS) end may be provided with several third neural network models to implement decompression at different compression ratios. Referring to FIG. 3 and taking one third neural network model as an example, its basic building unit is the RDBlock, and the result is output after a GFF (Global Feature Fusion) operation and GRL (Global Residual Learning).
Each RDBlock consists of 3 densely connected layers and an attention module. In FIG. 3 the three densely connected layers are three 5×5 convolutions, and the attention module is an SE (Squeeze-and-Excitation) module consisting mainly of an average pooling layer (AveragePool) and a fully connected layer (FC). The attention module exploits the large differences between rows of the processed data; for the channel-matrix application in particular, this inherits the sparsity of the channel matrix in the delay domain. By applying a weight to each row, the neural network can pay more attention to the rows carrying richer information, which correspond to the non-zero rows of the original channel matrix.
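The row-weighting idea can be sketched with the SE module's two stages. This is an abstraction, not the patent's network: the "excitation" weights, which in the real module come from an FC layer plus a sigmoid, are passed in precomputed here.

```python
def squeeze(matrix):
    """'Squeeze' step: per-row average pooling over the feature map."""
    return [sum(row) / len(row) for row in matrix]

def excite_rows(matrix, row_weights):
    """'Excitation' step, abstracted: scale each row by a weight in (0, 1),
    suppressing near-zero rows and emphasizing informative ones."""
    return [[w * v for v in row] for w, row in zip(row_weights, matrix)]
```

On a delay-domain channel matrix, the near-zero rows average to ~0 in the squeeze step, so they naturally receive small excitation weights.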
Meanwhile, dense connections are also used between the RDBlock layers to improve the performance of the neural network; finally, the 3 densely connected layers and the attention module output data after LRL (Local Residual Learning). In this embodiment, the base-station-side decoder decompresses the codeword to obtain the decompressed data Ĥ.
When the neural networks are trained, the second neural network model in the autoencoder must be trained separately from the first neural network model. Taking the channel matrix as the target data, the training process may specifically include:
firstly, a plurality of channel matrix samples with different sparsity are obtained, and the samples are used for training a second neural network model, so that the situation that encoders under various compression rates are fully trained is prevented.
Secondly, testing the channel matrix samples according to a mode of taking feedback precision and feedback overhead into consideration, determining the optimal compression ratio of each channel matrix sample, and marking a label according to the corresponding optimal compression ratio to obtain the channel matrix sample marked with the optimal compression ratio label.
And finally, putting the channel matrix sample with the label into a neural network for training to obtain a trained first neural network model.
In the practical application process, still taking the target data as the downlink channel state information as an example, the feedback method of the data may specifically include:
Step 1: the frequency division duplex system operates in the adaptive-compression-ratio mode;
Step 2: the user terminal transforms the downlink channel state information from the spatial-frequency domain to the angle-delay domain and compresses it at the corresponding compression ratio;
Step 3: the user terminal feeds the compressed codeword and the category code corresponding to the optimal compression ratio back to the base station;
Step 4: the base station decompresses the received codeword according to the compression ratio indicated by the category code and recovers the original downlink channel state information;
Step 5: the base station performs beamforming with the recovered downlink channel state information, notifies the user terminal after a period of time to carry out the next round of channel information feedback, and the procedure returns to Step 1.
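The five steps above can be sketched as a single feedback round; the helper names (the classifier, the per-category encoder/decoder pairs, and the domain transform) are assumptions for illustration, not APIs from the patent:

```python
def csi_feedback_round(H_spatial, classifier, encoders, decoders, dft):
    """One round of the adaptive-compression CSI feedback loop.

    classifier : first neural network model, returns a category code
    encoders   : per-category encoder halves of the second model
    decoders   : per-category decoder halves at the base station
    dft        : transform from the spatial-frequency to the angle-delay domain
    """
    # Step 2: transform to the angle-delay domain, pick the ratio, compress
    H_delay = dft(H_spatial)
    category = classifier(H_delay)
    codeword = encoders[category](H_delay)
    # Step 3: the UE feeds back codeword + category code (a tuple here)
    feedback = (codeword, category)
    # Step 4: the base station recovers the CSI with the matching decoder
    codeword_rx, category_rx = feedback
    H_recovered = decoders[category_rx](codeword_rx)
    # Step 5: the base station beamforms with H_recovered (not shown)
    return H_recovered
```

With identity stand-ins for the transform and a toy encoder/decoder pair, the round returns whatever the matching decoder reconstructs from the codeword.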
It can be seen that, with the data feedback method provided by the embodiment of the present invention, the compression ratio of the encoder is automatically adjusted by the first neural network model as the environment changes, so that the data to be fed back is compressed at the optimal compression ratio and fed back to the base station.
The following describes the apparatus for adjusting the compression rate of an encoder provided by the present invention, and the apparatus for adjusting the compression rate of an encoder described below and the method for adjusting the compression rate of an encoder described above can be referred to correspondingly.
Fig. 5 shows an apparatus for adjusting the compression ratio of an encoder according to an embodiment of the present invention, which includes:
a first processing module 510, configured to input target data into a first neural network model that is constructed in advance, so as to obtain an optimal compression rate of the target data; the first neural network model is obtained by training a neural network based on a target data sample marked with an optimal compression rate label;
an adjusting module 520, configured to adjust the compression rate of the encoder to an optimal compression rate.
In an exemplary embodiment, the first processing module 510 may be specifically configured to:
analyzing the sparsity of the target data to obtain a sparsity analysis result;
based on the sparsity analysis result, an optimal compression ratio of the target data is determined.
Further, the first processing module 510 may specifically determine an optimal compression rate of the target data based on the sparsity analysis result by the following steps:
Based on the sparsity analysis result, the theoretical compression ratio that, among a plurality of preset theoretical compression ratios, achieves a feedback accuracy above a preset accuracy threshold with the smallest number of feedback bits is taken as the optimal compression ratio.
In an exemplary embodiment, the apparatus for adjusting a compression rate of an encoder may further include:
the model training module is used for acquiring a plurality of target data samples with different sparsity and respectively determining an optimal compression rate label corresponding to each target data sample; and training the pre-constructed neural network through the target data sample marked with the optimal compression rate label to obtain a first neural network model.
Further, the model training module may implement the training of the pre-constructed neural network on target data samples labeled with optimal-compression-ratio labels as follows:
introducing a weighted term into a loss function of the neural network to obtain a weighted loss function;
and training the neural network through the target data sample marked with the optimal compression rate label based on the weighted loss function.
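As one plausible reading of the weighted loss, a per-class weight can simply scale the cross-entropy term, so that rare optimal-compression-ratio labels are not drowned out by frequent ones; the exact weighting scheme is not specified in the text, so this is an assumption:

```python
import math

def weighted_cross_entropy(probs, label, class_weights):
    """Cross-entropy on the predicted class probabilities, scaled by a
    per-class weight (the weighted term introduced into the loss)."""
    return -class_weights[label] * math.log(probs[label])
```

For example, with uniform predictions over two classes and a weight of 2.0 on class 0, the loss is exactly twice the unweighted cross-entropy.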
In an exemplary embodiment, the adjusting module 520 may specifically be configured to: adjust the number of nodes of the fully connected layer in the second neural network model corresponding to the encoder, so as to adjust the compression ratio of the encoder to the optimal compression ratio.
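One way to realize "adjusting the number of nodes of the fully connected layer" is to keep one FC head per supported compression ratio and switch between them; this sketch uses fixed illustrative weights and hypothetical names, not the patent's actual architecture:

```python
def make_fc(in_dim, out_dim):
    """Hypothetical fully connected layer with fixed illustrative weights."""
    return lambda x: [sum(x[i] * 0.1 for i in range(in_dim)) for _ in range(out_dim)]

class AdaptiveEncoder:
    """One FC head per supported compression ratio; 'adjusting the number
    of nodes' amounts to switching which head produces the codeword."""
    def __init__(self, in_dim, ratios):
        # Output width of each head = input width x compression ratio
        self.heads = {r: make_fc(in_dim, max(1, int(in_dim * r))) for r in ratios}
        self.fc = None

    def set_compression_ratio(self, r):
        self.fc = self.heads[r]

    def encode(self, features):
        return self.fc(features)
```

Switching the ratio changes only the codeword length, i.e. the number of output nodes of the final fully connected layer.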
Fig. 6 shows a data feedback apparatus provided by an embodiment of the present invention, which includes:
a second processing module 610, configured to adjust the compression rate of the encoder to an optimal compression rate by the above-mentioned method for adjusting the compression rate of the encoder;
a compression module 620, configured to compress the target data according to the optimal compression rate;
a feedback module 630, configured to feed back the compressed target data and preset identification information corresponding to the optimal compression ratio to the base station;
the base station is used for determining the optimal compression ratio according to the preset identification information and decompressing the compressed target data through the optimal compression ratio.
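The base-station-side lookup might be sketched as follows; the category-code-to-ratio table values and the ratio-keyed decoder mapping are assumptions for illustration:

```python
# Hypothetical table mapping category code -> compression ratio (values assumed)
RATIO_TABLE = {0: 1/4, 1: 1/16, 2: 1/32, 3: 1/64}

def base_station_decompress(codeword, category_code, decoders):
    """The base station reads the category code fed back with the codeword,
    looks up the matching compression ratio, and dispatches to the decoder
    trained for that ratio."""
    ratio = RATIO_TABLE[category_code]
    return decoders[ratio](codeword)
```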
In an exemplary embodiment, the target data may be downlink channel state information.
Therefore, the feedback device of the data provided by the embodiment of the invention can reduce the waste of redundant bits and resources in the compression and feedback processes of the data by adjusting the compression ratio of the encoder to the optimal compression ratio, and can better meet the requirements of practical application.
Fig. 7 illustrates a physical structure diagram of an electronic device, which may include, as shown in fig. 7: a processor (processor) 710, a communication interface (Communications Interface) 720, a memory (memory) 730 and a communication bus 740, wherein the processor 710, the communication interface 720 and the memory 730 communicate with each other via the communication bus 740. The processor 710 may call logic instructions in the memory 730 to perform a method of adjusting the compression ratio of an encoder, the method comprising: inputting target data into a first neural network model which is constructed in advance to obtain the optimal compression ratio of the target data; the first neural network model is obtained by training a neural network based on target data samples marked with optimal compression ratio labels; and adjusting the compression ratio of the encoder to the optimal compression ratio.
In addition, the logic instructions in the memory 730 can be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions, when the program instructions are executed by a computer, the computer being capable of executing the method for adjusting the compression rate of an encoder provided in the above embodiments, the method including: inputting target data into a first neural network model which is constructed in advance to obtain the optimal compression ratio of the target data; the first neural network model is obtained by training a neural network based on a target data sample marked with an optimal compression rate label; the compression ratio of the encoder is adjusted to the optimal compression ratio.
In still another aspect, the present invention also provides a non-transitory computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the method for adjusting the compression rate of an encoder provided in the above embodiments, the method including: inputting target data into a first neural network model which is constructed in advance to obtain the optimal compression ratio of the target data; the first neural network model is obtained by training a neural network based on a target data sample marked with an optimal compression rate label; the compression ratio of the encoder is adjusted to the optimal compression ratio.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for adjusting a compression ratio of an encoder, comprising:
inputting target data into a first neural network model which is constructed in advance to obtain the optimal compression ratio of the target data; the first neural network model is obtained by training a neural network based on a target data sample marked with an optimal compression rate label;
adjusting a compression rate of an encoder to the optimal compression rate.
2. The method for adjusting the compression rate of an encoder according to claim 1, wherein the inputting the target data into a first neural network model constructed in advance to obtain the optimal compression rate of the target data comprises:
analyzing the sparsity of the target data to obtain a sparsity analysis result;
and determining the optimal compression rate of the target data based on the sparsity analysis result.
3. The method of adjusting the compression rate of an encoder according to claim 2, wherein the determining the optimal compression rate of the target data based on the sparsity analysis result comprises:
based on the sparsity analysis result, taking, from among a plurality of preset theoretical compression ratios, the theoretical compression ratio whose feedback accuracy is higher than a preset accuracy threshold and whose number of feedback bits is smallest as the optimal compression ratio.
4. The method for adjusting the compression ratio of an encoder according to claim 1, wherein the training process of the first neural network model comprises:
acquiring a plurality of target data samples with different sparsity, and respectively determining an optimal compression rate label corresponding to each target data sample;
and training the pre-constructed neural network through the target data sample marked with the optimal compression rate label to obtain a first neural network model.
5. The method for adjusting the compression rate of an encoder according to claim 4, wherein the training of the pre-constructed neural network through the target data samples marked with the best compression rate labels comprises:
introducing a weighted term into the loss function of the neural network to obtain a weighted loss function;
and training the neural network through a target data sample marked with an optimal compression rate label based on the weighted loss function.
6. The method for adjusting the compression rate of an encoder according to claim 1, wherein the adjusting the compression rate of the encoder to the optimal compression rate comprises:
adjusting the number of nodes of a fully connected layer in a second neural network model corresponding to the encoder, so as to adjust the compression ratio of the encoder to the optimal compression ratio.
7. A method for feeding back data, comprising:
adjusting the compression ratio of an encoder to the optimal compression ratio according to the method of adjusting the compression ratio of an encoder according to any one of claims 1 to 6;
compressing the target data according to the optimal compression rate;
feeding back the compressed target data and preset identification information corresponding to the optimal compression ratio to a base station;
the base station is configured to determine the optimal compression ratio according to the preset identification information, and decompress the compressed target data by the optimal compression ratio.
8. The method for feeding back data of claim 7, wherein the target data is downlink channel state information.
9. An apparatus for adjusting a compression ratio of an encoder, comprising:
the first processing module is used for inputting target data into a first neural network model which is constructed in advance to obtain the optimal compression ratio of the target data; the first neural network model is obtained by training a neural network based on a target data sample marked with an optimal compression rate label;
and the adjusting module is used for adjusting the compression rate of the encoder to the optimal compression rate.
10. An apparatus for feeding back data, comprising:
a second processing module, for adjusting the compression rate of the encoder to the optimal compression rate according to the method for adjusting the compression rate of the encoder according to any one of claims 1 to 6;
the compression module is used for compressing the target data according to the optimal compression rate;
the feedback module is used for feeding back the compressed target data and preset identification information corresponding to the optimal compression ratio to a base station;
the base station is configured to determine the optimal compression ratio according to the preset identification information, and decompress the compressed target data according to the optimal compression ratio.
CN202210369304.XA 2022-04-08 2022-04-08 Method for adjusting compression ratio of encoder and method and device for feeding back data Pending CN114692774A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210369304.XA CN114692774A (en) 2022-04-08 2022-04-08 Method for adjusting compression ratio of encoder and method and device for feeding back data


Publications (1)

Publication Number Publication Date
CN114692774A true CN114692774A (en) 2022-07-01

Family

ID=82143449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210369304.XA Pending CN114692774A (en) 2022-04-08 2022-04-08 Method for adjusting compression ratio of encoder and method and device for feeding back data

Country Status (1)

Country Link
CN (1) CN114692774A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116760935A (en) * 2023-08-21 2023-09-15 湖南马栏山视频先进技术研究院有限公司 Data fence system based on image data transmission
CN116760935B (en) * 2023-08-21 2023-10-17 湖南马栏山视频先进技术研究院有限公司 Data fence system based on image data transmission


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination