CN115660068A - Deep neural network model quantization method oriented to rapid remote sensing image processing - Google Patents
- Publication number
- CN115660068A (application CN202211301827.7A)
- Authority
- CN
- China
- Prior art keywords
- quantization
- bit width
- layer
- convolution
- remote sensing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a deep neural network model quantization method for rapid remote sensing image processing, comprising the following steps. Step one: based on sampled remote sensing image data, perform an optimization solution of the input feature clipping boundary under the error minimization principle to obtain the clipping threshold of the input features of each layer of the network model. Step two: based on the sampled remote sensing image data, construct an optimization equation for the bit width variables of each convolutional layer of the pre-trained network model weights and of each convolution kernel within a layer, and solve it by a hierarchical search method to obtain the quantization bit width configuration of the model weights. Step three: perform network forward inference on the input image according to the clipping thresholds and quantization bit widths obtained in steps one and two; the input features are quantized in real time during inference, while the model weights are quantized in a preprocessing stage before forward inference.
Description
Technical Field
The invention belongs to the field of deep learning and signal processing, and particularly relates to a deep neural network model quantization method for rapid remote sensing image processing.
Background
In recent years, deep learning has become one of the most active topics in modern information technology; its traces can be found in almost every field, including intelligent machine translation, medical diagnosis, financial analysis, and autonomous driving, so it carries high value both for scientific research and for practical application. The "depth" of deep learning mainly refers to the number of layers of the model. As network complexity keeps rising, the complexity between layers grows geometrically, and correspondingly the demand on processor computing capability grows explosively. Deep learning thus brings a sharp increase in computation and higher requirements for computing hardware.
For the requirement of rapid remote sensing image processing, the resolution is high and the images are large, while the large number of high-precision floating-point parameters in a highly refined and complex deep neural network limits the network's deployability on computing platforms with constrained resources. Therefore, to meet the requirement of rapidly processing remote sensing images, quantization compression of deep neural network models has important research significance.
At present, the more mature quantization scheme is post-training 8-bit quantization, which achieves accuracy nearly identical to the 32-bit floating-point model. However, 8-bit quantization still places high demands on the storage and compute performance of mobile devices, so a pre-trained model cannot be well deployed in some practical applications. Meanwhile, quantization methods below 8 bits are currently neither mature nor widely applied in industry. The quantization process of a network model has two key points: clipping of the boundary threshold and optimal bit width allocation. In view of the current research trend in low-bit quantization methods, the invention provides an innovative solution to these two key problems based on mean-square-error optimization theory.
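For context, the post-training 8-bit scheme mentioned here reduces to mapping floating-point values onto a signed integer grid and back. A minimal symmetric round-trip sketch (illustrative only; names and the clamping scheme are assumptions, not the invention's method):

```python
def quantize_symmetric(values, bits=8):
    """Uniform symmetric quantization of a float list to signed integers."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8-bit
    scale = max(abs(v) for v in values) / qmax or 1.0
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover floating-point approximations from the integer codes."""
    return [v * scale for v in q]

weights = [0.5, -1.2, 0.03, 0.87, -0.44]
q, s = quantize_symmetric(weights, bits=8)
restored = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The round-trip error stays within one quantization step, which is why 8 bits usually preserves accuracy while lower bit widths need the careful threshold and bit-width choices the invention addresses.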
Disclosure of Invention
The invention aims to provide a deep neural network model quantization method for rapid remote sensing image processing. The method is a post-training quantization scheme: it quantizes the input image or features of each layer and the model weights for the forward propagation pass of the network, requires no training or backpropagation on a source domain data set, and performs optimal estimation using only a small number of samples drawn from a test data set.
The invention provides a deep neural network model quantization method for rapid remote sensing image processing, which comprises the following implementation steps:
step one: based on a sampled remote sensing image sample set, performing an optimization solution of the input feature clipping boundary under the error minimization principle to obtain the clipping threshold of the input features of each layer of the network model;
step two: based on the sampled remote sensing image sample set, constructing an optimization equation for the bit width variables of each convolutional layer of the pre-trained network model weights and of each convolution kernel within a layer, and solving it by a hierarchical search method to obtain the quantization bit width configuration of the model weights;
step three: performing network forward inference on the input image according to the clipping thresholds and quantization bit widths obtained in steps one and two, wherein the input features are quantized in real time during inference and the model weights are quantized before forward inference;
In step one, "based on a sampled remote sensing image sample set, the input feature clipping boundary is optimized under the error minimization principle to obtain the clipping threshold of the input features of each layer of the network model" proceeds as follows:
S11, performing Laplace distribution fitting on the image data in the sample set obtained by sampling to obtain the shape parameters β = {β_1, β_2, …, β_L} of all L convolutional layers;
S12, constructing an analytical equation for the expected mean square quantization error from the parameters β calculated in step S11, and performing an optimization solution by the error minimization method to obtain the preliminary optimized clipping value p_t;
S13, based on p_t calculated in step S12, constructing an optimization equation that minimizes the output feature vector distance, and searching the neighborhood of p_t with a set step size to obtain the optimal clipping value p* of the input features;
S14, keeping the weight parameters at floating-point precision, executing steps S12 and S13 starting from the first convolutional layer of the model to obtain the optimal clipping threshold of the current layer, and proceeding layer by layer until the clipping thresholds of the input features of all convolutional layers have been calculated;
In step two, "constructing an optimization equation for the bit width variables of each convolutional layer of the pre-trained network model weights and of each convolution kernel within a layer, and solving it by a hierarchical search method to obtain the quantization bit width configuration of the model weights" proceeds as follows:
S21, assigning a quantization bit width variable to each convolutional layer according to the selected network structure and the sampled sample set, constructing a multi-objective optimization equation according to the pareto optimization method, and obtaining the optimal bit width assignment K = {k_1, k_2, …, k_L} of the convolutional layers by a search method;
S22, according to the k_i calculated in S21, judging whether the number of convolution kernels currently requiring bit width assignment exceeds a grouping threshold; if so, grouping the convolution kernels by the k-means clustering method, assigning a bit width variable to each group, establishing a pareto multi-objective optimization equation, and solving for the current optimal group bit width variables {k_group_1, k_group_2, …};
S23, executing step S22 on the current convolutional layer until the number of convolution kernels requiring bit width assignment is smaller than the grouping threshold, then treating each remaining convolution kernel as an independent group and performing a final optimization solution with step S22 to obtain the optimal bit width assignment of every convolution kernel in the current convolutional layer;
S24, repeating steps S22 and S23 for all convolutional layers in the network model until every convolution kernel in the model has been assigned an individual optimal bit width, completing the quantization of the model weights.
In step three, "performing network forward inference on the input image, wherein the input features are quantized in real time during inference and the model weights are quantized before forward inference" proceeds as follows:
S31, collecting a certain number of sample images from the test data source, and training the selected network model on the source data set to obtain a model weight file at FP32 precision;
S32, calculating the clipping threshold of the input features of each network layer using steps S11-S14;
s33, performing mixed precision quantization on all convolution kernels in the network by using the steps S21-S24;
S34, for the input image, quantizing the input features of each network layer in real time and convolving them with the quantized convolution weights obtained in step S33, until the output result of the network is computed.
Through the above steps, the optimal low-bit quantization configuration is computed for the network input features and the model weights based on the optimization method, achieving effective compression of the model parameter count and computation load.
According to the design of the invention, the deep neural network model quantization method for rapid remote sensing image processing has low algorithmic implementation complexity and low accuracy loss after quantization, remaining close to the 32-bit floating-point model; it is therefore particularly suitable for rapid remote sensing image processing tasks deployed on low-power embedded mobile platforms.
According to the design of the invention, the method achieves accuracy equivalent to mainstream quantization tools at 8-bit quantization, and accuracy close to 8-bit quantization when quantizing below 8 bits; it effectively reduces the parameter count and computation load of the model and greatly lowers the difficulty of deployment on mobile platforms.
Drawings
Fig. 1 is a general step flow diagram.
FIG. 2 is a logic flow diagram of the optimized convolution kernel bit width allocation method.
Detailed Description
So that the manner in which the features, objects, and functions of the invention are attained and can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments thereof which are illustrated in the appended drawings.
As shown in fig. 1, which is a flowchart of the general steps of the present invention, first, pre-training a model in a source data set to obtain a model weight with FP32 precision; then data sampling is carried out to generate a sample set, and the optimal cutting threshold value input by each layer of the model is calculated; then, carrying out low-order quantization on the FP32 precision model according to the set average bit width; and finally, carrying out forward reasoning on the network model by using the test image to obtain an output result. The invention provides a deep neural network model quantization method based on optimized clipping and bit width estimation. The specific implementation steps are as follows:
the first step is as follows: and based on a sample image set obtained by sampling, carrying out optimization solution on the input feature clipping boundary based on an error minimization principle to obtain the clipping threshold value of each layer of input features of the network model.
First, all image data in the sample set are normalized and fed into the FP32-precision pre-trained network; the input feature map values of every convolutional layer are collected, and Laplace distribution fitting is performed to obtain the shape parameters β = {β_1, β_2, …, β_L}. The Laplace probability density function is expressed as follows:

f(x) = (1/(2β)) · exp(−|x − μ|/β)

where μ is the location parameter and β is the shape parameter.
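The shape parameter of a Laplace fit has a closed-form maximum likelihood estimate: the mean absolute deviation around the median. The fitting step can therefore be sketched as follows (illustrative; the patent does not specify its exact fitting procedure):

```python
def fit_laplace(samples):
    """MLE fit of a Laplace distribution: mu = median, beta = mean |x - mu|."""
    xs = sorted(samples)
    n = len(xs)
    # Median: middle element for odd n, midpoint of the two middle ones for even n.
    mu = xs[n // 2] if n % 2 else 0.5 * (xs[n // 2 - 1] + xs[n // 2])
    beta = sum(abs(x - mu) for x in xs) / n
    return mu, beta

mu, beta = fit_laplace([-2.0, -1.0, 0.0, 1.0, 2.0])
```

Applied per layer to the collected feature map values, this yields the β_1, …, β_L used by the clipping analysis.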
the quantization error resulting from input clipping is next modeled. For a clipping threshold [ -p, p ], the expectation of the quantized mean square error can be expressed as a function of the threshold p, as follows:
the first two terms in the above equation represent errors introduced by clipping, and the third term represents errors introduced by quantization. Wherein Represents a quantization step size, M represents a quantization bit width,representing the floating point number after inverse quantization of the reshaped quantized value. Based on the integral inverse derivative mathematical rule, the above equation can be simplified to the following form:
it can be seen that the expectation of the quantization error is a function with respect to the clipping threshold p, and that this function has a global minimum. By taking the derivative of the above equation with respect to p and making the derivative zero, a clipping threshold can be found that minimizes the quantization error, as follows:
the theoretical optimal value p can be obtained by solving the intersection point of the two curves by using a mathematical tool t The numerical solution of (c). Since there is some error in the fitting of the laplacian distribution to the data, it is necessary to target p t Building minimization networksThe optimization equation of the output error of the collateral feature is further optimized as follows:
in the above formula,. Epsilon p Is p t The invention takes the value of the fixed-length neighborhood as the quantization step size epsilon p =Δ;Δ ε Representing neighborhood search precision;respectively representing fixed-length feature vectors obtained by forward reasoning the quantized input and the unquantized FP32 precision input by using a network model with FP32 precision;representing the euclidean distance between two feature vectors. By the above formula at p t The fixed neighborhood of the target object is traversed and searched with a certain precision to obtain a cutting threshold value p which enables the distance between the quantized output feature and the FP32 precision output feature to be minimum * . Finally obtaining the optimized clipping threshold value of the input characteristics of all the network layers by executing the same steps on all the network layers
The second step is that: and constructing an optimization equation for bit width variables of each convolution layer of the pre-trained network model weight and each convolution kernel in the convolution layer based on a sample image set obtained by sampling, and solving based on a hierarchical search method to obtain the quantized bit width configuration of the model weight.
FIG. 2 is a logic flow diagram of the optimized convolution kernel bit width allocation method. First, layer-by-layer bit width optimization is performed on the model weights: a bit width variable is assigned to each convolutional layer, and a multi-objective optimization function is established to solve for the pareto optimal solution set K = {k_1, k_2, …, k_L}. The multi-objective optimization function constructed by the invention is as follows:

K = argmin over {k_1, …, k_L} of Σ_i d(F_ori(x_i), F_qnt(x_i))
s.t. (1/L) · Σ_j k_j = M
Params(K) ≤ p_target
Here, F_ori(x_i) and F_qnt(x_i) are the feature vectors obtained by network forward inference on the FP32-precision input x_i using, respectively, the FP32-precision pre-trained model weights and the weights quantized under the current layer-by-layer bit width configuration; d(F_ori(x_i), F_qnt(x_i)) is the Euclidean distance between the two vectors; K = {k_1, k_2, …, k_L} is the optimal solution found by search after traversing all images of the sampled data set. The second equation expresses the average bit width constraint, i.e., the bit widths obtained after optimizing all convolutional layers must average to M. The third equation expresses the parameter constraint, i.e., the total parameter size of the quantized model must not exceed the set threshold p_target.
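A toy version of the layer-wise search illustrates the constraint structure. Exhaustive enumeration is only viable at the small sizes used here, and the cost function is a stand-in for the summed feature distances over the sample set (the sensitivity values and all names are assumptions for illustration):

```python
from itertools import product

def search_bitwidths(layers, candidates, mean_bits, cost):
    """Exhaustively search per-layer bit widths whose average equals
    mean_bits, minimizing a caller-supplied cost function."""
    best, best_c = None, float("inf")
    for assign in product(candidates, repeat=layers):
        if sum(assign) / layers != mean_bits:   # average-bit-width constraint
            continue
        c = cost(assign)
        if c < best_c:
            best, best_c = assign, c
    return best

# Toy cost: pretend earlier layers are more sensitive to low bit widths.
sens = [4.0, 2.0, 1.0]
k = search_bitwidths(3, [2, 4, 6, 8], 4,
                     cost=lambda a: sum(s / b for s, b in zip(sens, a)))
```

The search gives the sensitive first layer the widest bit width while keeping the average at the target, which is the qualitative behavior the pareto formulation is meant to produce; the actual method replaces enumeration with the hierarchical search described above.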
As shown in fig. 2, after the optimal layer-by-layer bit width assignment of the model is obtained by traversal search, a cluster-grouping-based per-channel optimal bit width calculation is performed within each convolutional layer. First, Laplace distribution fitting is performed on each convolution kernel in the layer to obtain its shape parameter; the shape parameters serve as the input data of a k-means clustering function, which clusters the convolution kernels into several groups, expressed as follows:

G = {G_1, G_2, …, G_n}
The following multi-objective optimization equation is then constructed:

{k_group_1, …, k_group_n} = argmin of Σ_i d(F_ori(x_i), F_qnt(x_i))
s.t. (1/n) · Σ_j k_group_j = k_layer_i
Params ≤ P_layer_max

Here, F_ori(x_i) and F_qnt(x_i) are the feature vectors obtained by network forward inference on the FP32-precision input using, respectively, the FP32-precision pre-trained model weights and the weights quantized under the current group bit width configuration; d(·,·) is the Euclidean distance between the two vectors; G denotes the optimal solution found by search after traversing all images of the sampled data set. The second equation expresses the current group bit width constraint, i.e., the bit widths obtained after optimizing all groups must average to the bit width k_layer_i of the current layer obtained in the preceding layer-by-layer optimization. The third equation expresses the parameter constraint, i.e., the parameters of the quantized model must not exceed the parameter maximum P_layer_max of the current layer.
As shown in fig. 2, for each group obtained by clustering, an optimization solution based on the above formula first yields the bit width configuration of each group; then, whether the number of convolution kernels in the group exceeds the set threshold is checked. If not, every convolution kernel in the group is treated as an individual group, and a final round of optimization with the above formula yields the optimized bit width of each convolution kernel in the group; if the threshold is exceeded, the next round of cluster grouping and optimization continues until the kernel count of every group falls below the threshold, after which the final optimization solution is performed and the loop ends. Based on these steps, the optimal bit width of every convolution kernel in the model is obtained, and the mixed-precision quantization of the model is complete.
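The clustering step operates on scalar shape parameters, so a one-dimensional k-means suffices. A plain Lloyd's-algorithm sketch (illustrative; initialization and iteration count are assumptions, and k ≥ 2 is assumed):

```python
def kmeans_1d(values, k, iters=100):
    """Cluster scalar shape parameters into k groups (Lloyd's algorithm)."""
    vs = sorted(values)
    # Spread the initial centers across the sorted range (requires k >= 2).
    centers = [vs[i * (len(vs) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vs:
            # Assign each value to its nearest center.
            j = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[j].append(v)
        # Recompute centers; keep the old center for an empty group.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

# Two visibly separated bands of kernel shape parameters:
betas = [0.11, 0.10, 0.12, 0.95, 1.05, 1.00]
groups = kmeans_1d(betas, 2)
```

Kernels whose feature distributions have similar shape parameters land in the same group and can then share one bit width variable, which is what keeps the group-wise pareto search tractable.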
Step three: performing network forward inference on the input image according to the clipping thresholds and quantization bit widths obtained in steps one and two, wherein the input features are quantized in real time during inference and the model weights are quantized before forward inference.
The input quantization parameters of each layer of the network model and the quantization configuration of the model itself have now been calculated, so this step explains how the calculated quantization parameters are applied during forward inference. For a given input image, the image is first quantized in real time using the first-layer input quantization threshold obtained in step one and the assigned quantization bit width, and the quantized result is convolved with the quantized first-layer network weights using integer arithmetic. The values output by the first layer are then quantized with the second-layer input quantization threshold from step one and the assigned bit width, and the result is convolved with the quantized second-layer weights, again in integer arithmetic. The output features of the second layer are passed in the same way to the third layer, the fourth layer, and so on, until all network layers have been computed; finally, the integer output result is dequantized to obtain the final network output. The above process can be expressed by the following formula:
y_n = h(S · (W^Int * x^Int + b^Int))

where y_n denotes the output of the n-th layer of the network; x^Int denotes the input of the n-th layer; b^Int denotes the quantized bias parameters of the n-th convolutional layer; W^Int denotes the quantized weight parameters of the n-th convolutional layer; * denotes the convolution multiply-accumulate operation; S denotes the scale factor of the quantization; and h(·) denotes the activation function of the model. This completes the description of the quantization and inference method of the invention.
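The quantized forward pass of one layer can be sketched with a dot product standing in for the convolution (a minimal sketch with symmetric quantization; the helper names and the ReLU choice are illustrative assumptions):

```python
def quant(vals, bits):
    """Symmetric quantization: returns integer codes and the scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in vals) / qmax    # assumes a nonzero input
    return [round(v / scale) for v in vals], scale

def int_layer(x, w, b, bits=8):
    """One quantized layer: integer multiply-accumulate, rescale, ReLU,
    mirroring y = h(S * (W_int . x_int + b_int))."""
    x_q, sx = quant(x, bits)
    w_q, sw = quant(w, bits)
    s = sx * sw                     # combined scale factor of the accumulator
    b_q = round(b / s)              # bias quantized to the accumulator scale
    acc = sum(xi * wi for xi, wi in zip(x_q, w_q)) + b_q
    return max(0.0, s * acc)        # dequantize, then ReLU activation

x = [0.5, -0.25, 0.8]
w = [0.2, 0.4, 0.3]
float_out = max(0.0, sum(a * b for a, b in zip(x, w)) + 0.05)
int_out = int_layer(x, w, 0.05)
```

All multiply-accumulate work happens on integers; only the single rescale by S touches floating point, which is the source of the speed and memory savings on embedded targets.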
Although the present invention has been described with reference to the above embodiments, it should be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (4)
1. A deep neural network model quantization method oriented to rapid remote sensing image processing is characterized in that: the method comprises the following steps:
step one: based on a sampled remote sensing image sample set, performing an optimization solution of the input feature clipping boundary under the error minimization principle to obtain the clipping threshold of the input features of each layer of the network model;
step two: based on the sampled remote sensing image sample set, constructing an optimization equation for the bit width variables of each convolutional layer of the pre-trained network model weights and of each convolution kernel within a layer, and solving it by a hierarchical search method to obtain the quantization bit width configuration of the model weights;
step three: performing network forward inference on the input image according to the clipping thresholds and quantization bit widths obtained in steps one and two, wherein the input features are quantized in real time during inference and the model weights are quantized before forward inference.
2. The deep neural network model quantization method for rapid remote sensing image processing according to claim 1, characterized in that: the first step is specifically as follows:
S11, performing Laplace distribution fitting on the image data in the sample set obtained by sampling to obtain the shape parameters β = {β_1, β_2, …, β_L} of all L convolutional layers;
S12, constructing an analytical equation for the expected mean square quantization error from the parameters β calculated in step S11, and performing an optimization solution by the error minimization method to obtain the preliminary optimized clipping value p_t;
S13, based on p_t calculated in step S12, constructing an optimization equation that minimizes the output feature vector distance, and searching the neighborhood of p_t with a set step size to obtain the optimal clipping value p* of the input features;
S14, keeping the weight parameters at floating-point precision, executing steps S12 and S13 starting from the first convolutional layer of the model to obtain the optimal clipping threshold of the current layer, and proceeding layer by layer until the clipping thresholds of the input features of all convolutional layers have been calculated.
3. The deep neural network model quantization method for rapid remote sensing image processing according to claim 1, characterized in that: the second step comprises the following specific processes:
S21, assigning a quantization bit width variable to each convolutional layer according to the selected network structure and the sampled sample set, constructing a multi-objective optimization equation according to the pareto optimization method, and obtaining the optimal bit width assignment K = {k_1, k_2, …, k_L} of the convolutional layers by a search method;
S22, according to the k_i calculated in S21, judging whether the number of convolution kernels currently requiring bit width assignment exceeds a grouping threshold; if so, grouping the convolution kernels by the k-means clustering method, assigning a bit width variable to each group, establishing a pareto multi-objective optimization equation, and solving for the current optimal group bit width variables {k_group_1, k_group_2, …};
S23, executing step S22 on the current convolutional layer until the number of convolution kernels requiring bit width assignment is smaller than the grouping threshold, then treating each remaining convolution kernel as an independent group and performing a final optimization solution with step S22 to obtain the optimal bit width assignment of every convolution kernel in the current convolutional layer;
S24, repeating steps S22 and S23 for all convolutional layers in the network model until every convolution kernel in the model has been assigned an individual optimal bit width, completing the quantization of the model weights.
4. The deep neural network model quantization method for rapid remote sensing image processing according to claim 1, characterized in that: the third step comprises the following specific processes:
S31, collecting a certain number of sample images from the test data source, and training the selected network model on the source data set to obtain a model weight file at FP32 precision;
S32, calculating the clipping threshold of the input features of each network layer using steps S11-S14;
s33, performing mixed precision quantization on all convolution kernels in the network by using the steps S21-S24;
S34, for the input image, quantizing the input features of each network layer in real time and convolving them with the quantized convolution weights obtained in step S33, until the output result of the network is computed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211301827.7A CN115660068A (en) | 2022-10-24 | 2022-10-24 | Deep neural network model quantization method oriented to rapid remote sensing image processing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115660068A true CN115660068A (en) | 2023-01-31 |
Family
ID=84991456
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211301827.7A Pending CN115660068A (en) | 2022-10-24 | 2022-10-24 | Deep neural network model quantization method oriented to rapid remote sensing image processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115660068A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||