CN114584150A - Sensor network signal completion method realized by utilizing deep neural network - Google Patents

Sensor network signal completion method realized by utilizing deep neural network

Info

Publication number
CN114584150A
CN114584150A (application CN202210211863.8A)
Authority
CN
China
Prior art keywords
signal
quantization
network
graph
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210211863.8A
Other languages
Chinese (zh)
Other versions
CN114584150B (en)
Inventor
李沛 (Li Pei)
王保云 (Wang Baoyun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202210211863.8A priority Critical patent/CN114584150B/en
Publication of CN114584150A publication Critical patent/CN114584150A/en
Application granted granted Critical
Publication of CN114584150B publication Critical patent/CN114584150B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00 Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30 Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention relates to a sensor network signal completion method realized by using a deep neural network, which comprises the following steps. Step 1: input the signals of the observation nodes into parallel scalar quantizers to obtain the discrete output of the quantized observation-node signals and formulate the problem. Step 2: design the interpolation module of the graph signal. Step 3: design the graph signal quantization rule module, which quantizes with a soft-hard quantizer. Step 4: jointly train the interpolation module obtained in step 2 and the graph signal quantization rule module obtained in step 3. The invention uses parallel scalar quantizers, designs each quantizer separately, and uses a neural network to jointly design the quantization bit allocation and the interpolation operator, thereby improving completion performance.

Description

Sensor network signal completion method realized by utilizing deep neural network
Technical Field
The invention relates to a signal completion method, in particular to a sensor network signal completion method realized by utilizing a deep neural network.
Background
In sensor signal collection, part of the information on the sensors is often corrupted to such an extent that it cannot be accurately obtained. In this case, the unknown information at the target sensor is inferred from the information available at other sensors and the underlying relationships between different nodes. The sensor signals are then modeled as a graph signal, and sensor network signal completion is realized by graph signal recovery.
Image denoising, signal inpainting, robust principal component analysis and anomaly detection are closely related to graph signal recovery. Image denoising recovers an image from noisy observations; standard techniques include Gaussian smoothing, Wiener local empirical filtering, and wavelet thresholding. Signal inpainting reconstructs lost or corrupted signal portions, including images and video; standard techniques include total-variation-based methods, image-model-based methods and sparse representations. Compressive sensing acquires and reconstructs signals from only a limited number of measurements; it assumes the signal is sparse and finds a solution to the underdetermined linear system by subspace transformation techniques, and its extensions include noisy versions and distributed algorithms over graphs. Matrix completion recovers an entire matrix from a subset of its entries by assuming the observation matrix is low rank. Robust principal component analysis recovers a low-rank matrix from corrupted measurements; it divides an image into two parts, a smooth background and a sparse foreground, and is robust to severely corrupted entries compared with principal component analysis.
Interpolation methods for graph signals mainly follow either a spectrum-based approach or a signal-smoothness-based approach. When the predefined graph signal is bandlimited, one considers recovering one or more smooth graph signals from noisy, corrupted or incomplete measurements. Graph signal recovery can be described as an optimization problem, and a general solution is given by the alternating direction method of multipliers (ADMM).
A recently developed framework for discrete signal processing on graphs formulates graph signal denoising as an optimization problem, deriving for the first time an exact closed-form solution, represented by an inverse graph filter, and an approximate iterative solution, represented by a standard graph filter. For signals defined on edges, the basic concept of a "smooth signal" inherited from the graph Laplacian (and its variants) no longer applies. To overcome this limitation, a class of filters based on the edge Laplacian, a special case of the first-order simplicial-complex Hodge Laplacian, was introduced, and it was shown how this edge Laplacian yields a low-pass filter that enforces (approximate) flow conservation in the processed signal.
Recovering a smooth graph signal from noisy samples observed on a small number of nodes can be based on the graph-Laplacian quadratic form: with this regularization, the signal recovery problem is expressed as a convex optimization problem. Its optimality conditions form a system of linear equations involving the graph Laplacian, which can be solved by an iterative Gauss-Seidel method. Reconstruction performance is inevitably affected by several error sources, including observation noise and quantization error; strategies that jointly optimize transmission power, quantization bits and the sampling set have been proposed so that graph signals can be interpolated with guaranteed performance.
In existing research, the observation values in the graph signal recovery process are by default continuous-amplitude signals, which is usually difficult to realize in practice, and existing sensor signal interpolation operators also have the following disadvantages:
1. the interpolation operator is driven by signal-model assumptions, so its performance degrades sharply when the signal to be processed does not match the preset signal model;
2. discrete-amplitude storage of the digitized signal is not considered. Signal storage is limited by hardware devices, which causes quantization errors, and the superposition of quantization errors and background noise further reduces interpolation accuracy.
Setting the observation signal as a quantized signal and jointly designing the quantization and recovery processes is therefore innovative and meaningful work.
Disclosure of Invention
In order to solve the problem of recovering sensor network signals in the prior art, the invention stores signals discretely through parallel scalar quantizers of different specifications and performs signal interpolation through a deep-neural-network training method.
In order to achieve this purpose, the invention is realized by the following technical scheme:
The invention relates to a sensor network signal completion method realized by using a deep neural network, which comprises the following steps:
Step 1: input the signals of the observation nodes into parallel scalar quantizers to obtain the discrete output of the quantized observation-node signals, and formulate the problem, which specifically comprises the following steps:
Step 1-1: define the observation signal as y = Ψx + n, where Ψ is the observation matrix corresponding to the node set S, and n is the additive Gaussian noise inevitable in the observation process;
Step 1-2: recover the signal on the complement set S^c from information on the partial node set S; when the signal on S is a continuous-amplitude signal, establish the following optimization problem:
x̂ = argmin_x E(x) + λ·S(x),
where E(x) characterizes the difference between the observed value and the true value (y being the actual observed signal; when the observation contains no noise, E(x) = 0), S(x) characterizes a smoothing term, and λ is a hyper-parameter: a larger λ corresponds to a smoother graph signal;
Step 1-3: retain the difference term E(x); the quantized version of the observation signal Ψx is Q(Ψx) = ΨQ(x);
Step 1-4: substitute the quantized form of the signal into the problem:
x̂ = argmin_x E(ΨQ(x)) + λ·S(x).
Step 2: design the interpolation module of the graph signal; the specific design process comprises the following steps:
Step 2-1: establish the graph signal interpolation problem:
x̂ = argmin_x (1/2)·||y − Ψx||_2^2 + λ·R(x),
where R(·) denotes a denoiser for the graph signal, and λ is a regularization parameter expressing the weight of the denoiser in the objective function;
Step 2-2: design a trainable recurrent denoiser R(x) based on deep algorithm unrolling (DAU), and write the interpolation problem of step 2-1 in the simplified form
x̂ = argmin_x f(x) + λ·R(x), with f(x) = (1/2)·||y − Ψx||_2^2;
Step 2-3: use a denoiser based on quadratic optimization, defined as
R(x) = argmin_x′ (1/2)·||x − x′||_2^2 + γ·||v||_1, s.t. v = Mx′,
where M is the graph difference operator built from the neighbor sets, with one row for each pair (i, j), j ∈ N_i, and N_i denotes the set of neighbors of node i;
Step 2-4: iterate the denoiser of step 2-3 using an iterative algorithm based on ADMM unrolling:
x^(l+1) ← A_a s^(l) + B_a t^(l) + C_a (P^T y^(l)),
s^(l+1) ← D_a x^(l) + E_a (Q^T y^(l)),
y^(l+1) ← NN_u(P x^(l+1)),
z^(l+1) ← NN_r(Q s^(l+1)),
where A_a, B_a, C_a, D_a, E_a are trainable graph convolutions whose filter coefficients are trainable parameters (the subscript a denotes trainable), and NN_u(·) and NN_r(·) are two fully connected neural networks containing trainable parameters.
Step 3: the graph signal quantization rule module quantizes with a soft-hard quantizer; specifically, the scalar quantization mapping is obtained using a sum of shifted hyperbolic tangents, given by the following formula:
q(x) = Σ_i a_i · tanh(c_i·x − b_i),
where {a_i, b_i, c_i} is a set of real-valued parameters; as the parameters {c_i} increase, the corresponding hyperbolic tangents approach step functions.
Step 4: jointly train the interpolation module obtained in step 2 and the graph signal quantization rule module obtained in step 3, specifically comprising the following steps:
Step 4-1: establish a greedy-based quantization bit allocation algorithm;
Step 4-2: characterize the quantization distortion in the statistical-average sense and apply the ADMM unrolling network in the interpolation module of the graph signal;
Step 4-3: select the quantizer to be updated according to the current quantization state;
Step 4-4: compare again after training is completed.
The invention has the following beneficial effects: compared with schemes that use an identical quantization bit allocation, the method adopts different quantization bits for different quantizers (more quantization bits for signals with a large amplitude range, fewer for signals with a small amplitude range), thereby improving the utilization of quantization resources;
compared with traditional signal-model-based methods, the method can learn the latent characteristics of the signal from the data more accurately to complete the signal;
the greedy-based deep unrolled network processes only one quantization bit allocation in each layer iteration, which improves the training speed; moreover, when the signal model is unchanged and only the total number of quantization bits increases, new layers can be added on top of the trained network, improving the reusability of the neural network.
Drawings
FIG. 1 is a block diagram of the graded gradient method of the present invention.
Fig. 2 is a block diagram of soft-hard quantization of the present invention.
FIG. 3 is a schematic diagram of the greedy algorithm of the present invention.
FIG. 4 is a schematic diagram of step 4 joint training of the present invention.
Detailed Description
In the following description, for purposes of explanation, numerous implementation details are set forth in order to provide a thorough understanding of the embodiments of the invention. It should be understood, however, that these implementation details are not to be interpreted as limiting the invention. That is, in some embodiments of the invention, such implementation details are not necessary.
The invention relates to a sensor network signal completion method realized by using a deep neural network, which comprises the following steps:
Step 1: input the signals of the observation nodes into parallel scalar quantizers to obtain the discrete output of the quantized observation-node signals, and formulate the problem.
A graph signal arising in a real dataset is generally smooth: when node i is connected to node j, their signal values x_i and x_j are similar, i.e., x_i ≈ x_j for (i, j) ∈ E. The smoothness of a graph signal is usually measured by its total variation, given by the quadratic form
S(x) = x^T L x = Σ_{(i,j)∈E} w_ij·(x_i − x_j)^2,
where L = D − W is the graph Laplacian matrix, D the degree matrix and W the weight matrix.
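As a minimal illustration of this smoothness measure (the graph, weights and signal below are hypothetical, not taken from the patent), the quadratic form can be computed directly:

```python
import numpy as np

# Hypothetical 4-node ring graph: symmetric weight matrix W.
W = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
D = np.diag(W.sum(axis=1))           # degree matrix
L = D - W                            # graph Laplacian L = D - W

x = np.array([1.0, 1.1, 0.9, 1.0])   # a smooth graph signal
S_x = x @ L @ x                      # total variation S(x) = x^T L x
print(S_x)                           # small: connected nodes carry similar values
```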
In the signal observation process, only information on part of the node set can be acquired. In this case, the observation signal is defined as y = Ψx + n, where Ψ is the observation matrix corresponding to the node set S, and n is the additive Gaussian noise inevitable in the observation process.
The purpose is to recover the signal on the complement set S^c from information on the partial node set S. When the signal on S is a continuous-amplitude signal, the following optimization problem is established:
x̂ = argmin_x E(x) + λ·S(x),
where E(x) characterizes the difference between the observed value and the true value (y being the actual observed signal; when the observation contains no noise, E(x) = 0), S(x) characterizes a smoothing term, and λ is a hyper-parameter: a larger λ corresponds to a smoother graph signal.
In the observation process, the influence of additive noise is unavoidable, so the difference term E(x) must be retained. On the other hand, digital storage of the observation requires converting the continuous-amplitude signal into discrete form. The quantized form of the observation signal Ψx is Q(Ψx) = ΨQ(x): since Ψ acts as a selection matrix, it does not change the quantization levels. Substituting the quantized form of the signal yields the problem
x̂ = argmin_x E(ΨQ(x)) + λ·S(x).
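The sampling-plus-quantization model of steps 1-1 to 1-4 can be sketched as follows; the node count, the observed set and the uniform stand-in quantizer are assumptions for illustration only, since the patent's quantizer is learned later in step 3:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6                               # number of sensor nodes (hypothetical)
S = [0, 2, 5]                       # observed node set S (hypothetical)
Psi = np.eye(N)[S]                  # observation/selection matrix

def Q(v, step=0.25):
    # Toy uniform scalar quantizer standing in for the learned soft-hard one.
    return step * np.round(v / step)

x = rng.normal(size=N)                     # true graph signal
n = 0.01 * rng.normal(size=len(S))         # additive Gaussian noise
y = Q(Psi @ x) + n                         # Q(Psi x) = Psi Q(x): Psi only selects
```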
Because the potentially nonlinear representation of the signal must be accounted for, the invention does not directly use explicit definitions of E(x) and S(x); instead, they are kept in indeterminate functional form and are then recovered by the ADMM-unrolling-based joint design of graph signal interpolation and learned quantization.
Step 2: design the interpolation module of the graph signal.
When the statistical properties of the graph signal obey an independent Gaussian assumption, a closed-form expression for the interpolation matrix can be obtained from the known observation/sampling matrix. When the statistical characteristics of the graph signal and the noise are unknown, an interpolation algorithm is usually designed according to the vertex-domain smoothness or the frequency-domain sparsity of the graph signal. In this case, the graph signal interpolation problem can be established as:
x̂ = argmin_x (1/2)·||y − Ψx||_2^2 + λ·R(x),
where R(·) denotes the denoiser for the graph signal, usually a nonlinear function, and λ is a regularization parameter expressing the weight of the denoiser in the objective function. When the denoiser is an unknown function, it can be approximated by a neural network. The invention combines the ADMM algorithm with deep unrolling to realize an ADMM unrolling network.
Deep algorithm unrolling builds a multi-layer network by unrolling the loop of a conventional optimization algorithm and deploying trainable parameters at each layer: each unrolled layer corresponds to one step of the iterative optimization algorithm. The parameters in each layer can then be trained from available training data to minimize the loss function, without having to select them manually. Several variants of deep unrolled networks have been proposed in the field of signal processing.
The traditional ADMM algorithm requires an explicit objective function, whereas a deep unrolled network extracts latent data features from the training data set. The invention therefore designs a DAU-based trainable recurrent denoiser R(x) and writes the interpolation problem of step 2-1 in the simplified form
x̂ = argmin_x f(x) + λ·R(x), with f(x) = (1/2)·||y − Ψx||_2^2.
Virtually any denoiser can be used to design R(x), including fully connected neural networks and graph neural networks. The invention uses a denoiser based on quadratic optimization with l1 regularization, which has the advantage of requiring only a small number of parameters, thereby improving training efficiency; the denoiser is defined as:
R(x) = argmin_x′ (1/2)·||x − x′||_2^2 + γ·||v||_1, s.t. v = Mx′,
where M is the graph difference operator built from the neighbor sets, and N_i denotes the set of neighbors of node i.
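One concrete reading of the constraint v = Mx, consistent with the neighbor sets N_i above, is that M is an edge-difference operator; a hedged sketch (the exact form of M is an assumption, not fixed by the patent text):

```python
import numpy as np

def edge_difference_operator(edges, n_nodes):
    """Rows index pairs (i, j) with j in N_i; (Mx)_(i,j) = x_i - x_j."""
    M = np.zeros((len(edges), n_nodes))
    for r, (i, j) in enumerate(edges):
        M[r, i], M[r, j] = 1.0, -1.0
    return M

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # hypothetical ring graph
M = edge_difference_operator(edges, n_nodes=4)
# l1-regularizing v = Mx then penalizes signal differences across edges.
```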
The invention iterates the denoiser of step 2-3 using an iterative algorithm based on ADMM unrolling:
x^(l+1) ← A_a s^(l) + B_a t^(l) + C_a (P^T y^(l)),
s^(l+1) ← D_a x^(l) + E_a (Q^T y^(l)),
y^(l+1) ← NN_u(P x^(l+1)),
z^(l+1) ← NN_r(Q s^(l+1)),
where A_a, B_a, C_a, D_a, E_a are trainable graph convolutions whose filter coefficients are trainable parameters (the subscript a denotes trainable), and NN_u(·) and NN_r(·) are two fully connected neural networks containing trainable parameters. Intuitively, the invention replaces fixed graph convolutions with separately trainable graph convolutions such as A_a, and solves the sub-optimization problems with trainable fully connected neural networks, allowing the trainable operators to learn adaptively from the data, which often greatly reduces the number of computed parameters.
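A minimal PyTorch-style sketch of one unrolled layer follows. The first-order polynomial graph filter, the sizes of P and Q, and the two small fully connected networks NN_u and NN_r are all illustrative assumptions; the patent does not fix these architectural details here.

```python
import torch
import torch.nn as nn

class TrainableGraphConv(nn.Module):
    """Illustrative first-order graph filter: theta0*x + theta1*(A_hat @ x)."""
    def __init__(self, A_hat):
        super().__init__()
        self.register_buffer("A_hat", A_hat)             # normalized adjacency
        self.theta = nn.Parameter(torch.randn(2) * 0.1)  # trainable coefficients

    def forward(self, x):
        return self.theta[0] * x + self.theta[1] * (self.A_hat @ x)

class UnrolledADMMLayer(nn.Module):
    """One iteration of the unrolled update; t is carried through unchanged."""
    def __init__(self, A_hat, n, hidden=16):
        super().__init__()
        self.A = TrainableGraphConv(A_hat)
        self.B = TrainableGraphConv(A_hat)
        self.C = TrainableGraphConv(A_hat)
        self.D = TrainableGraphConv(A_hat)
        self.E = TrainableGraphConv(A_hat)
        self.NN_u = nn.Sequential(nn.Linear(n, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n))
        self.NN_r = nn.Sequential(nn.Linear(n, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n))

    def forward(self, x, s, t, y, z, P, Qm):
        x_new = self.A(s) + self.B(t) + self.C(P.T @ y)   # x-update
        s_new = self.D(x) + self.E(Qm.T @ y)              # s-update
        y_new = self.NN_u(P @ x_new)                      # y-update
        z_new = self.NN_r(Qm @ s_new)                     # z-update
        return x_new, s_new, t, y_new, z_new
```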
To construct the complete network architecture, the invention initializes x^(0), s^(0), z^(0) as all-zero matrices and iterates the layers in sequence. By optimizing the trainable graph convolutions and the trainable parameters in the two sub-networks, the interpolated output of the signal, x̂, is obtained.
All trainable parameters come from two parts: the filter coefficients in each trainable graph convolution and the parameters in the fully connected neural networks. By optimizing these parameters, complex prior information in the raw graph signal can be captured in a data-driven manner. To train these parameters, the invention considers the following loss function:
L(θ) = ||x̂ − t||_2^2,
where x̂ = f(t; θ) denotes the output of the network f(·) and t is the original input signal value. The invention then minimizes the loss with stochastic gradient descent to optimize the network; the noisy node signal values t serve as both input and supervision. Extending this setup to training with multiple graph signals is straightforward.
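Training then reduces to standard stochastic gradient descent on this loss; a sketch, where the stand-in `model` (any module mapping the noisy signal t to an estimate x̂, here a plain MLP) and the random data are assumptions in place of the stacked unrolled network above:

```python
import torch
import torch.nn as nn

# Stand-in for the stacked unrolled network (assumption).
model = nn.Sequential(nn.Linear(6, 16), nn.ReLU(), nn.Linear(16, 6))

t = torch.randn(6)                       # noisy node signals (hypothetical)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(200):
    x_hat = model(t)                     # network output f(t; theta)
    loss = torch.sum((x_hat - t) ** 2)   # t is both input and supervision
    opt.zero_grad()
    loss.backward()                      # with the real model, this runs back
    opt.step()                           # through all unrolled layers
```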
The algorithm developed here is rooted in the ADMM algorithm. In practice, the optimization problem of the invention can be solved by various alternative iterative algorithms, which may lead to different network architectures. Whichever iterative algorithm is used, the core strategy is the same: follow the iterative steps while replacing fixed but computationally expensive graph filters with trainable graph convolutions. A network architecture that follows this strategy may also be called a graph unrolled network. Compared with traditional graph signal interpolation operators, the proposed method exploits the learning capability of deep neural networks to learn various complex signal priors from a given graph signal. In contrast to many general-purpose neural networks, the proposed method can be interpreted by analyzing the iterative steps. The invention unrolls the iterative algorithm into a graph neural network by mapping each iteration to a single network layer and stacking multiple layers together.
Step 3: the graph signal quantization rule module quantizes with a soft-hard quantizer.
The deep task-based quantizer proposed by the invention implements scalar quantization as an intermediate activation in a hybrid analog-digital DNN. This layer converts its continuous-amplitude input into a discrete digital representation. The non-differentiability of this continuous-to-discrete mapping poses a significant challenge to applying SGD to optimize the network hyper-parameters: the quantization activation can be modeled as a superposition of step functions that maps entire continuous regions of the input to single values, so the gradient of the cost function is zero almost everywhere. Directly applying SGD therefore cannot properly train the pre-quantization network. To overcome this drawback, the invention considers two common approaches, namely the graded gradient and soft-hard quantization.
As shown in FIG. 1, in the graded gradient method the quantized value is modeled as the analog value corrupted by noise that is independent of it, so that quantization does not affect the back-propagation process. Since the quantization error is in fact determined by the analog value, the resulting model is rather inaccurate: although under certain input distributions the quantization noise can be modeled as uncorrelated with the input, the two are not independent of each other. Indeed, to make the quantization error independent of the input, subtractive dithered quantization would have to be used, which does not represent the operation of an actual ADC. Using this model during training may therefore result in a mismatch between the training system and the test system.
Under the graded gradient model, the continuous-to-discrete mapping is fixed, i.e., uniform quantization, and the training algorithm back-propagates the gradient values straight through the quantization layer. The graded-gradient-based approach can lead to poor performance when the quantizer causes non-negligible distortion, and it performs relatively poorly at low quantization rates, where scalar quantization produces a non-negligible error term that depends on the analog input. This motivates a structure in which the scalar quantizer is present during training and is not limited to a fixed uniform quantizer.
As shown in FIG. 2, soft-hard quantization is based on approximating the non-differentiable mapping with a differentiable one: the continuous-to-discrete transform is replaced by a nonlinear activation function with approximately the same behavior as the quantizer. In particular, the invention uses a sum of shifted hyperbolic tangents, which closely resemble step functions for large-amplitude inputs, resulting in the scalar quantization mapping
q(x) = Σ_i a_i · tanh(c_i·x − b_i),
where {a_i, b_i, c_i} is a set of real-valued parameters; notably, as the parameters {c_i} increase, the corresponding hyperbolic tangents approach step functions.
In addition to learning the weights of the analog and digital DNNs, this soft-to-hard approach allows the network to learn its quantized activation function, in particular the best-fit parameters {a_i} (amplitudes) and {b_i} (shifts). These adjustable parameters then determine the decision regions of the scalar quantizer, resulting in a learned quantization mapping. The parameters {c_i}, which essentially control how closely the mapping approximates an actual continuous-to-discrete mapping, affect neither the quantizer decision regions (controlled by {b_i}) nor the associated digital values (determined by {a_i}), and therefore cannot be learned from training. The set {c_i} may be set according to the quantization resolution M_i or adjusted by annealing-based optimization, in which {c_i} is increased manually during training. The proposed optimization is realized by fitting the parameters {a_i, b_i} as part of the network hyper-parameters θ. Owing to the differentiability of the quantization function, the entire network, including the analog and digital DNNs and the quantization rule, can now be optimized end-to-end using standard SGD.
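A sketch of the soft quantization mapping with hypothetical parameters (the values of a_i, b_i, c_i below are illustrative only):

```python
import numpy as np

def soft_quantizer(x, a, b, c):
    """q(x) = sum_i a_i * tanh(c_i * x - b_i): differentiable everywhere."""
    x = np.asarray(x, dtype=float)[..., None]
    return np.sum(a * np.tanh(c * x - b), axis=-1)

# Hypothetical parameters for a 4-level quantizer (3 tanh "steps").
a = np.array([0.5, 0.5, 0.5])     # amplitudes: set the output levels
b = np.array([-5.0, 0.0, 5.0])    # shifts: thresholds sit at b_i / c_i
c = np.array([5.0, 5.0, 5.0])     # steepness: larger -> closer to hard steps

print(soft_quantizer([-1.5, -0.1, 0.2, 1.5], a, b, c))
```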
When training is over, the learned activation function is replaced by an actual scalar quantizer whose decision regions are determined by the adjustable parameters {a_i, b_i}. The set {b_i / c_i} is used to determine the decision regions of the quantizer, and the center value of each decision region is set as its representation level. Without loss of generality, assume b_1/c_1 ≤ b_2/c_2 ≤ ... (when this condition is not met, the parameters are sorted and re-indexed accordingly). The resulting quantizer is the piecewise-constant limit of the soft mapping,
q(x) = Σ_i a_i · sgn(c_i·x − b_i).
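After training, the same parameters can thus be frozen into a hard quantizer by letting each tanh tend to a step; the sign-based limit below is a hedged reconstruction of that final mapping, reusing the hypothetical parameters above:

```python
import numpy as np

def hard_quantizer(x, a, b, c):
    """Hard limit of the soft mapping: q(x) = sum_i a_i * sgn(c_i * x - b_i)."""
    x = np.asarray(x, dtype=float)[..., None]
    return np.sum(a * np.sign(c * x - b), axis=-1)

a = np.array([0.5, 0.5, 0.5])      # same hypothetical parameters as above
b = np.array([-5.0, 0.0, 5.0])
c = np.array([5.0, 5.0, 5.0])

print(np.sort(b / c))                                    # thresholds b_i / c_i
print(hard_quantizer([-1.5, -0.1, 0.2, 1.5], a, b, c))   # discrete levels only
```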
Step 4: jointly train the interpolation module obtained in step 2 and the graph signal quantization rule module obtained in step 3.
In order to solve this combinatorial optimization problem, the invention adopts a greedy-based method. In particular, the bit allocation problem is solved by a non-parametric, iterative, greedy approach, while reconstruction employs the ADMM-based deep unrolled network. To solve the quantization and reconstruction problems jointly, the method combines the greedy algorithm with the ADMM-based deep unrolled network, yielding what is here defined as a greedy-based learned quantization network. Before presenting the proposed joint learning algorithm, the invention first discusses the standard greedy algorithm and a variant suited to the objective of the invention. Reconsidering the optimization problem, the goal is simply to select the best quantization bit allocation pattern {M_i}.
Specifically, the joint training comprises the following steps:
Step 4-1: establish the greedy-based quantization bit allocation algorithm.
Input: Laplacian matrix L, total quantization bit number log2 M.
Output: quantization bit allocation {M_i}.
Initialization: M_i = 1 for every i ∈ S.
While Σ_{i∈S} log2 M_i < log2 M, repeat:
select the index e = argmax_i f_M(M_i);
update M_e = M_e + 1;
where f_M(·) denotes the average benefit of adding one quantization state under the currently selected quantization bit allocation. Since the quantization bit allocation is trained on a data set, f_M(·) characterizes the quantization distortion in the statistical-average sense and can be expressed in the following form:
f_{M,j}({M_i}^(k)) = E[ ||x − x^(k)||_2^2 ] − E[ ||x − x^(k+1),j||_2^2 ],
where f_{M,j}({M_i}^(k)) denotes the update to {M_i} in the k-th iteration that selects the quantizer with index j and adds one new quantization state, x denotes the true signal, and x^(k+1),j denotes the signal reconstructed after the quantization bit allocation is updated in the (k+1)-th iteration.
In the above algorithm, f_M(·) is a fixed function; hence, for a given set of examples, finding the optimal mapping from the current quantization bits to the distortion of the reconstructed signal solves the joint optimization problem.
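A schematic version of this greedy loop follows; the benefit oracle f_M passed in is a toy stand-in for the trained interpolation-quantization pipeline, and the while-condition mirrors the reconstructed stopping rule above:

```python
import math

def greedy_bit_allocation(S, log2M_total, f_M):
    """Add one quantization state at a time where the average benefit is largest."""
    M = {i: 1 for i in S}                          # initialization: M_i = 1
    while sum(math.log2(M[i]) for i in S) < log2M_total:
        e = max(S, key=lambda j: f_M(M, j))        # index of largest benefit
        M[e] += 1                                  # M_e <- M_e + 1
    return M

# Toy benefit with diminishing returns per quantizer (stand-in for f_M).
alloc = greedy_bit_allocation(S=[0, 1, 2], log2M_total=4.0,
                              f_M=lambda M, j: 1.0 / M[j])
print(alloc)   # roughly balanced numbers of states across the quantizers
```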
Step 4-2: characterize the quantization distortion in the statistical-average sense and apply the ADMM unrolling network in the interpolation module of the graph signal. Specifically, the training data vectors [x_1, x_2, ...] required for each batch are concatenated into a high-dimensional matrix X, which is fed in parallel into the soft-hard quantizer designed in step 3, and the quantization distortion is computed as:
D_Q = ||X − q(X)||_F^2.
The ADMM unrolling network is then used in the interpolation module of the graph signal; concretely, the interpolated output replaces the input of the soft-hard quantizer in the quantization distortion term. The total interpolation-quantization distortion can then be expressed as:
D_total = ||X − q(X̂)||_F^2,
where X̂ denotes the interpolated output and X_S is the signal that needs to be interpolated.
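A sketch of the batch-level distortion computation of step 4-2, reusing the soft quantizer sketched above; the Frobenius-norm form of the distortion is an assumption recovered from context:

```python
import numpy as np

def soft_quantizer(x, a, b, c):
    x = np.asarray(x, dtype=float)[..., None]
    return np.sum(a * np.tanh(c * x - b), axis=-1)

a = np.array([0.5, 0.5, 0.5])
b = np.array([-5.0, 0.0, 5.0])
c = np.array([5.0, 5.0, 5.0])

rng = np.random.default_rng(1)
batch = [rng.normal(size=6) for _ in range(8)]   # hypothetical training vectors
X = np.stack(batch, axis=1)                      # high-dimensional matrix X

Xq = soft_quantizer(X, a, b, c)                  # parallel soft-hard quantization
dist_q = np.sum((X - Xq) ** 2)                   # quantization distortion
# For the total distortion, the interpolated output X_hat would replace X as
# the quantizer input: dist_total = ||X - q(X_hat)||_F^2 (per the text above).
```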
Step 4-3: select the quantizer to be updated according to the current quantization state. The initial quantization bit allocation is all ones; new indices are then added in turn according to the greedy algorithm of step 4-1. In the k-th iteration, the operator f_M(·) acts on the quantization bit allocation: all selectable quantizers are tried in turn together with the interpolation module, the quantizer yielding the fastest reduction in total distortion is selected as the result of the k-th iteration, and {M_i}^(k+1) is output.
Step 4-4: compare again after training is completed. After the learned quantization network finishes training, suppose the greedy algorithm stops at the end of the K-th iteration, and compare the used total quantization bit number log2 M1 with the given total quantization bit number log2 M0. If M1 < M0, continue to add new network layers based on the result of the K-th iteration; if M1 = M0, output the current quantization bit allocation and the corresponding reconstruction network parameters as the final result; if M1 > M0, delete the most recently added network module and compare again.
The invention uses parallel scalar quantizers, designs each quantizer separately, and uses a neural network to jointly design the quantization bit allocation and the interpolation operator, thereby improving completion performance.
The above description is only an embodiment of the present invention, and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (10)

1. A sensor network signal completion method realized by utilizing a deep neural network, characterized in that the sensor network signal completion method comprises the following steps:
step 1: inputting the signals of the observation nodes into parallel scalar quantizers to obtain the discrete output of the quantized observation-node signals, and formulating the problem;
step 2: designing the interpolation module of the graph signal;
step 3: designing the graph signal quantization rule module, wherein the graph signal quantization rule module quantizes with a soft-hard quantizer;
step 4: jointly training the interpolation module obtained in step 2 and the graph signal quantization rule module obtained in step 3.
2. The method for completing sensor network signals by using the deep neural network as claimed in claim 1, wherein the joint training of step 4 comprises the following steps:
step 4-1: establishing a greedy-based quantization bit allocation algorithm;
step 4-2: characterizing the quantization distortion in the statistical-average sense and applying the ADMM unrolling network in the interpolation module of the graph signal;
step 4-3: selecting the quantizer to be updated according to the current quantization state;
step 4-4: comparing again after training is completed.
3. The method for completing sensor network signals by using the deep neural network as claimed in claim 2, wherein step 4-2 is specifically:
concatenating the training data vectors [x_1, x_2, ...] required for each batch into a high-dimensional matrix X, feeding the matrix in parallel into the soft-hard quantizer designed in step 3, and computing the quantization distortion as
D_Q = ||X − q(X)||_F^2;
then using the ADMM unrolling network in the interpolation module of the graph signal, specifically replacing the input of the soft-hard quantizer in the quantization distortion term with the interpolated output, whereupon the total interpolation-quantization distortion can be expressed as
D_total = ||X − q(X̂)||_F^2,
where X̂ denotes the interpolated output and X_S is the signal that needs to be interpolated.
4. The method for completing sensor network signals by using the deep neural network as claimed in claim 2, wherein step 4-3 is specifically: the initial quantization bit allocation is all ones; new indices are then added in turn according to the greedy algorithm provided in step 4-1; in the k-th iteration, the operator f_M(·) acts on the quantization bit allocation, all selectable quantizers are tried in turn together with the interpolation module, the quantizer yielding the fastest reduction in total distortion is selected as the result of the k-th iteration, and {M_i}^(k+1) is output.
5. The method for completing sensor network signals by using the deep neural network as claimed in claim 2, wherein step 4-4 is specifically: after the learned quantization network finishes training, suppose the greedy algorithm stops at the end of the K-th iteration, and compare the used total quantization bit number log2 M1 with the given total quantization bit number log2 M0; if M1 < M0, continue to add new network layers based on the result of the K-th iteration; if M1 = M0, output the current quantization bit allocation and the corresponding reconstruction network parameters as the final result; if M1 > M0, delete the most recently added network module and compare again.
6. The method for completing sensor network signals by using the deep neural network as claimed in claim 2, wherein the greedy-based quantization bit allocation algorithm in step 4-1 is:
Input: Laplacian matrix L, total quantization bit number log2 M.
Output: quantization bit allocation {M_i}.
Initialization: M_i = 1 for every i ∈ S.
While Σ_{i∈S} log2 M_i < log2 M, repeat:
select the index e = argmax_i f_M(M_i);
update M_e = M_e + 1;
where f_M(·) denotes the average benefit of adding one new quantization state under the currently selected quantization bit allocation, defined as
f_{M,j}({M_i}^(k)) = E[ ||x − x^(k)||_2^2 ] − E[ ||x − x^(k+1),j||_2^2 ],
where f_{M,j}({M_i}^(k)) denotes the update to {M_i} in the k-th iteration that selects the quantizer with index j and adds one new quantization state, x denotes the true signal, and x^(k+1),j denotes the signal reconstructed after the quantization bit allocation is updated in the (k+1)-th iteration.
7. The method for completing the sensor network signal by using the deep neural network as claimed in claim 1, wherein the soft-hard quantization in step 3 specifically obtains the scalar quantization mapping using a sum of shifted hyperbolic tangents, given by
q(x) = Σ_i a_i · tanh(c_i·x − b_i),
where {a_i, b_i, c_i} is a set of real-valued parameters, and as the parameters {c_i} increase, the corresponding hyperbolic tangents approach step functions.
8. The method for completing sensor network signals by using the deep neural network as claimed in claim 1, wherein the design process of the interpolation module of the graph signal in step 2 comprises the following steps:
step 2-1: establishing the graph signal interpolation problem
x̂ = argmin_x (1/2)·||y − Ψx||_2^2 + λ·R(x),
where R(·) denotes the denoiser of the graph signal, and λ is a regularization parameter expressing the weight of the denoiser in the objective function;
step 2-2: designing a DAU-based trainable recurrent denoiser R(x), and writing the interpolation problem of step 2-1 in the simplified form
x̂ = argmin_x f(x) + λ·R(x), with f(x) = (1/2)·||y − Ψx||_2^2;
step 2-3: using a denoiser based on quadratic optimization, defined as
R(x) = argmin_x′ (1/2)·||x − x′||_2^2 + γ·||v||_1, s.t. v = Mx′,
where M is the graph difference operator built from the neighbor sets, and N_i denotes the set of neighbors of node i;
step 2-4: iterating the denoiser of step 2-3 with the iterative algorithm based on ADMM unrolling:
x^(l+1) ← A_a s^(l) + B_a t^(l) + C_a (P^T y^(l)),
s^(l+1) ← D_a x^(l) + E_a (Q^T y^(l)),
y^(l+1) ← NN_u(P x^(l+1)),
z^(l+1) ← NN_r(Q s^(l+1)),
where A_a, B_a, C_a, D_a, E_a are trainable graph convolutions whose filter coefficients are trainable parameters, the subscript a denotes trainable, and NN_u(·) and NN_r(·) are two fully connected neural networks containing trainable parameters.
9. The method for completing sensor network signals by using the deep neural network as claimed in claim 8, wherein the loss function in the iteration process of step 2-4 is set as
L(θ) = ||x̂ − t||_2^2,
where x̂ = f(t; θ) denotes the output of the network f(·) and t is the original input signal value.
10. The method for completing the sensor network signal by using the deep neural network as claimed in claim 1, wherein step 1 specifically comprises the following steps:
step 1-1: defining the observation signal as y = Ψx + n, where Ψ is the observation matrix corresponding to the node set S, and n is the additive Gaussian noise inevitable in the observation process;
step 1-2: recovering the signal on the complement set S^c from information on the partial node set S; when the signal on S is a continuous-amplitude signal, establishing the following optimization problem:
x̂ = argmin_x E(x) + λ·S(x),
where E(x) characterizes the difference between the observed value and the true value (y being the actual observed signal; when the observation contains no noise, E(x) = 0), S(x) characterizes a smoothing term, and λ is a hyper-parameter, a larger λ corresponding to a smoother graph signal;
step 1-3: retaining the difference term E(x), the quantized form of the observation signal Ψx being Q(Ψx) = ΨQ(x);
step 1-4: substituting the quantized form of the signal into the problem
x̂ = argmin_x E(ΨQ(x)) + λ·S(x).
CN202210211863.8A 2022-03-04 2022-03-04 Sensor network signal complement method realized by deep neural network Active CN114584150B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210211863.8A CN114584150B (en) 2022-03-04 2022-03-04 Sensor network signal complement method realized by deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210211863.8A CN114584150B (en) 2022-03-04 2022-03-04 Sensor network signal complement method realized by deep neural network

Publications (2)

Publication Number Publication Date
CN114584150A true CN114584150A (en) 2022-06-03
CN114584150B CN114584150B (en) 2024-06-21

Family

ID=81777826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210211863.8A Active CN114584150B (en) 2022-03-04 2022-03-04 Sensor network signal complement method realized by deep neural network

Country Status (1)

Country Link
CN (1) CN114584150B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103974268A (en) * 2013-01-29 2014-08-06 上海携昌电子科技有限公司 Low-delay sensor network data transmission method capable of adjusting fine granularity
CN106572093A (en) * 2016-10-31 2017-04-19 北京科技大学 Wireless sensor array data compression method and wireless sensor array data compression system
US20200311878A1 (en) * 2019-04-01 2020-10-01 Canon Medical Systems Corporation Apparatus and method for image reconstruction using feature-aware deep learning
WO2021007812A1 (en) * 2019-07-17 2021-01-21 深圳大学 Deep neural network hyperparameter optimization method, electronic device and storage medium
CN113674172A (en) * 2021-08-17 2021-11-19 上海交通大学 Image processing method, system, device and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王保云 (Wang Baoyun) et al.: "分析大数据:非规则结构与图信号" [Analyzing big data: irregular structures and graph signals], 南京邮电大学学报 (Journal of Nanjing University of Posts and Telecommunications), 30 October 2020 (2020-10-30) *

Also Published As

Publication number Publication date
CN114584150B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
CN107633486B (en) Structural magnetic resonance image denoising method based on three-dimensional full-convolution neural network
Shah et al. Solving linear inverse problems using gan priors: An algorithm with provable guarantees
CN109711426B (en) Pathological image classification device and method based on GAN and transfer learning
Liu et al. Learning converged propagations with deep prior ensemble for image enhancement
Thakur et al. Image de-noising with machine learning: A review
Huang et al. A provably convergent scheme for compressive sensing under random generative priors
CN109636722B (en) Method for reconstructing super-resolution of online dictionary learning based on sparse representation
Liu et al. Adaptive sparse coding on PCA dictionary for image denoising
CN114418883A (en) Blind image deblurring method based on depth prior
CN112200733B (en) Grid denoising method based on graph convolution network
Dinesh et al. 3D point cloud color denoising using convex graph-signal smoothness priors
Mallat et al. Deep learning by scattering
Poddar et al. Recovery of noisy points on bandlimited surfaces: Kernel methods re-explained
CN116405100B (en) Distortion signal restoration method based on priori knowledge
CN110717402B (en) Pedestrian re-identification method based on hierarchical optimization metric learning
CN114584150B (en) Sensor network signal complement method realized by deep neural network
CN113433514B (en) Parameter self-learning interference suppression method based on expanded deep network
CN113298232B (en) Infrared spectrum blind self-deconvolution method based on deep learning neural network
Riegler et al. Depth Restoration via Joint Training of a Global Regression Model and CNNs.
CN115205308A (en) Fundus image blood vessel segmentation method based on linear filtering and deep learning
Bonettini et al. Learning the Image Prior by Unrolling an Optimization Method
CN113420710A (en) Sensor data noise reduction method based on multi-resolution wavelet
Wei et al. Image denoising with deep unfolding and normalizing flows
CN109919857A (en) A kind of noise image complementing method based on weighting Si Laiteen norm minimum
Luo et al. Maximum a posteriori on a submanifold: a general image restoration method with gan

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant