CN108881660B - Method for compressing and calculating hologram by adopting quantum neural network for optimizing initial weight - Google Patents

Method for compressing and calculating hologram by adopting quantum neural network for optimizing initial weight

Info

Publication number
CN108881660B
CN108881660B (application CN201810409647.8A)
Authority
CN
China
Prior art keywords
network
quantum
training
neural network
hologram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810409647.8A
Other languages
Chinese (zh)
Other versions
CN108881660A (en)
Inventor
杨光临 (Guanglin Yang)
侯深化 (Shenhua Hou)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN201810409647.8A priority Critical patent/CN108881660B/en
Publication of CN108881660A publication Critical patent/CN108881660A/en
Application granted granted Critical
Publication of CN108881660B publication Critical patent/CN108881660B/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32267Methods relating to embedding, encoding, decoding, detection or retrieval operations combined with processing of the image
    • H04N1/32277Compression
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Holography (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for compressing computed holograms by adopting a quantum neural network with optimized initial weights, belonging to the technical field of compression and transmission of computed holograms. On the basis of compressing and transmitting computed holograms with a quantum BP neural network, a computed-hologram training set is used for pre-training to obtain optimized initial weights for the quantum BP neural network; the convergence of the pre-training network is accelerated by setting the variance of the random parameter initialization; secondary fine-tuning training is then carried out on the given holographic data to be compressed, starting from the pre-trained optimized initial weights, and the network learning rate is adjusted dynamically during optimization to accelerate the compression and transmission process of the quantum BP neural network. Without changing the basic structure of the original quantum BP neural network, the invention completes the training of the compression-transmission network with fewer iterations, thereby accelerating the compression of computed holograms by the quantum BP neural network while ensuring the quality of the reconstructed hologram image.

Description

Method for compressing and calculating hologram by adopting quantum neural network for optimizing initial weight
Technical Field
The invention provides a method for compressing computed holograms by adopting a quantum neural network with optimized initial weights, and particularly relates to the technical field of compression and transmission of computed holograms.
Background Art
Computer-generated holography is flexible and simple; it avoids the complex optical path systems and tedious preparation process of traditional optical holography and can achieve effects that are difficult to obtain with artificially designed optical holography. The value of each point on a computed hologram is the result of interference between the diffracted object wave and the reference light and carries information about the whole object, so each hologram contains a large amount of redundant information; this places high demands on the storage and transmission of the information and limits the development of computed holography.
Chao Zhang et al. [1] verified the feasibility of compressing and transmitting a computer-generated hologram with a conventional back-propagation (BP) neural network ("Chao Zhang, Guanglin Yang and Haiyan Xie, 'Information Compression of Computer-Generated Hologram Using BP Neural Network,' in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (CD) (Optical Society of America), paper JMA2, 2010"). However, the traditional BP neural network suffers from drawbacks such as slow processing speed and limited memory capacity when the amount of information is large, which limit the effect and application value of this technique for digital holographic image compression and transmission. Mengjia Liu et al. [2] proposed a quantum back-propagation neural network (QBP) in "Mengjia Liu, Guanglin Yang, and Haiyan Xie, 'Method of computer-generated hologram compression and transmission using quantum back-propagation neural network,' Optical Engineering, Vol. 56, No. 2, pp. 023104, February 2017" [3-5]. Compared with the traditional BP neural network, experiments show that compressing and transmitting computed holograms with the QBP neural network allows the compression-transmission network to be trained with fewer iterations, improving the compression and transmission speed of computed holograms while ensuring the recovery quality of the image. It still has a problem, however: because the quantum BP neural network is initialized randomly, the initial weights of the network are far from the optimal weights, and the network still needs many iterations to converge.
Disclosure of Invention
The invention provides a method for compressing computed holograms by adopting a quantum neural network with optimized initial weights, which can effectively accelerate the compression and transmission of computed holograms.
The principle of the invention is as follows: based on the state-superposition principle of quantum theory, the quantum BP neural network has a higher parallel processing speed and a stronger data storage capacity than the traditional BP neural network, so compressing computed holograms on this basis yields a higher compression speed. Meanwhile, in the compression and transmission of Fresnel off-axis computed holograms, all computed holograms are the result of the interference of Fresnel diffraction with the reference light, and the parameters of the reference light and of the object light wave in the diffraction process are kept unchanged throughout; the computed holograms generated in the holographic plane by different images are therefore highly similar, and a network model with good generalization ability can be pre-trained with a hologram training set. Because the optimized initial network weights obtained in the pre-training process have good generalization ability and are close to the optimal values, the adaptation of the network to the data in the secondary training for the image to be compressed (not included in the hologram training set) is accelerated; the convergence speed of the secondary training is therefore improved, and only a few training iterations are needed to obtain a good image compression effect. Moreover, the pre-training can be performed off-line, which greatly shortens the compression time for the computed holographic image.
The technical scheme provided by the invention is as follows:
a method for compressing and calculating a hologram by adopting a quantum neural network with optimized initial weight comprises the following specific steps:
1) selecting a common image data set to carry out normalization preprocessing, and generating a Fresnel off-axis calculation hologram training set according to a Fresnel off-axis holographic principle;
2) constructing a three-layer quantum BP neural network using quantum neurons, comprising an input layer, a hidden layer and an output layer, wherein neurons in the same layer are not connected and quantum neurons between layers are connected with one another;
3) initializing the network parameters of the quantum BP neural network constructed in step 2) with a zero-mean uniform distribution; dividing each hologram in the Fresnel computed-hologram training set produced in step 1) into a plurality of non-overlapping pixel blocks of the same size and converting them into one-dimensional vectors, which are used as the input of the quantum BP neural network constructed in step 2) for pre-training, to obtain the pre-trained network model, i.e. the optimized initial weights;
4) carrying out secondary training of the pre-trained network model obtained in step 3) on the computed hologram to be compressed until the network output error meets a set value, obtaining the final compression network; using this network to compress, transmit and decompress the computed hologram to be compressed, and reconstructing the hologram.
Further, the Fresnel off-axis computed hologram of step 1) is computed by formulas 1 and 2:

U(x, y) = [exp(ikd)/(iλd)] · exp[ik(x² + y²)/(2d)] · F{ U0(x0, y0) · exp[ik(x0² + y0²)/(2d)] }   (formula 1)

In formula 1, U0(x0, y0) denotes the object light at a point (x0, y0) on the object plane, U(x, y) is the object wavefront at the holographic plane, d is the distance between the object plane and the holographic plane, λ is the wavelength, k = 2π/λ, and F denotes the Fourier transform. When the reference light R(x, y) irradiates the holographic plane and interferes with the object wavefront, the interference fringes formed on the holographic plane are the hologram H(x, y); when there is a certain included angle between the incident reference light and the incident object wavefront, off-axis reference-light interference is obtained, as shown in formula 2:

H(x, y) = |U + R|² = |U|² + |R|² + UR* + U*R   (formula 2)

In formula 2, UR* contains the amplitude and phase information of the object wave, U*R is the twin-image part, |U|² + |R|² is the zero-order diffraction bright spot, and R* and U* are the conjugates of R and U respectively. After discretization according to this principle, the Fresnel off-axis computer-generated hologram corresponding to the object is generated; the selected common image data set is preprocessed (normalization and the like) and the Fresnel off-axis computed-hologram data set is then produced by this method.
Further, the step 2) specifically comprises the following steps:
21) establishing a mathematical model of a quantum neuron consisting of a one-bit phase shift gate and a two-bit controlled NOT gate, wherein two types of parameters exist in the quantum neuron model: one is the weight parameter θ and the threshold parameter λ corresponding to the phase of the phase shift gate; the other is the reversal control parameter δ corresponding to the controlled NOT gate.
22) The quantum neural network comprises an input layer, a hidden layer and an output layer; neurons in the same layer are not connected, quantum neurons between layers are connected with one another, and consistency between the input and output data must be ensured. In order to achieve data compression, the number of hidden-layer neurons is smaller than the number of input-layer neurons.
23) Solving the parameters in the quantum neural network constructed in the step 22) by using a back propagation algorithm, and stopping network training when the network output error reaches a set value.
Further, the step 3) specifically comprises the following steps:
31) Before pre-training, the network parameters are initialized with a zero-mean uniform distribution with a large variance (greater than 1/3), which leads to faster pre-training convergence.
32) Each hologram in the Fresnel computed-hologram training set produced in step 1) is divided into a plurality of non-overlapping pixel blocks of the same size and converted into one-dimensional vectors, which are used as the input of the quantum BP neural network constructed in step 2) for pre-training (see the sketch below). The network models are pre-trained for neural networks with different compression ratios (controlled by the number of hidden-layer neurons), i.e. the optimized initial weights for different compression ratios can be obtained off-line, and hologram training sets of different sizes therefore do not affect the compression time of the image.
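As an illustration of this block division, the following Python (NumPy) sketch splits a hologram into non-overlapping blocks and flattens each block into a row vector; the function name is illustrative, and the 8 × 8 block size is the one used in the embodiment described later.

```python
import numpy as np

def hologram_to_blocks(hologram, block=8):
    """Split a 2-D hologram into non-overlapping block x block patches and
    flatten each patch into a 1-D vector (one row per training sample)."""
    h, w = hologram.shape
    assert h % block == 0 and w % block == 0, "hologram size must be a multiple of the block size"
    patches = (hologram
               .reshape(h // block, block, w // block, block)
               .swapaxes(1, 2)                 # -> (rows, cols, block, block)
               .reshape(-1, block * block))    # -> (num_blocks, block*block)
    return patches

# A 256 x 256 hologram yields 1024 vectors of length 64, matching the
# 64-neuron input layer used in the embodiment.
blocks = hologram_to_blocks(np.random.rand(256, 256))   # shape (1024, 64)
```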
The step 4) specifically comprises the following steps:
41) The invention analyzes and compares the effect of the adaptive subgradient method (AdaGrad) and the adaptive moment estimation method (Adam) on the secondary training of the compression network.
42) The computed hologram to be compressed (not included in the hologram training set produced in step 1)) is divided into a plurality of non-overlapping pixel blocks of the same size and converted into one-dimensional vectors; the pre-trained network model obtained in step 3) is trained a second time until the network output error meets the set value, giving the final compression network. Because the optimized initial network weights obtained in the pre-training process are close to the optimal values, the image compression network can be trained with only a few iterations.
43) The computed hologram to be compressed in 42) is input into the trained image compression network; the output of the network hidden layer is the image compression result, and the output of the network output layer is the decompressed result.
Further, Fresnel off-axis hologram reconstruction may be achieved through an off-axis reference-light irradiation process and a Fresnel diffraction process, and the image quality may be compared with that of the image reconstructed from the uncompressed hologram.
The invention provides a method for compressing computer-generated holograms using a quantum neural network with optimized initial weights. On the basis of the scheme of compressing and transmitting computed holograms with a quantum BP neural network, the invention uses a computed-hologram training set for pre-training to obtain optimized initial weights of the quantum BP neural network, accelerates the convergence of the pre-training network by setting the variance of the random parameter initialization, then performs secondary fine-tuning training on the given holographic data to be compressed starting from the optimized initial weights obtained by pre-training, and dynamically adjusts the network learning rate during optimization to accelerate the compression and transmission process of the quantum BP neural network. Without changing the basic structure of the original quantum BP neural network, this scheme completes the training of the compression-transmission network with fewer iterations, thereby accelerating the compression of computed holograms by the quantum BP neural network while ensuring the quality of the reconstructed hologram image.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a more perfect accelerating and improving method through the analysis in the aspects of network initialization, learning rate and the like on the basis that the quantum BP neural network compresses and transmits the computed hologram. The advantages of the invention are mainly embodied in the following aspects:
the method comprises the following steps of (I) decomposing a process of computing holographic compression based on a quantum BP neural network into two processes of pre-training and secondary training. The pre-training process utilizes the hologram training set to obtain the characteristics of a wide range of computed holograms, and an effective parameter initialization mode is provided for secondary training. And then, carrying out secondary training on the quantum BP neural network aiming at specific holographic compressed data, so that the iteration times of the network can be effectively reduced. The pre-training process can be carried out off-line, so that the calculation amount in the transmission process cannot be increased;
and (II) accelerating the convergence process of the pre-training network by setting the random initialization variance of the parameters in the pre-training process.
And (III) dynamically adjusting the network learning rate by using a self-adaptive moment estimation method in the compression process of the computed hologram, further improving the training speed of the quantum BP neural network, and ensuring the quality of the reconstructed image of the hologram.
Drawings
FIG. 1 is a block diagram of a quantum neural network compressed computer hologram flow with pre-training to obtain optimized initial weights;
FIG. 2 is a block diagram of a quantum neuron mathematical model;
FIG. 3 shows the influence of different random-initialization variances of the parameters on the pre-training error (100 epochs); the input-layer and output-layer neurons are all 64: (a) 32 hidden-layer neurons, (b) 16 hidden-layer neurons, (c) 8 hidden-layer neurons, (d) 4 hidden-layer neurons;
FIG. 4 compares the network output error during pre-training (100 epochs) after random parameter initialization with α = 1.8 and learning rate 0.2, and with α = 1.0 and learning rate 0.2 × 1.8; the input-layer and output-layer neurons are all 64: (a) 32 hidden-layer neurons, (b) 16 hidden-layer neurons, (c) 8 hidden-layer neurons, (d) 4 hidden-layer neurons;
fig. 5 is data used for the experiment: (a) an original image; (b) a fresnel computer-made hologram; (c) a hologram reproduction image;
fig. 6 is a computed hologram and its reconstructed image corresponding to each method: (a) (b) direct Random initialization scheme (Random + GD) processed hologram and reconstructed image; (c) (d) holograms and reconstructed images after the solution (Pre-trained + GD) proposed by the invention; (e) (f) Pre-training and adding an AdaGrad optimization method (Pre-trained + AdaGrad) to process the hologram and the reconstructed image; (g) (h) Pre-training plus Adam optimization method (Pre-trained + Adam) processed holograms and reconstructed images.
Detailed Description of Embodiments
The invention will be further described by way of examples of implementation in connection with the accompanying drawings, without in any way limiting the scope of the invention.
The flow block diagram of the method for compressing and calculating the hologram by adopting the quantum neural network with the optimized initial weight is shown in the attached figure 1. In the embodiment of the present invention, the method provided by the present invention specifically includes the following steps:
1) recording an object as a Fresnel off-axis hologram by utilizing a Fresnel diffraction principle;
The light wave from each point U0(x0, y0) on the object (here, the image) reaches the holographic plane through Fresnel diffraction in the near-field region, and the object wavefront U(x, y) on the holographic recording plane is obtained by superposition of these light waves in the holographic plane:

U(x, y) = [exp(ikd)/(iλd)] · exp[ik(x² + y²)/(2d)] · F{ U0(x0, y0) · exp[ik(x0² + y0²)/(2d)] }   (formula 1)

In formula 1, d is the distance between the object plane and the holographic plane, λ is the wavelength, k = 2π/λ, and F denotes the Fourier transform. When the reference light R(x, y) irradiates the hologram plane and interferes with the object wavefront, the interference fringes formed on the hologram plane are the hologram H(x, y); when the incident reference light and the incident object wavefront form a certain included angle, the result is off-axis reference-light interference, as shown in formula 2:

H(x, y) = |U + R|² = |U|² + |R|² + UR* + U*R   (formula 2)

In formula 2, UR* contains the amplitude and phase information of the object wave, U*R is the twin-image part, |U|² + |R|² is the zero-order diffraction bright spot, and R* and U* are the conjugates of R and U respectively. Using this principle, the Fresnel off-axis computer-generated hologram corresponding to the object can be generated after discretization.
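To make the hologram-generation step concrete, the following Python (NumPy) sketch evaluates formulas 1 and 2 numerically with a single FFT. It is only a sketch: the function name and the optical parameter values are illustrative (not the Table 1 settings), the object is taken as a real amplitude image, and the output-plane coordinate scaling and the zero-padding to the 256 × 256 hologram size used in the embodiment are omitted.

```python
import numpy as np

def fresnel_off_axis_cgh(img, wavelength=632.8e-9, d=0.2, pitch=10.6e-6, ref_angle_deg=1.0):
    """Sketch of an off-axis Fresnel computer-generated hologram (formulas 1 and 2)."""
    k = 2.0 * np.pi / wavelength
    n = img.shape[0]
    coords = (np.arange(n) - n // 2) * pitch
    X, Y = np.meshgrid(coords, coords)

    # Formula 1: single-FFT Fresnel diffraction of the object wave to the hologram plane
    # (output-plane sampling factors omitted for brevity).
    chirp = np.exp(1j * k * (X ** 2 + Y ** 2) / (2.0 * d))
    U = (np.exp(1j * k * d) / (1j * wavelength * d)) * chirp * \
        np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img * chirp)))

    # Off-axis plane reference wave R(x, y), tilted in the x direction.
    R = np.abs(U).max() * np.exp(1j * k * np.sin(np.deg2rad(ref_angle_deg)) * X)

    # Formula 2: the hologram is the intensity of the interference of U and R.
    H = np.abs(U + R) ** 2
    return H / H.max()

# Example: a normalized 128 x 128 grey-scale image as the object.
hologram = fresnel_off_axis_cgh(np.random.rand(128, 128))
```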
2) Constructing a quantum BP neural network model;
the invention uses the quantum neural network model built by the quantum neurons to compress the computed hologram.
2.1 Quantum neuron model
The quantum neuron mathematical model is shown in FIG. 2; it is a basic computing unit consisting of a one-bit phase shift gate and a two-bit controlled NOT gate. Let y_l (l = 1, 2, ..., L) be the phase of the l-th quantum state x_l = f(y_l) input to the neuron; the quantum neuron can then be written as the following mathematical model:

u = Σ_l f(θ_l)·x_l − f(λ),  l = 1, ..., L   (formula 3)

y = (π/2)·g(δ) − arg(u)   (formula 4)

o = f(y)   (formula 5)

In formulas 3 and 5, f(θ) = cos θ + i·sin θ. In formula 4, g(x) is the sigmoid function and arg denotes the phase of the data. In the quantum neuron model there are two types of parameters: one is the weight parameters θ and the threshold parameter λ corresponding to the phases of the phase shift gates; the other is the reversal control parameter δ corresponding to the controlled NOT gate.
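A minimal Python sketch of the quantum-neuron forward computation of formulas 3-5 (together with the output probability of formula 6, which is defined in section 2.2 below) is given here; the function and variable names are illustrative.

```python
import numpy as np

def f(theta):
    """Phase rotation f(theta) = cos(theta) + i*sin(theta) (formulas 3 and 5)."""
    return np.cos(theta) + 1j * np.sin(theta)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def qubit_neuron(y_in, theta, lam, delta):
    """Forward pass of one quantum neuron.

    y_in  -- phases y_l of the L input quantum states (x_l = f(y_l))
    theta -- weight phases of the phase shift gates, one per input
    lam   -- threshold phase
    delta -- reversal control parameter of the controlled NOT gate
    """
    x = f(y_in)                                       # input quantum states
    u = np.sum(f(theta) * x) - f(lam)                 # formula 3: internal state
    y = 0.5 * np.pi * sigmoid(delta) - np.angle(u)    # formula 4: output phase
    o = f(y)                                          # formula 5
    return o

# The network output probability of formula 6 (section 2.2) is then |Im(o)|^2:
o = qubit_neuron(np.array([0.1, 0.5, 0.9]) * np.pi / 2,
                 theta=np.array([0.3, -0.2, 0.7]), lam=0.1, delta=0.0)
prob_active = np.abs(o.imag) ** 2
```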
2.2 construction of Quantum BP neural network based on Quantum neurons
When compressing an image with a neural network, consistency between input and output must be ensured, i.e. the neural network is trained to learn the function y = f(x), and therefore the target is set equal to the input. When the number of hidden-layer neurons is smaller than the number of input-layer neurons, image compression can be performed with the back-propagation neural network learning algorithm. Combined with the qubit neurons, a quantum neural network model can thus be constructed to compress the holographic image.
In a quantum neuron, the quantum state can be expressed as |Ψ⟩ = cos θ|0⟩ + sin θ|1⟩, where the probability amplitude of |0⟩ is represented by the real part of the complex function and the probability amplitude of |1⟩ by the imaginary part. The quantum state |1⟩ corresponds to the activated state of a neuron and the quantum state |0⟩ to the inhibited state; that is, the quantum state of any neuron is defined as a superposition of the activated and inhibited states. The probability that the final output of a network output-layer neuron is the activated state |1⟩ is set as:
output = |Im(o)|²   (formula 6)

In formula 6, Im denotes the imaginary part of the data. Let the network input be t_n (n = 1, 2, ..., N), with a total of P input image blocks; the error function of the whole network is then defined as:

E = (1/2) Σ_p Σ_n (t_pn − output_pn)²   (formula 7)

where p = 1, ..., P indexes the input image blocks and n = 1, ..., N the output neurons.
The invention uses a three-layer network comprising an input layer, a hidden layer and an output layer; there are no connections between neurons in the same layer, and quantum neurons between layers are connected with one another. The neurons of the first (input) layer take the original hologram to be compressed, the neurons of the second (hidden) layer give the hologram compression result, and the neurons of the third (output) layer give the decompressed hologram. The constructed quantum neural network is solved with the back-propagation algorithm; the parameter update (at time t+1) in each iteration is given by formulas 8-10:
θ(t+1) = θ(t) − η·∂E/∂θ   (formula 8)

λ(t+1) = λ(t) − η·∂E/∂λ   (formula 9)

δ(t+1) = δ(t) − η·∂E/∂δ   (formula 10)
in the expressions 8 to 10, η is the learning rate.
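Formulas 8-10 are ordinary gradient-descent updates applied separately to the three parameter types; a minimal sketch follows, assuming the gradients ∂E/∂θ, ∂E/∂λ and ∂E/∂δ have already been obtained by back-propagation (the learning-rate value shown is illustrative).

```python
def gd_step(theta, lam, delta, dE_dtheta, dE_dlam, dE_ddelta, lr=0.2):
    """One gradient-descent update of the quantum-network parameters (formulas 8-10)."""
    theta = theta - lr * dE_dtheta   # formula 8
    lam   = lam   - lr * dE_dlam     # formula 9
    delta = delta - lr * dE_ddelta   # formula 10
    return theta, lam, delta
```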
3) Pre-training the quantum BP neural network to obtain an optimized network initialization result;
In neural-network-based image compression, the compression effect is obtained by setting the number of hidden-layer neurons smaller than the numbers of input-layer and output-layer neurons. The input layer and hidden layer form an encoder, the hidden layer and output layer form a decoder, and the data output by the hidden-layer neurons are the compressed data. This approach requires the data to be compressed for training the network, so the generalization ability of the network is weak and its reusability is low. Each time different image data are compressed, the network has to be trained from a randomly initialized state with many iterations before compression can take place, which increases the amount of computation in the image compression process. A compression network trained directly on training-set images cannot achieve a good compression effect on a test set [6]: a neural-network image compression system can compress untrained images, but the compression performance is lower than for the training images, because a trained neural network is not guaranteed to reach the same performance level on untrained test-set images. In order to balance compression speed and image compression quality, the process of compressing the computed hologram with the quantum neural network is divided into two processes, pre-training and secondary training, as shown in FIG. 1.
In the pre-training process, a large amount of training-set data is used to train the configured network and obtain generalized network weights; these weights can then be extracted and transferred to the neural network used for the secondary training, thereby initializing the network for the secondary-training process. In the compression and transmission of Fresnel off-axis computed holograms, all computed holograms are the result of the interference of Fresnel diffraction with the reference light, and the parameters of the reference light and of the object light wave in the diffraction process are kept unchanged throughout, so the computed holograms generated by different images in the holographic plane are highly similar, and a network model with good generalization ability can be pre-trained. Therefore, this network-weight transfer effectively initializes the network, ensures that the network is close to the optimal solution before the secondary training, places the initialized parameters at a good starting position, and avoids obvious local minima.
Pre-training provides a way to initialize the parameters for the secondary training [7]; before the pre-training process, however, the initialization of the weight matrices and thresholds in the neural network still depends on experience and trial. In an ordinary BP neural network, in order to let information flow well through the network, the criterion that the output variances of the layers should be as equal as possible is followed; Glorot and Bengio [8] proposed a method that determines the weight-matrix initialization automatically from the numbers of input and output neurons, called Normalized Initialization, which effectively guarantees the propagation of messages. The quantum neural network differs from an ordinary neural network: its learnable parameters consist of the weight parameters θ and threshold parameters λ corresponding to the phases of the phase shift gates and the reversal control parameters δ corresponding to the controlled NOT gates, and the network contains no activation function in the traditional sense. The network parameters are not simply multiplied with the network inputs but are computed through the quantum gates. Here, the initialization distribution of the network is set to
W ~ U[−α, α]   (formula 11)
In formula 11, α is the boundary of the uniform distribution U and the variance is var = α²/3; the usual initialization method sets α = 1. Because the initialization of the parameter values does not require high precision, the value of the initialization variance can be determined by experimental analysis. From FIG. 3 it can be seen that with the conventional initialization (α = 1, var = 1/3) the error of the quantum BP neural network decreases rather slowly. When the quantum neural network is initialized with a uniform distribution of smaller variance (e.g. α = 0.6, var = 0.1200), the network output error decreases even more slowly. When a uniform distribution with a larger initialization variance is used (e.g. α = 1.4, var = 0.6533), the convergence of the quantum neural network is effectively accelerated for different numbers of hidden-layer neurons.
Since the parameter initialization distribution is obtained by multiplying the uniform distribution W ~ U[−1, 1] by the factor α, and, from formulas 8-10, the network parameter update is likewise obtained by multiplying the learning rate η by the gradient, the difference between the two is briefly analyzed here. FIG. 4 compares the output error of the network after random parameter initialization with α = 1.8 and learning rate η = 0.2, and with α = 1.0 and learning rate η = 0.2 × 1.8. It can be seen that increasing the random-initialization variance yields a descent speed comparable to, or even faster than, increasing the learning rate; that is, enlarging the variance of the random parameter initialization achieves an effect similar to increasing the learning rate. However, increasing the learning rate easily causes the cost function to oscillate, whereas setting a larger random-initialization variance only changes the initial state of the network: the network starts from a less stable state, which is beneficial to the subsequent pre-training. Therefore, in the invention, the network parameters are initialized by modifying the random-initialization variance of the parameters in the pre-training process, so that a faster pre-training convergence speed is obtained.
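A short sketch of this initialization (formula 11); the function name and random seed are illustrative, and α = 1.4 corresponds to the larger variance discussed above.

```python
import numpy as np

def init_uniform(shape, alpha=1.4, seed=0):
    """Zero-mean uniform initialization W ~ U[-alpha, alpha] (formula 11).
    The variance is alpha**2 / 3: alpha = 1.0 gives the usual var = 1/3,
    while alpha = 1.4 gives var ~ 0.6533, the larger variance that was found
    to accelerate pre-training convergence."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-alpha, alpha, size=shape)

theta0 = init_uniform((32, 64))   # e.g. weight phases of a 64-input, 32-neuron hidden layer
```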
4) Carrying out secondary training of the pre-trained network model obtained in step 3) on the computed hologram to be compressed (not included in the hologram training set) until the network output error meets the set value, obtaining the final compression network; compressing, transmitting and decompressing the computed hologram to be compressed with this network, and then reconstructing the hologram;
in the secondary training process, the image to be compressed is used for training on the network after the pre-training initialization, and at the moment, the network is required to fit the input data as much as possible instead of ensuring the generalization capability of the network to a certain extent. Because the optimized initial network weight obtained in the pre-training process has better generalization capability and is close to the optimal value, the adaptation process of the network to data in the secondary training can be accelerated, so that the convergence speed of the training is improved, and the image compression effect can be improved only by training the network for a few times. Since the pre-training can be performed off-line, the compression time for computing the holographic image is greatly reduced.
The invention includes the analysis and setting of the learning rate for network training in the secondary-training process. The learning rate is one of the hyper-parameters in gradient-based network optimization and determines the step size of the parameter descent. The invention analyzes and compares the effect of adapting the learning rate with the adaptive subgradient method (AdaGrad) [9] and the adaptive moment estimation method (Adam) [10] on the secondary training of the compression network.
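For reference, a minimal sketch of the Adam update used to adapt the learning rate per parameter during the secondary training; the hyper-parameter values are the common defaults from reference [10], not values prescribed by the invention, and one such optimizer instance would be kept for each parameter group (θ, λ, δ).

```python
import numpy as np

class Adam:
    """Minimal adaptive moment estimation (Adam) optimizer sketch."""
    def __init__(self, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m, self.v, self.t = None, None, 0

    def step(self, param, grad):
        """Return the updated parameter given its current value and gradient."""
        if self.m is None:
            self.m, self.v = np.zeros_like(param), np.zeros_like(param)
        self.t += 1
        self.m = self.beta1 * self.m + (1.0 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1.0 - self.beta2) * grad ** 2
        m_hat = self.m / (1.0 - self.beta1 ** self.t)       # bias correction
        v_hat = self.v / (1.0 - self.beta2 ** self.t)
        return param - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
```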
5) Reconstructing the image from the compressed hologram, comparing it with the image reconstructed from the original hologram, and judging the quality of the compression algorithm with an evaluation criterion.
In order to test the compression and the quality of the reconstructed image, the quality of the reconstructed image is measured by the peak signal-to-noise ratio (PSNR), which is used to measure image reconstruction quality after lossy compression coding and is calculated as follows:
PSNR = 10·log10(x_peak² / MSE)   (formula 12)

MSE = (1/(M·N)) Σ_i Σ_j [f(i, j) − f̂(i, j)]²   (formula 13)

In formula 12, MSE denotes the mean square error and x_peak is the peak value of the image. In formula 13, M × N is the size of the reconstructed image, f(i, j) is the image reconstructed from the original computed hologram, and f̂(i, j) is the image reconstructed from the compressed computed hologram; the sums run over i = 1, ..., M and j = 1, ..., N.
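A short sketch of the PSNR evaluation of formulas 12 and 13; the function name is illustrative and an 8-bit peak value of 255 is assumed.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """PSNR between the reconstruction from the original hologram (ref) and the
    reconstruction from the compressed hologram (test), formulas 12 and 13."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```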
In the invention, the numbers of input and output neurons of the quantum neural network are set to 64, and the number K of hidden-layer neurons is varied (32, 16, 8 and 4 in the experiments). The LIVE1 data set [11] is used for pre-training to obtain the initial weights of the compression network, and a lena image is used for the compression transmission based on the quantum BP neural network; the quantum BP neural network used in the pre-training process is consistent with the one used in the compression-transmission process. In the experiments, a set of 20 images randomly selected from the LIVE1 data set was converted to grey scale and normalized to [0, 255], each image was resized to 128 × 128 and its Fresnel off-axis computed hologram was made; each hologram was divided into non-overlapping training samples of size 8 × 8 and converted into vectors of size 1 × 64 (1024 vectors of length 64 per hologram), and these 20 × 1024 × 64 data were used as the input of the quantum neural network to pre-train the network. The parameter settings relevant to the experiment are shown in Table 1.
TABLE 1 relevant parameter settings for making Fresnel off-axis computation holograms
The network weights obtained after pre-training are used as the initial values of the compression-network weights. FIG. 5 shows the computed hologram of the original lena image and its reconstructed image: FIG. 5(a) is the original lena image (128 × 128 pixels); FIG. 5(b) is the computed hologram of the lena image (256 × 256 pixels); FIG. 5(c) is the hologram reconstructed image of the lena image. Following the same processing as for the pre-training data, the digital hologram of FIG. 5(b) is normalized, divided into non-overlapping 8 × 8 training samples and converted into 1 × 64 vectors (1024 in total, i.e. 1024 × 64) as the input of the quantum neural network for compression.
After the quantum neural network is randomly initialized as in reference [2], the above data are used directly for network training with the gradient descent (GD) method (denoted Random + GD). The scheme of the invention uses the result of pre-training as the initial value of the data-compression network and then performs the network training on that basis (denoted Pre-trained + GD).
In the experiment, the maximum cut-off error of the network has to be determined in order to stop the training process; to avoid unnecessary computation time while ensuring basic convergence of the network, the maximum cut-off errors shown in Table 2 were set. The optimal learning rates of the direct random-initialization scheme of reference [2] (Random + GD) and of the scheme proposed by the invention (Pre-trained + GD), determined at different compression ratios and maximum cut-off errors, are also shown in Table 2.
TABLE 2 network maximum cut-off error and optimal learning rate settings during the experiment
Experimental statistics were collected on the number of iterations required to reach the maximum cut-off error at the optimal learning rate and on the quality of the reconstructed image. For the direct random-initialization scheme, because the random initialization of reference [2] makes the final result unstable, averaging over multiple runs was used to reduce errors. Meanwhile, the invention compares the results of training the data-compression network with the AdaGrad and Adam optimization methods on the basis of pre-training.
TABLE 3 PSNR quality graph of reproduced images under different compression rates
Table 3 compares, for each scheme under the same maximum network cut-off error, the number of iterations when training stopped and the PSNR of the reconstructed image. An epoch in the table means that all of the input data has passed through one forward pass and one backward pass. From Table 3 it can be seen that the scheme of the invention needs very few epochs, adapts well to network compression of new data, and basically ensures the quality of the reconstructed image of the compressed data; using the AdaGrad method on the basis of pre-training (Pre-trained + AdaGrad) accelerates the iterative process further. FIG. 6(a)(b) show the hologram and reconstructed image processed by the direct random-initialization scheme (Random + GD); FIG. 6(c)(d) the hologram and reconstructed image processed by the scheme proposed by the invention (Pre-trained + GD); FIG. 6(e)(f) the hologram and reconstructed image after pre-training plus the AdaGrad optimization method (Pre-trained + AdaGrad); FIG. 6(g)(h) the hologram and reconstructed image after pre-training plus the Adam optimization method (Pre-trained + Adam).
Tables 4, 5, 6 and 7 compare the quality of the reconstructed images for each scheme at the same number of iterations after compressing the hologram. It can be seen that the scheme that obtains optimized initial weights by pre-training yields better reconstruction quality of the compressed image from the very beginning, indicating that the initial values are near the optimal solution. The Adam optimization method gives the best reconstructed image quality when the hidden-layer neurons are 32, 16 and 8, so the Adam adaptive learning rate achieves a better effect when the time available for compressing the computed hologram is limited.
TABLE 4 PSNR comparison of reconstructed images for each scheme at the same number of iterations with hidden layer neurons of 32
TABLE 5 PSNR comparison of reconstructed images for each scheme at the same number of iterations with hidden layer neurons 16
TABLE 6 PSNR comparison of reconstructed images for each scheme at the same number of iterations with hidden layer neurons 8
TABLE 7 PSNR comparison of reconstructed images for each scheme at the same number of iterations with hidden layer neurons 4
In summary, the invention provides a method for compressing computer-generated holograms by adopting a quantum neural network with optimized initial weights; the method has promising application prospects in the compression and transmission of three-dimensional holographic information.
It is noted that the disclosed embodiments are intended to aid in further understanding of the invention, but those skilled in the art will appreciate that: various substitutions and modifications are possible without departing from the spirit and scope of the invention and appended claims. Therefore, the invention should not be limited to the embodiments disclosed, but the scope of the invention is defined by the appended claims.
References
1) Guanglin Yang, Chao Zhang, and Haiyan Xie, "Information compression of computer-generated hologram using BP neural network," in Digital Holography and Three-Dimensional Imaging (Optical Society of America), paper JMA2, 2010.
2) Mengjia Liu, Guanglin Yang, and Haiyan Xie, "Method of computer-generated hologram compression and transmission using quantum back-propagation neural network," Optical Engineering, Vol. 56, No. 2, pp. 023104, 2017.
3) Noriaki Kouda, Nobuyuki Matsui, and Haruhiko Nishimura, "Image compression by layered quantum neural networks," Neural Processing Letters, Vol. 16, No. 1, pp. 67-80, 2002.
4) Noriaki Kouda, Nobuyuki Matsui, and Haruhiko Nishimura, "Learning performance of neuron model based on quantum superposition," in Proceedings of the 9th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2000) (IEEE), pp. 112-117, 2000.
5) Noriaki Kouda, Nobuyuki Matsui, Haruhiko Nishimura, and Ferdinand Peper, "Qubit neural network and its learning efficiency," Neural Computing & Applications, Vol. 14, No. 2, pp. 114-121, 2005.
6) Omaima N. A. Al-allaf, "Improving the performance of backpropagation neural network algorithm for image compression/decompression system," Journal of Computer Science, Vol. 6, No. 11, pp. 1347-1354, 2010.
7) Zhoujiajun and Ouzha, "Improved initialization method for deep neural network pre-training," Telecommunication Technology, Vol. 53, No. 7, pp. 895-898, 2013.
8) Xavier Glorot and Yoshua Bengio, "Understanding the difficulty of training deep feedforward neural networks," in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249-256, 2010.
9) John Duchi, Elad Hazan, and Yoram Singer, "Adaptive subgradient methods for online learning and stochastic optimization," Journal of Machine Learning Research, Vol. 12, pp. 2121-2159, 2011.
10) Diederik P. Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization," Computer Science, 2014.
11) Hamid Rahim Sheikh, Zhou Wang, Lawrence Cormack, and Alan C. Bovik, "LIVE Image Quality Assessment Database Release 2," http://live.ece.utexas.edu/research/quality, 2005.

Claims (4)

1. A method for compressing and calculating a hologram by adopting a quantum neural network with optimized initial weight comprises the following specific steps:
1) selecting a common image data set to carry out normalization preprocessing, and generating a Fresnel off-axis calculation hologram training set according to a Fresnel off-axis holographic principle;
2) quantum neurons are utilized to construct a three-layer quantum BP neural network, which comprises an input layer, a hidden layer and an output layer, wherein neurons in the same layer are not connected and quantum neurons between layers are connected with one another;
3) initializing each network parameter of the quantum BP neural network constructed in the step 2) by setting the uniform distribution of zero mean, dividing each holographic image in the Fresnel calculation hologram training set manufactured in the step 1) into a plurality of non-overlapped pixel blocks with the same size and converting the pixel blocks into one-dimensional vectors, and performing pre-training as the input of the quantum BP neural network constructed in the step 2) to obtain a pre-trained network model;
4) dividing the computed hologram to be compressed into a plurality of non-overlapping pixel blocks with the same size and converting the pixel blocks into one-dimensional vectors, performing secondary training on the pre-training network model obtained in the step 3) until the network output error meets a set value to obtain a final compression network, and performing compression transmission and decompression on the computed hologram to be compressed by using the network.
2. The method for compressing a computed hologram according to claim 1, wherein step 2) comprises in particular the steps of:
21) constructing a quantum neuron mathematical model consisting of a one-bit phase shift gate and a two-bit controlled NOT gate;
22) attempting to learn a function of y = f (x) using a quantum BP neural network;
23) solving parameters in the quantum BP neural network constructed in the step 22) by using a back propagation algorithm, and stopping network training when the network output error reaches a set value.
3. The method for compressing a computational hologram according to claim 1, wherein in step 2) the first layer of input layer neurons is set as the original holographic image to be compressed, the second layer of hidden layer neurons is the result of the compression of the holographic image, and the third layer of output layer neurons is the decompressed holographic image, wherein the number of hidden layer neurons is less than the number of input and output layer neurons.
4. The method for compressing a computed hologram according to claim 1, wherein after obtaining the compressed computed hologram in step 4), the fresnel off-axis hologram reconstruction is achieved by an off-axis reference light irradiation process and a fresnel diffraction process.
CN201810409647.8A 2018-05-02 2018-05-02 Method for compressing and calculating hologram by adopting quantum neural network for optimizing initial weight Active CN108881660B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810409647.8A CN108881660B (en) 2018-05-02 2018-05-02 Method for compressing and calculating hologram by adopting quantum neural network for optimizing initial weight

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810409647.8A CN108881660B (en) 2018-05-02 2018-05-02 Method for compressing and calculating hologram by adopting quantum neural network for optimizing initial weight

Publications (2)

Publication Number Publication Date
CN108881660A CN108881660A (en) 2018-11-23
CN108881660B true CN108881660B (en) 2021-03-02

Family

ID=64326860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810409647.8A Active CN108881660B (en) 2018-05-02 2018-05-02 Method for compressing and calculating hologram by adopting quantum neural network for optimizing initial weight

Country Status (1)

Country Link
CN (1) CN108881660B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SG10201904549QA (en) * 2019-05-21 2019-09-27 Alibaba Group Holding Ltd System And Method For Training Neural Networks
JP7279507B2 (en) * 2019-05-21 2023-05-23 富士通株式会社 Information processing device, information processing program and control method
CN110648348B (en) * 2019-09-30 2021-12-28 重庆邮电大学 Quantum image segmentation method based on NEQR expression
CN111144511B (en) * 2019-12-31 2020-10-20 上海云从汇临人工智能科技有限公司 Image processing method, system, medium and electronic terminal based on neural network
CN112085841B (en) * 2020-09-17 2024-04-02 西安交通大学 Digital hologram generating system and method based on deep feedforward neural network
CN116740343A (en) * 2022-03-01 2023-09-12 本源量子计算科技(合肥)股份有限公司 Image segmentation method and device based on quantum classical mixed neural network
CN115761020B (en) * 2022-11-23 2023-05-30 重庆市地理信息和遥感应用中心 Image data compression method based on neural network automatic construction

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101795344B (en) * 2010-03-02 2013-03-27 北京大学 Digital hologram compression method and system, decoding method and system, and transmission method and system
CN103675616B (en) * 2013-11-15 2016-11-23 华南理工大学 The online partial discharge detection signal recognition method of cable
CN105976408A (en) * 2016-04-28 2016-09-28 北京大学 Digital holographic compression transmission method of quantum backward propagation nerve network
US20190188567A1 (en) * 2016-09-30 2019-06-20 Intel Corporation Dynamic neural network surgery
CN106405352A (en) * 2016-11-16 2017-02-15 国网河南省电力公司电力科学研究院 Equivalent salt deposit density (ESDD) prediction and early warning system for power insulator surface contaminant
CN107634943A (en) * 2017-09-08 2018-01-26 中国地质大学(武汉) A kind of weights brief wireless sense network data compression method, equipment and storage device

Also Published As

Publication number Publication date
CN108881660A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
CN108881660B (en) Method for compressing and calculating hologram by adopting quantum neural network for optimizing initial weight
Gai et al. New image denoising algorithm via improved deep convolutional neural network with perceptive loss
Tran et al. GAN-based noise model for denoising real images
Cho Boltzmann machines and denoising autoencoders for image denoising
Al-Allaf Improving the performance of backpropagation neural network algorithm for image compression/decompression system
Al-Allaf Fast Backpropagation Neural Network algorithm for reducing convergence time of BPNN image compression
CN115546060A (en) Reversible underwater image enhancement method
CN115131196A (en) Image processing method, system, storage medium and terminal equipment
CN116524048A (en) Natural image compressed sensing method based on potential diffusion model
CN113763268B (en) Blind restoration method and system for face image
CN117058045A (en) Method, device, system and storage medium for reconstructing compressed image
CN114862699B (en) Face repairing method, device and storage medium based on generation countermeasure network
CN113658330B (en) Holographic encoding method based on neural network
Sadrizadeh et al. Removing impulsive noise from color images via a residual deep neural network enhanced by post-processing
Hou et al. Optimized initial weight in quantum-inspired neural network for compressing computer-generated holograms
Liu et al. Calculating real-time computer-generated holograms for holographic 3D displays through deep learning
CN114998107A (en) Image blind super-resolution network model, method, equipment and storage medium
CN114067015A (en) Pure phase hologram generation method and system combining DNN
Bassey et al. An experimental study of multi-layer multi-valued neural network
Yadav et al. Learning overcomplete representations using leaky linear decoders
Ashin et al. Image Classification in the Era of Deep Learning
Ma et al. A method for compressing computer-generated hologram using genetic algorithm optimized quantum-inspired neural network
Yu et al. Single image de-noising via staged memory network
CN116402916B (en) Face image restoration method and device, computer equipment and storage medium
Ma et al. Compressing Color Computer-Generated Hologram Using Gradient Optimized Quantum-Inspired Neural Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant