CN117353754A - Coding and decoding method, system, equipment and medium of Gaussian mixture model information source - Google Patents

Coding and decoding method, system, equipment and medium of Gaussian mixture model information source

Info

Publication number
CN117353754A
Authority
CN
China
Prior art keywords
floating point
point number
gaussian mixture
mixture model
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311239266.7A
Other languages
Chinese (zh)
Inventor
宋丹
孙养龙
许志平
高志斌
王琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jimei University
Original Assignee
Jimei University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jimei University
Priority to CN202311239266.7A
Publication of CN117353754A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1105Decoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/23Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using convolutional codes, e.g. unit memory codes
    • H03M13/235Encoding of convolutional codes, e.g. methods or arrangements for parallel or block-wise encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received
    • H04L1/0001Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L1/0014Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the source coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received
    • H04L1/004Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0056Systems characterized by the type of code used
    • H04L1/0059Convolutional codes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The application provides an encoding and decoding method, system, device, and medium for a Gaussian mixture model source, in the technical field of information processing. The method comprises the following steps: modeling a plurality of source components as a Gaussian mixture model source and representing the source as an initial floating-point number sequence according to the probability density function of the Gaussian mixture model; compressing the initial floating-point number sequence into a binary sequence with a belief propagation algorithm built on a protograph low-density parity-check (P-LDPC) code; and, when decoding the compressed binary sequence, reconstructing it with a fully connected neural network to obtain the recovered floating-point number sequence of the Gaussian mixture model source. The method offers high data compression efficiency, simple hardware implementation, and low application complexity; it can effectively solve the data compression problem faced by the central node of a wireless sensor network, save the system power consumption and labor cost of the end nodes, and improve the information processing efficiency of the wireless sensor network.

Description

Coding and decoding method, system, equipment and medium of Gaussian mixture model information source
Technical Field
The present disclosure relates to the field of information processing technologies, and in particular to a method, a system, a device, and a medium for encoding and decoding a Gaussian mixture model source.
Background
In the field of signal processing, lossy data compression means representing a source with fewer bits, reducing the bit count by eliminating redundant information. In practical applications, after a sensor node of the Internet of Things acquires information, data compression reduces the resources required to store and transmit that information. For example, when transmitting images, video, and similar information, a communication system needs complex algorithms and hardware; preprocessing the information with data compression can effectively reduce both the algorithmic complexity and the hardware cost.
Lossy source coding is a key technology for realizing data compression in communication systems. It can directly process the data collected by different sensor nodes in an Internet of Things environment, and the data fidelity satisfies Shannon's source coding theorem. After compression, most redundant information is removed and only the most useful feature information is retained, which significantly reduces the data volume and saves communication bandwidth. A lossy source coding system mainly comprises two modules, source encoding and source decoding, which respectively realize lossy compression and recovery of the data.
Typically, source data compression can be implemented either directly at the end nodes of the sensor network with Gaussian source coding, or at the central node with Gaussian mixture model source coding. In practice, however, end nodes are usually scattered in remote locations that are difficult to reach, and having the end-node sensors compress data while collecting it leads to high system power consumption and operating cost. If the end nodes only collect data and forward it to the central node for compression, the energy consumption and cost of the sensors can be effectively reduced, which also eases node deployment in complex real-world scenarios. Once the data collected by the different end-node sensors converges at the central node, it can be modeled as a Gaussian mixture model (Gaussian Mixture Model) source for compression.
Existing work on lossy source coding based on linear block codes (D. Song, J. Ren, L. Wang and G. Chen, "Gaussian source coding based on P-LDPC code," IEEE Transactions on Communications, vol. 71, no. 4, pp. 1970-1981, Apr. 2023) has established a lossy source coding technique for Gaussian sources, but it does not consider lossy source coding of Gaussian mixture model sources.
The current lack of a lossy source coding technique for Gaussian mixture model sources therefore makes it difficult for the central node of a wireless sensor network to perform accurate information processing, which in turn raises the power consumption and cost of information processing at the end nodes and demands substantial hardware development and manual maintenance in practice.
Disclosure of Invention
To solve these problems, the application provides an encoding and decoding method, system, device, and medium for a Gaussian mixture model source, addressing the data compression problem faced by the central node of a wireless sensor network in the related art; it can effectively save the system power consumption and labor cost of end-node information processing and help build a more environmentally friendly wireless sensor network.
In a first aspect, the present application provides a method for encoding and decoding a Gaussian mixture model source, the method comprising:
s1, representing a Gaussian mixture model information source as an initial floating point number sequence according to a probability density function of the Gaussian mixture model information source, wherein the probability density function is obtained by modeling a plurality of information source components conforming to Gaussian distribution according to a preset proportion;
s2, adopting a belief propagation algorithm, taking each floating point in the initial floating point sequence as initial likelihood information of the belief propagation algorithm, carrying out iterative updating on the initial likelihood information according to the connection relation between the variable nodes and the check nodes in the P-LDPC code, and obtaining a binary sequence corresponding to the initial floating point sequence through likelihood value judgment;
s3, inputting the binary sequence into a trained full-connection layer neural network, and mapping the binary sequence into a recovery floating point number sequence of a Gaussian mixture model information source according to a mapping function fitted by the full-connection layer neural network.
In one possible implementation, the P-LDPC code is represented as a protograph matrix formed by the connections between variable nodes and check nodes, and S2 includes:
iteratively updating the initial likelihood information according to the connection relations between the variable nodes and the check nodes in the protograph matrix until the belief propagation algorithm decides correctly on the protograph matrix and/or the upper limit of iterations is reached, and outputting the binary sequence corresponding to the initial floating-point number sequence;
the length of the initial floating point number sequence X is k: x= { X 1 ,x 2 ,...,x k },x 1 To x k Is a real number, k is a positive integer; the compressed binary sequence Y is expressed as: y= { Y 1 ,y 2 ,...,y k },y 1 To y k Taking 0 or 1.
In one possible implementation, the mapping function is expressed as X̂ = W^T·Y + β, where X̂ represents the recovered floating-point number sequence, Y represents the binary sequence, W^T is the weight matrix of the mapping function, and β is the bias vector of the mapping function.
In one possible implementation, the training process of the fully connected neural network includes:
inputting a sample binary sequence into the fully connected neural network and obtaining a training floating-point number sequence according to the sample weight matrix and sample bias vector fitted by the network, wherein the sample binary sequence is obtained by compressing the sample floating-point number sequence corresponding to a given sample Gaussian mixture model source;
taking the mean square distortion function as the loss function of the fully connected neural network and using it to measure the deviation between the sample floating-point number sequence and the training floating-point number sequence;
performing gradient descent on the loss function and iteratively updating the sample weight matrix and the sample bias vector until the loss function meets the training target and/or the preset number of iterations is reached.
In a second aspect, a system for encoding and decoding a Gaussian mixture model source is provided, the system comprising: an encoder based on a P-LDPC code and a decoder constructed on a fully connected neural network;
the encoder is used for: adopting a belief propagation algorithm, taking each floating-point number in the initial floating-point number sequence of a Gaussian mixture model source as initial likelihood information of the belief propagation algorithm, iteratively updating the initial likelihood information according to the connection relations between the variable nodes and the check nodes in the P-LDPC code, and obtaining the binary sequence corresponding to the initial floating-point number sequence through likelihood-value decision;
the initial floating-point number sequence is determined according to the probability density function of the Gaussian mixture model source, and the probability density function is obtained by modeling a plurality of source components conforming to Gaussian distributions mixed in a preset proportion;
the decoder is used for: inputting the binary sequence into a trained fully connected neural network, and mapping the binary sequence into the recovered floating-point number sequence of the Gaussian mixture model source according to the mapping function fitted by the fully connected neural network.
In one possible implementation, the P-LDPC code is represented as a protograph matrix formed by the connections between variable nodes and check nodes, and the encoder is configured to:
iteratively update the initial likelihood information according to the connection relations between the variable nodes and the check nodes in the protograph matrix until the belief propagation algorithm decides correctly on the protograph matrix and/or the upper limit of iterations is reached, and output the binary sequence corresponding to the initial floating-point number sequence;
the initial floating point numberSequence X has length k: x= { X 1 ,x 2 ,...,x k },x 1 To x k Is a real number, k is a positive integer; the compressed binary sequence Y is expressed as: y= { Y 1 ,y 2 ,...,y k },y 1 To y k Taking 0 or 1.
In one possible implementation, the mapping function employed by the decoder is expressed as X̂ = W^T·Y + β, where X̂ represents the recovered floating-point number sequence, Y represents the binary sequence, W^T is the weight matrix of the mapping function, and β is the bias vector of the mapping function.
In one possible implementation, the training process of the fully connected neural network employed by the decoder includes:
inputting a sample binary sequence into the fully connected neural network and obtaining a training floating-point number sequence according to the sample weight matrix and sample bias vector fitted by the network, wherein the sample binary sequence is obtained by compressing the sample floating-point number sequence corresponding to a given sample Gaussian mixture model source;
taking the mean square distortion function as the loss function of the fully connected neural network and using it to measure the deviation between the sample floating-point number sequence and the training floating-point number sequence;
performing gradient descent on the loss function and iteratively updating the sample weight matrix and the sample bias vector until the loss function meets the training target and/or the preset number of iterations is reached.
In a third aspect, a computing device is provided, the computing device comprising a memory and a processor, the memory storing at least one program, the at least one program being executable by the processor to implement the method of encoding and decoding a gaussian mixture model source as provided in the first aspect.
In a fourth aspect, a computer readable storage medium is provided, in which at least one program is stored, the at least one program being executed by a processor to implement the method for encoding and decoding a gaussian mixture model source as provided in the first aspect.
The technical solution provided by the application yields at least the following technical effects:
1. The Gaussian mixture model source is compressed directly, so the scheme can be deployed for information processing at the central node of a wireless sensor network, saving the information processing power consumption and cost of the end nodes and thereby building a more efficient, energy-saving wireless sensor network.
2. Compared with traditional compressed sensing methods, the scheme achieves higher data compression efficiency and effectively reduces the structural and application complexity of the compression system.
3. Because encoding uses the belief propagation algorithm, existing hardware that supports belief propagation can be reused to implement the encoder; the device is simple to realize, saving design and manufacturing costs.
In summary, the application provides, for the Gaussian mixture model source, an encoding and decoding method with high data compression efficiency, simple hardware implementation, and low application complexity. It can effectively solve the data compression problem faced by the central node of a wireless sensor network, greatly save the system power consumption and labor cost of end-node information processing, and improve the information processing efficiency of the wireless sensor network.
Drawings
Fig. 1 is a schematic diagram of an encoding and decoding system for a Gaussian mixture model source according to an embodiment of the present application;
Fig. 2 is a flow chart of an encoding and decoding method of a Gaussian mixture model source according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an encoder according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a decoder according to an embodiment of the present application;
Fig. 5 is a schematic hardware structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
To further illustrate the embodiments, the present application provides the accompanying drawings. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate embodiments and together with the description serve to explain their principles. With reference to them, one of ordinary skill in the art will understand other possible embodiments and the advantages of the present application. The components in the figures are not drawn to scale, and like reference numerals generally designate like components. In this application the term "at least one" means one or more, and "a plurality of" means two or more; for example, a plurality of nodes means two or more nodes.
In a wireless sensor network of the Internet of Things, the data collected by the sensors of different end nodes converges at a central node, where a certain degree of information aliasing occurs. By modeling the data collected by the different end-node sensors as individual Gaussian components, the information converged at the central node can be modeled as a Gaussian mixture model (Gaussian Mixture Model, GMM) source mixing those components. Given that no efficient lossy source coding solution currently exists for Gaussian mixture model sources, this application provides an encoding and decoding method with high data compression efficiency, simple hardware implementation, and low application complexity; it can effectively solve the data compression problem faced by the central node of a wireless sensor network, greatly save the system power consumption and labor cost of end-node information processing, and improve the information processing efficiency of the wireless sensor network.
The present application will now be further described with reference to the drawings and detailed description.
The application provides an encoding and decoding method, system, device, and medium for a Gaussian mixture model source, applicable to data compression scenarios in a wireless sensor network. In the wireless sensor network, each end-node sensor collects source components following Gaussian distributions, which then converge at a central node; the central node models the source components jointly as a Gaussian mixture model source and performs compression, reconstruction, and related processing.
Example 1
The embodiment of the application provides an encoding and decoding system for a Gaussian mixture model source. Fig. 1 is a schematic diagram of the encoding and decoding system of a Gaussian mixture model source according to an embodiment of the present application. Referring to Fig. 1, the system includes: an encoder (lossy source encoder 10) based on a protograph low-density parity-check (P-LDPC) code and a decoder (source decoder 20) constructed on a fully connected neural network. The encoder performs lossy source coding on the Gaussian mixture model source to realize data compression; the decoder performs data recovery to reconstruct the Gaussian mixture model source.
The encoder adopts a belief propagation algorithm, takes each floating-point number in the initial floating-point number sequence of the Gaussian mixture model source as initial likelihood information of the algorithm, iteratively updates the initial likelihood information according to the connection relations between the variable nodes and the check nodes in the P-LDPC code, and obtains the binary sequence corresponding to the initial floating-point number sequence through likelihood-value decision.
The initial floating-point number sequence is determined according to the probability density function of the Gaussian mixture model source, and the probability density function is obtained by modeling a plurality of source components conforming to Gaussian distributions mixed in a preset proportion.
The decoder inputs the binary sequence into the trained fully connected neural network and maps it into the recovered floating-point number sequence of the Gaussian mixture model source according to the mapping function fitted by the network. In some embodiments, the decoder is also referred to as the source decoder.
Illustratively, the encoder may be deployed in a central node of the wireless sensor network. The decoder may be deployed in any node of the wireless sensor network or in any computing device that interacts with the central node. Based on the method, the overall information processing capacity inside the wireless sensor network can be improved, and the external interaction efficiency of the wireless sensor network can be improved.
The encoder and decoder may be implemented as hardware modules or as software programs; the application does not limit the implementation. In some embodiments, because the encoder follows the belief propagation algorithm, existing hardware that supports belief propagation can be reused to implement it, saving design and manufacturing costs.
For the Gaussian mixture model source, this encoding and decoding system offers high data compression efficiency, simple hardware implementation, and low application complexity; it can effectively solve the data compression problem faced by the central node of a wireless sensor network, greatly save the system power consumption and labor cost of end-node information processing, and improve the information processing efficiency of the wireless sensor network.
The following describes in detail the encoding and decoding method of the gaussian mixture model source provided in the present application with reference to the above encoding and decoding system.
Example 2
Fig. 2 is a flow chart of the encoding and decoding method of a Gaussian mixture model source provided in the present application. As shown in Fig. 2, the method includes steps S1 to S3.
S1, representing the Gaussian mixture model source as an initial floating-point number sequence according to the probability density function of the Gaussian mixture model source, wherein the probability density function is obtained by modeling a plurality of source components conforming to Gaussian distributions mixed in a preset proportion.
In this embodiment, step S1 is a data preparation stage, and the obtained initial floating point number sequence may be directly input into the encoder for compression.
Illustratively, the initial floating-point number sequence represents the initial Gaussian mixture model source and is prepared using the probability density function of the Gaussian mixture model. Specifically, the initial floating-point number sequence X has length k (it contains k floating-point numbers) and is expressed as X = {x_1, x_2, ..., x_k}, where x_1 to x_k are real numbers and k is a positive integer.
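As a concrete illustration, the following minimal NumPy sketch draws such a sequence from a Gaussian mixture model; the function name and the example component weights, means, and standard deviations are hypothetical choices for illustration, not values taken from this application.

```python
import numpy as np

def sample_gmm_source(k, weights, means, stds, rng=None):
    """Draw an initial floating-point sequence X of length k from a
    Gaussian mixture model source. `weights` are the preset mixing
    proportions of the source components; `means` and `stds`
    parameterize the individual Gaussian components."""
    rng = np.random.default_rng() if rng is None else rng
    # Choose a Gaussian component for each sample per the preset proportions.
    comps = rng.choice(len(weights), size=k, p=weights)
    # Draw each sample from its selected component.
    return rng.normal(np.take(means, comps), np.take(stds, comps))

# Example: two source components mixed 3:7, as converged at a central node.
X = sample_gmm_source(1024, weights=[0.3, 0.7], means=[-2.0, 1.5], stds=[1.0, 0.5])
```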
S2, the encoder adopts a belief propagation algorithm, takes each floating-point number in the initial floating-point number sequence as initial likelihood information of the algorithm, iteratively updates the initial likelihood information according to the connection relations between the variable nodes and the check nodes in the P-LDPC code, and obtains the binary sequence corresponding to the initial floating-point number sequence through likelihood-value decision.
In the embodiment of the present application, the stopping conditions of the iterative update are: the belief propagation algorithm decides correctly on the protograph matrix and/or the upper limit of iterations is reached. The P-LDPC code is represented as a protograph matrix formed by the connections between variable nodes and check nodes; the encoder iteratively updates the initial likelihood information according to those connection relations until the belief propagation algorithm decides correctly on the protograph matrix and/or the iteration limit is reached, and then outputs the binary sequence corresponding to the initial floating-point number sequence. A correct decision means that the binary sequence produced by this iteration preserves the source information of the initial floating-point number sequence well, improving source fidelity while saving transmission bandwidth.
Illustratively, the compressed binary sequence Y is expressed as Y = {y_1, y_2, ..., y_k}, where y_1 to y_k take the value 0 or 1 and k is a positive integer. It can be seen that the number of binary digits in the compressed binary sequence Y equals the number of floating-point numbers in the initial floating-point number sequence, keeping the dimensions consistent before and after compression.
Fig. 3 is a schematic structural diagram of the encoder according to an embodiment of the present application. As shown in Fig. 3, the lossy source encoder provided in the present application adopts a belief propagation (Belief Propagation, BP) algorithm, which iterates likelihood information according to the connection relations between the check nodes and the variable nodes in the P-LDPC code, thereby compressing the floating-point numbers of the initial Gaussian mixture model source sequence (the initial floating-point number sequence X) into bits (the binary sequence Y).
Referring to Fig. 3, the P-LDPC code is represented as a protograph matrix of dimension m×n, or equivalently as a Tanner graph with m check nodes and n variable nodes, where the degrees in the protograph matrix give the numbers of connecting edges between nodes; m and n may be any positive integers. Representing the codeword as a graph model and then performing message passing and inference over the node connections is what enables the data compression.
Specifically, in the embodiment of the present application, each floating-point number in the initial floating-point number sequence serves as initial likelihood information (a floating-point value representing probability information and used as the basis for decisions). The belief propagation algorithm iterates and updates the likelihood information along the connections of the Tanner graph, and after multiple iterations each floating-point number is compressed into a bit.
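For orientation, the sketch below runs a standard belief-propagation message-passing loop (the tanh rule) over a small parity-check matrix, treating the floating-point samples as initial log-likelihood ratios and thresholding the posteriors to bits once every check is satisfied or the iteration limit is reached. This is a simplified illustration under those assumptions only; the protograph lifting and the exact update schedule of this application are not reproduced, and all names are hypothetical.

```python
import numpy as np

def bp_compress(H, llr_init, max_iter=50):
    """Simplified belief-propagation iteration over a parity-check matrix
    H (m x n, entries 0/1). Floating-point samples enter as initial
    log-likelihood ratios (LLRs); variable- and check-node messages are
    updated until every check is satisfied or max_iter is reached, then
    each posterior LLR is thresholded to a bit."""
    edges = [tuple(e) for e in np.argwhere(H == 1)]      # (check, variable) pairs
    v2c = np.array([float(llr_init[v]) for _, v in edges])
    bits = (np.asarray(llr_init) < 0).astype(int)
    for _ in range(max_iter):
        # Check-node update (tanh rule): combine the check's other edges.
        c2v = np.empty_like(v2c)
        for i, (c, v) in enumerate(edges):
            others = [j for j, (c2, _) in enumerate(edges) if c2 == c and j != i]
            prod = np.prod(np.tanh(v2c[others] / 2.0)) if others else 1.0
            c2v[i] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # Variable-node update: initial LLR plus the other incoming messages.
        for i, (c, v) in enumerate(edges):
            others = [j for j, (_, v2) in enumerate(edges) if v2 == v and j != i]
            v2c[i] = llr_init[v] + c2v[others].sum()
        # Posterior LLRs and hard decision at every variable node.
        post = np.asarray(llr_init, dtype=float).copy()
        for (c, v), msg in zip(edges, c2v):
            post[v] += msg
        bits = (post < 0).astype(int)
        if not np.any((H @ bits) % 2):                   # all checks satisfied
            break
    return bits

# Toy usage: a 2x4 parity-check matrix and four LLR-scaled source samples.
H = np.array([[1, 1, 0, 1], [0, 1, 1, 1]])
Y = bp_compress(H, llr_init=np.array([0.8, -1.2, 0.4, 2.0]))
```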
S3, the decoder inputs the binary sequence into the trained fully connected neural network and maps it into the recovered floating-point number sequence of the Gaussian mixture model source according to the mapping function fitted by the network.
In the embodiment of the present application, the mapping function is expressed as X̂ = W^T·Y + β, where X̂ represents the recovered floating-point number sequence, Y represents the binary sequence, W^T is the weight matrix of the mapping function, and β is the bias vector of the mapping function.
Fig. 4 is a schematic structural diagram of the decoder according to an embodiment of the present application. As shown in Fig. 4, the decoder is a fully connected linear network built from an input layer and an output layer, used to recover the Gaussian mixture model source sequence. In other embodiments there may of course be additional hidden layers to provide greater fitting capability; the application is not limited in this respect.
The fully connected linear network formed by the input layer and the output layer realizes the mapping function X̂ = W^T·Y + β, where X̂ represents the recovered floating-point number sequence, Y represents the binary sequence, W^T is the weight matrix of the mapping function, and β is the bias vector (a column vector) of the mapping function. The training process of the fully connected network computes a suitable weight matrix W^T and bias vector β from the initial floating-point number sequences X of given Gaussian mixture model sources and the binary sequences Y obtained by compressing them, fitting a linear relation matched to the Gaussian mixture model source.
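Expressed as code, the decoding step is a single affine map. The sketch below assumes NumPy and hypothetical names, with W and beta taken from training:

```python
import numpy as np

def decode(Y, W, beta):
    """Map a compressed binary sequence Y (length k, entries 0/1) to the
    recovered floating-point sequence X_hat via X_hat = W^T Y + beta,
    where W is the trained k x k weight matrix and beta the bias vector."""
    return W.T @ Y + beta
```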
Further, in the embodiment of the present application, before the decoder is applied for data recovery, the training process of the fully connected neural network includes:
Step A, inputting a sample binary sequence into the fully connected neural network and obtaining a training floating-point number sequence according to the sample weight matrix and sample bias vector fitted by the network, the sample binary sequence being obtained by compressing the sample floating-point number sequence corresponding to a given sample Gaussian mixture model source.
Step B, taking the mean square distortion function as the loss function of the fully connected neural network, performing gradient descent on the loss function, and iteratively updating the sample weight matrix and the sample bias vector until the loss function meets the training target and/or the preset number of iterations is reached.
The loss function measures the deviation between the sample floating-point number sequence and the training floating-point number sequence.
Specifically, in training the fully connected mapping function, the mean square distortion is used as the loss function to be minimized: L = E[(X − X̂)²], where E is the expectation (mean) operator, X̂ represents the training floating-point number sequence (i.e., the floating-point sequence recovered during training), and X represents the sample floating-point number sequence (the given training label).
The weight matrix and bias vector are updated continuously by performing gradient descent on the loss function. Substituting the trained parameters into the mapping function of the fully connected network recovers the data of an input binary sequence Y, yielding the recovered Gaussian mixture model source sequence X̂ = {x̂_1, x̂_2, ..., x̂_k} (the recovered floating-point number sequence), where x̂_1 to x̂_k are real numbers.
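A minimal NumPy training loop under these definitions might look as follows; the batch shapes, learning rate, and epoch count are illustrative assumptions, and the gradients are those of the mean square distortion L = E[(X − X̂)²] with X̂ = W^T·Y + β.

```python
import numpy as np

def train_decoder(Y_samples, X_samples, lr=1e-2, epochs=500):
    """Fit the weight matrix W and bias vector beta by gradient descent on
    the mean square distortion between the sample floating-point sequences
    X_samples (N x k training labels) and the training sequences
    X_hat = W^T y + beta computed from the sample binary sequences
    Y_samples (N x k, entries 0/1)."""
    N, k = Y_samples.shape
    W = np.random.default_rng(0).normal(scale=0.01, size=(k, k))
    beta = np.zeros(k)
    for _ in range(epochs):
        X_hat = Y_samples @ W + beta       # row-wise X_hat = W^T y + beta
        err = X_hat - X_samples            # deviation from the labels
        loss = np.mean(err ** 2)           # mean square distortion L
        # Gradients of L with respect to W and beta.
        grad_W = (2.0 / (N * k)) * (Y_samples.T @ err)
        grad_beta = (2.0 / k) * err.mean(axis=0)
        W -= lr * grad_W
        beta -= lr * grad_beta
    return W, beta, loss
```

A fixed epoch count stands in here for the stopping conditions described above (training target met and/or preset iteration count reached).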
On one hand, the technical solution of the application compresses the Gaussian mixture model source directly, so it can be deployed for information processing at the central node of a wireless sensor network, saving the information processing power consumption and cost of the end nodes and thereby building a more efficient, energy-saving wireless sensor network. On the other hand, it compresses the floating-point number sequence of the Gaussian mixture model directly into a binary sequence; compared with traditional compressed sensing methods, this achieves higher compression efficiency and effectively reduces the structural and implementation complexity of the compression system.
In conclusion, for Gaussian mixture model sources, the application constructs an efficient lossy source encoding and decoding method from P-LDPC codes, the belief propagation algorithm, and a fully connected network, characterized by high data compression efficiency, simple hardware implementation, and low application complexity.
The present application also provides a computing device that may be used to implement the encoder and/or the decoder described above. Fig. 5 is a schematic hardware diagram of a computing device provided in an embodiment of the present application. As shown in Fig. 5, the computing device includes a processor 501, a memory 502, a bus 503, and a computer program stored in the memory 502 and runnable on the processor 501. The processor 501 includes one or more processing cores; the memory 502 is connected to the processor 501 through the bus 503 and stores the program instructions. When the processor executes the computer program, it implements all or part of the steps of the method embodiments provided above.
Further, as an implementation, the computing device may be a computer unit, such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The computer unit may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that this structure is merely an example and not limiting; the unit may include more or fewer components, combine certain components, or use different components. For example, the computer unit may further include input/output devices, network access devices, buses, and so on, which are not limited in this embodiment of the application.
Further, as an implementation, the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, and the like. A general-purpose processor may be a microprocessor or any conventional processor; it is the control center of the computer unit, connecting the various parts of the entire unit through various interfaces and lines.
The memory may be used to store the computer program and/or modules; the processor implements the various functions of the computer unit by running or executing the computer program and/or modules stored in the memory and invoking the data stored there. The memory may mainly include a program storage area and a data storage area: the program storage area may store the operating system and at least one application required for a function, while the data storage area may store data created during use of the device. In addition, the memory may include high-speed random access memory as well as non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The present application also provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the methods described above in the embodiments of the present application.
The modules/units integrated in the computer unit may be stored in a computer-readable storage medium if implemented as software functional units and sold or used as separate products. On this understanding, the present application may implement all or part of the flow of the above method embodiments through a computer program instructing the relevant hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor implements the steps of each method embodiment described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), a software distribution medium, and so forth. Note that the content of the computer-readable medium may be adjusted as required by legislation and patent practice in each jurisdiction.
While this application has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the application as defined by the appended claims.

Claims (10)

1. A method for encoding and decoding a Gaussian mixture model source, the method comprising:
s1, representing a Gaussian mixture model information source as an initial floating point number sequence according to a probability density function of the Gaussian mixture model information source, wherein the probability density function is obtained by modeling a plurality of information source components conforming to Gaussian distribution according to a preset proportion;
s2, adopting a belief propagation algorithm, taking each floating point in the initial floating point sequence as initial likelihood information of the belief propagation algorithm, carrying out iterative updating on the initial likelihood information according to the connection relation between variable nodes and check nodes in the P-LDPC code, and obtaining a binary sequence corresponding to the initial floating point sequence through likelihood value judgment;
s3, inputting the binary sequence into a trained full-connection layer neural network, and mapping the binary sequence into a recovery floating point number sequence of a Gaussian mixture model information source according to a mapping function fitted by the full-connection layer neural network.
2. The encoding and decoding method of claim 1, wherein the P-LDPC code is represented as a protograph matrix formed by the connections between variable nodes and check nodes, and S2 includes:
iteratively updating the initial likelihood information according to the connection relations between the variable nodes and the check nodes in the protograph matrix until the belief propagation algorithm decides correctly on the protograph matrix and/or the upper limit of iterations is reached, and outputting the binary sequence corresponding to the initial floating-point number sequence;
the length of the initial floating point number sequence X is k: x= { X 1 ,x 2 ,...,x k },x 1 To x k Is a real number, k is a positive integer; the compressed binary sequence Y is expressed as: y= { Y 1 ,y 2 ,...,y k },y 1 To y k Taking 0 or 1.
3. The encoding and decoding method of claim 1, wherein the mapping function is expressed as X̂ = W^T·Y + β, where X̂ represents the recovered floating-point number sequence, Y represents the binary sequence, W^T is the weight matrix of the mapping function, and β is the bias vector of the mapping function.
4. The encoding and decoding method of claim 3, wherein the training process of the fully connected neural network comprises:
inputting a sample binary sequence into the fully connected neural network and obtaining a training floating-point number sequence according to the sample weight matrix and sample bias vector fitted by the network, wherein the sample binary sequence is obtained by compressing the sample floating-point number sequence corresponding to a given sample Gaussian mixture model source;
taking the mean square distortion function as the loss function of the fully connected neural network, performing gradient descent on the loss function, and iteratively updating the sample weight matrix and the sample bias vector until the loss function meets the training target and/or the preset number of iterations is reached, wherein the loss function is used to calculate the deviation between the sample floating-point number sequence and the training floating-point number sequence.
5. A system for encoding and decoding a Gaussian mixture model source, the system comprising: an encoder based on a P-LDPC code and a decoder constructed on a fully connected neural network;
the encoder is used for: adopting a belief propagation algorithm, taking each floating-point number in the initial floating-point number sequence of a Gaussian mixture model source as initial likelihood information of the belief propagation algorithm, iteratively updating the initial likelihood information according to the connection relations between the variable nodes and the check nodes in the P-LDPC code, and obtaining the binary sequence corresponding to the initial floating-point number sequence through likelihood-value decision;
the initial floating-point number sequence is determined according to the probability density function of the Gaussian mixture model source, and the probability density function is obtained by modeling a plurality of source components conforming to Gaussian distributions mixed in a preset proportion;
the decoder is used for: inputting the binary sequence into a trained fully connected neural network, and mapping the binary sequence into the recovered floating-point number sequence of the Gaussian mixture model source according to the mapping function fitted by the fully connected neural network.
6. The encoding and decoding system of claim 5, wherein the P-LDPC code is represented as a protograph matrix formed by the connections between variable nodes and check nodes, and the encoder is configured to:
iteratively update the initial likelihood information according to the connection relations between the variable nodes and the check nodes in the protograph matrix until the belief propagation algorithm decides correctly on the protograph matrix and/or the upper limit of iterations is reached, and output the binary sequence corresponding to the initial floating-point number sequence;
the length of the initial floating point number sequence X is k: x= { X 1 ,x 2 ,...,x k },x 1 To x k Is a real number, k is a positive integer; the compressed binary sequence Y is expressed as: y= { Y 1 ,y 2 ,...,y k },y 1 To y k Taking 0 or 1.
7. The encoding and decoding system of claim 5, wherein the mapping function employed by the decoder is expressed as X̂ = W^T·Y + β, where X̂ represents the recovered floating-point number sequence, Y represents the binary sequence, W^T is the weight matrix of the mapping function, and β is the bias vector of the mapping function.
8. The encoding and decoding system of claim 7, wherein the training process of the fully connected neural network employed by the decoder comprises:
inputting a sample binary sequence into the fully connected neural network and obtaining a training floating-point number sequence according to the sample weight matrix and sample bias vector fitted by the network, wherein the sample binary sequence is obtained by compressing the sample floating-point number sequence corresponding to a given sample Gaussian mixture model source;
taking the mean square distortion function as the loss function of the fully connected neural network, performing gradient descent on the loss function, and iteratively updating the sample weight matrix and the sample bias vector until the loss function meets the training target and/or the preset number of iterations is reached, wherein the loss function is used to calculate the deviation between the sample floating-point number sequence and the training floating-point number sequence.
9. A computing device comprising a memory and a processor, the memory storing at least one program, the at least one program being executed by the processor to implement the encoding and decoding method of any one of claims 1 to 4.
10. A computer-readable storage medium, wherein at least one program is stored in the storage medium, the at least one program being executed by a processor to implement the encoding and decoding method of any one of claims 1 to 4.
CN202311239266.7A 2023-09-25 2023-09-25 Coding and decoding method, system, equipment and medium of Gaussian mixture model information source Pending CN117353754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311239266.7A CN117353754A (en) 2023-09-25 2023-09-25 Coding and decoding method, system, equipment and medium of Gaussian mixture model information source

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311239266.7A CN117353754A (en) 2023-09-25 2023-09-25 Coding and decoding method, system, equipment and medium of Gaussian mixture model information source

Publications (1)

Publication Number Publication Date
CN117353754A 2024-01-05

Family

ID=89368235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311239266.7A Pending CN117353754A (en) 2023-09-25 2023-09-25 Coding and decoding method, system, equipment and medium of Gaussian mixture model information source

Country Status (1)

Country Link
CN (1) CN117353754A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117650873A (en) * 2024-01-30 2024-03-05 集美大学 Performance analysis method of information source channel joint coding system
CN117650873B (en) * 2024-01-30 2024-03-29 集美大学 Performance analysis method of information source channel joint coding system

Similar Documents

Publication Publication Date Title
TWI419481B (en) Low density parity check codec and method of the same
CN113424202A (en) Adjusting activation compression for neural network training
CN113273082A (en) Neural network activation compression with exception block floating point
WO2018033823A1 (en) Efficient reduction of resources for the simulation of fermionic hamiltonians on quantum hardware
CN112991472B (en) Image compressed sensing reconstruction method based on residual error dense threshold network
Žalik et al. Chain code lossless compression using move-to-front transform and adaptive run-length encoding
WO2020154083A1 (en) Neural network activation compression with non-uniform mantissas
CN105260776A (en) Neural network processor and convolutional neural network processor
CN117353754A (en) Coding and decoding method, system, equipment and medium of Gaussian mixture model information source
JP7408799B2 (en) Neural network model compression
CN105763203B (en) Multi-element LDPC code decoding method based on hard reliability information
CN114282678A (en) Method for training machine learning model and related equipment
CN113595993A (en) Vehicle-mounted sensing equipment joint learning method for model structure optimization under edge calculation
CN110751265A (en) Lightweight neural network construction method and system and electronic equipment
JP2016536947A (en) Method and apparatus for reconstructing data blocks
CN115664899A (en) Channel decoding method and system based on graph neural network
Deng et al. Reduced-complexity deep neural network-aided channel code decoder: A case study for BCH decoder
WO2022246986A1 (en) Data processing method, apparatus and device, and computer-readable storage medium
CN113852443B (en) Low-complexity multi-user detection method in SCMA system
CN114510609A (en) Method, device, equipment, medium and program product for generating structure data
WO2017045142A1 (en) Decoding method and decoding device for ldpc truncated code
Li et al. Towards communication-efficient digital twin via AI-powered transmission and reconstruction
TW202406344A (en) Point cloud geometry data augmentation method and apparatus, encoding method and apparatus, decoding method and apparatus, and encoding and decoding system
WO2023159820A1 (en) Image compression method, image decompression method, and apparatuses
CN113872610B (en) LDPC code neural network training and decoding method and system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination