CN114039718B - Hash coding method and system of self-adaptive weighted probability model - Google Patents


Info

Publication number
CN114039718B
CN114039718B (application CN202111208527.XA)
Authority
CN
China
Prior art keywords
sequence
coding
binary
hash
symbol
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111208527.XA
Other languages
Chinese (zh)
Other versions
CN114039718A (en)
Inventor
Wang Jielin (王杰林)
Current Assignee
Hunan Yaosheng Communication Technology Co ltd
Original Assignee
Hunan Yaosheng Communication Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hunan Yaosheng Communication Technology Co ltd
Priority to CN202111208527.XA
Publication of CN114039718A
Application granted
Publication of CN114039718B


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 — Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/06 — the encryption apparatus using shift registers or memories for block-wise or stream coding, e.g. DES systems or RC4; Hash functions; Pseudorandom sequence generators
    • H04L9/0643 — Hash functions, e.g. MD5, SHA, HMAC or f9 MAC
    • H04L2209/00 — Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L9/00
    • H04L2209/34 — Encoding or coding, e.g. Huffman coding or error correction

Abstract

The invention discloses a Hash coding method and a Hash coding system of an adaptive weighted probability model. Unlike the prior art, the weighted probabilities p̄(0) and p̄(1) used in weighted probability model coding change adaptively with the already-coded symbols of the currently coded binary subsequence, i.e. p̄(0) = r·C₀/α and p̄(1) = r·C₁/α. The method is therefore suited to a "stream" coding mode, supports data-stream coding in units of bytes, and realizes adaptive adjustment based on the coding sequence.

Description

Hash coding method and system of self-adaptive weighted probability model
Technical Field
The invention relates to the technical field of information communication, in particular to a Hash coding method and a Hash coding system of a self-adaptive weighted probability model.
Background
A Hash algorithm compresses a message of arbitrary length to a message digest of fixed length, and is widely applied in fields such as digital signature, file verification and information encryption. Current standard Hash algorithms mainly comprise several large families: the MD (Message-Digest) algorithms, the SHA (Secure Hash Algorithm) algorithms, lattice-based hash algorithms, MAC (Message Authentication Code) algorithms, and so on. The MD family mainly comprises MD2, MD4 and MD5; the main SHA algorithms comprise the SHA-1 and SHA-2 (SHA-224, SHA-256, SHA-384, SHA-512) series and SHA-3 (the KECCAK algorithm). The MAC algorithms are mainly the keyed hash functions of HMAC, which add a key on top of the original MD and SHA algorithms, chiefly the HmacSHA series (HmacSHA1, HmacSHA224, HmacSHA256, HmacSHA384, HmacSHA512); their digest lengths are identical to those of the original MD and SHA series. Because the digest lengths differ and the internal operation structures of the algorithms differ, a system needs to integrate a large number of hash algorithms with different digest lengths to meet its security requirements, which wastes system resources and cost. With the advent of quantum computing, computing performance is greatly improved, and security can only be ensured by adopting hash algorithms with longer message digests or with more robust operation structures. However, the longer the message digest, the greater the burden on network transmission, verification operations and storage, and the greater the computational burden of hash algorithms with complex operation structures.
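As a point of reference for the digest lengths listed above, a short sketch (Python and its standard hashlib/hmac modules are used here purely for illustration) shows the fixed digest sizes of these families, and that HMAC inherits the digest length of its underlying hash:

```python
import hashlib
import hmac

# Digest sizes (in bits) of several standard hash algorithms mentioned above.
for name in ("md5", "sha1", "sha224", "sha256", "sha384", "sha512", "sha3_256"):
    h = hashlib.new(name, b"message")
    print(name, h.digest_size * 8)

# HMAC keeps the digest length of the hash function it wraps.
mac = hmac.new(b"secret-key", b"message", hashlib.sha256)
assert mac.digest_size == hashlib.sha256().digest_size  # 32 bytes = 256 bits
```

This illustrates the point in the paragraph above: each algorithm fixes its own digest length, so supporting many lengths means integrating many distinct algorithms.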
In the prior art scheme (i.e. the "Jielin code" related scheme), the probabilities of symbol 0 and symbol 1 are generally counted first over the input (for example, 1000 bits), and the statistical probabilities are then used to code the data through a weighted probability model. The prior art scheme therefore cannot perform "data-stream" computation, and there is still room for improvement.
Disclosure of Invention
The present invention aims to at least solve the technical problems existing in the prior art. Therefore, the invention provides a Hash coding method and a Hash coding system of a self-adaptive weighted probability model. The method can be suitable for a 'stream' coding mode, supports data stream coding taking bytes as a unit, and realizes self-adaptive adjustment based on a coding sequence.
The first aspect of the present invention provides a Hash coding method of an adaptive weighted probability model, applied to a coding end, comprising the following steps:
Step S101, acquire a binary source sequence X of sequence length n, randomly generate an m²-bit first two-dimensional table and a second two-dimensional table, and randomly generate a hash value of length L;
Step S102, linearly divide the binary source sequence X into ⌈n/m²⌉ segment subsequences, each of sequence length v; set three statistical variables i, j and T, where the initial value of i is 0, the initial value of j is 1, and the initial value of T is 0;
Step S103, obtain the i-th symbol X_i in the j-th segment subsequence of the binary source sequence X;
Step S104, calculate x = (i + X_i) mod m and y = (i + X_i)/m, where mod denotes the modulo operation and the division is integer division;
Step S105, look up the corresponding f(x, y) in the first two-dimensional table according to x and y, where f(x, y) is the value at coordinates (x, y) in the first two-dimensional table;
Step S106, calculate Y_i = X_i ⊕ f(x, y) and set i = i + 1; if i ≤ v, jump to step S103; if i > v, obtain the binary sequence Y corresponding to the j-th segment subsequence and enter step S107, where ⊕ denotes the exclusive-or operation;
Step S107, set i to 0, and calculate p = c/v and H(Y) = −p log₂ p − (1 − p) log₂(1 − p), where c is the number of symbols 0 in the binary sequence Y and H(Y) is the information entropy of the binary sequence Y;
Step S108, calculate x = (i + X_i) mod m and y = (i + X_i)/m;
Step S109, look up the corresponding g(x, y) in the second two-dimensional table according to x and y, where g(x, y) is the value at coordinates (x, y) in the second two-dimensional table;
Step S110, obtain the dynamic weighting coefficient r from g(x, y) through the nonlinear round function, and calculate the weighted probabilities p̄(0) = r·C₀/α and p̄(1) = r·C₁/α, where s is an integer greater than 3 used by the round function; p̄(0) and p̄(1) are the weighted probabilities of symbol 0 and symbol 1 respectively; r is the weighting coefficient; C₀ is the number of symbols 0 among the symbols of the binary sequence Y already coded before coding X_i, with initial value 1; C₁ is the number of symbols 1 among the symbols already coded, with initial value 1; and α is the total number of symbols of the binary sequence Y already coded before coding X_i, with initial value 2;
Step S111, if Y_i = 0, set R_i = R_{i−1}·p̄(0), L_i = L_{i−1}, C₀ = C₀ + 1 and α = α + 1; otherwise set L_i = L_{i−1} + R_{i−1}·p̄(0), R_i = R_{i−1}·p̄(1), C₁ = C₁ + 1 and α = α + 1, where R_i, R_{i−1}, L_i and L_{i−1} are coding variables with R₀ = 1 and L₀ = 0;
Step S112, set i = i + 1; if i ≤ v, jump to step S108; if i > v, obtain the coding result L_v corresponding to the binary sequence Y;
Step S113, let L_v = L_v + T and T = L_v;
Step S117, set j = j + 1; if j ≤ ⌈n/m²⌉, jump to step S103; if j > ⌈n/m²⌉, end the coding to obtain the ciphertext corresponding to the binary source sequence X;
Step S118, send the ciphertext to the decoding end.
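The per-segment flow of the steps above can be sketched as follows. This is an illustrative Python sketch, not the patented implementation: the function name and the table representation are assumptions, the dynamic round function of formula (24) is not reproduced here (r is held static within a segment), the segment's base r follows the embodiment's substitution of formula (20), r = 2^(H(Y) − L/v), and the reduction of the index y modulo m is an assumption about how the table edge is handled.

```python
import math
import random

def encode_segment(seg, f_table, L_hash, m=16):
    """Sketch of one segment of the method (steps S103-S113), assumed names.

    seg     : list of bits (the j-th segment subsequence)
    f_table : 16x16 table of random bits (the first two-dimensional table)
    L_hash  : the chosen hash length L
    """
    v = len(seg)
    # First pass (S103-S106): XOR each symbol with a table bit f(x, y).
    Y = []
    for i, xi in enumerate(seg):          # the text counts i from 1; 0-based here
        x = (i + xi) % m
        y = ((i + xi) // m) % m           # "% m" is an assumed edge handling
        Y.append(xi ^ f_table[x][y])
    # S107: entropy of Y, then base r per formula (20): r = 2^(H(Y) - L/v).
    p = Y.count(0) / v
    H = 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    r = 2 ** (H - L_hash / v)
    # Second pass (S110-S112): adaptive weighted interval coding.
    C0 = C1 = 1                           # adaptive counts, initial value 1
    alpha = 2                             # total symbols coded so far + 2
    Lo, R = 0.0, 1.0                      # coding variables L_0 = 0, R_0 = 1
    for yi in Y:
        p0 = r * C0 / alpha               # weighted probability of symbol 0
        if yi == 0:
            R *= p0
            C0 += 1
        else:
            Lo += R * p0                  # shift past the symbol-0 subinterval
            R *= r * C1 / alpha           # weighted probability of symbol 1
            C1 += 1
        alpha += 1
    return Lo                             # the coding result L_v for this segment
```

Note how the weighted probabilities are recomputed from C₀, C₁ and α at every symbol, which is exactly the "adaptive" property the claims emphasize: no pre-counting of the whole sequence is needed.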
In a second aspect of the present invention, there is provided a Hash coding system of an adaptive weighted probability model, comprising:
a data acquisition unit, configured to acquire a binary source sequence X of sequence length n, randomly generate an m²-bit first two-dimensional table and a second two-dimensional table, and randomly generate a hash value of length L;
a data encoding unit, configured to execute the following steps:
Step S102, linearly divide the binary source sequence X into ⌈n/m²⌉ segment subsequences, each of sequence length v; set three statistical variables i, j and T, where the initial value of i is 0, the initial value of j is 1, and the initial value of T is 0;
Step S103, obtain the i-th symbol X_i in the j-th segment subsequence of the binary source sequence X;
Step S104, calculate x = (i + X_i) mod m and y = (i + X_i)/m, where mod denotes the modulo operation and the division is integer division;
Step S105, look up the corresponding f(x, y) in the first two-dimensional table according to x and y, where f(x, y) is the value at coordinates (x, y) in the first two-dimensional table;
Step S106, calculate Y_i = X_i ⊕ f(x, y) and set i = i + 1; if i ≤ v, jump to step S103; if i > v, obtain the binary sequence Y corresponding to the j-th segment subsequence and enter step S107, where ⊕ denotes the exclusive-or operation;
Step S107, set i to 0, and calculate p = c/v and H(Y) = −p log₂ p − (1 − p) log₂(1 − p), where c is the number of symbols 0 in the binary sequence Y and H(Y) is the information entropy of the binary sequence Y;
Step S108, calculate x = (i + X_i) mod m and y = (i + X_i)/m;
Step S109, look up the corresponding g(x, y) in the second two-dimensional table according to x and y, where g(x, y) is the value at coordinates (x, y) in the second two-dimensional table;
Step S110, obtain the dynamic weighting coefficient r from g(x, y) through the nonlinear round function, and calculate the weighted probabilities p̄(0) = r·C₀/α and p̄(1) = r·C₁/α, where s is an integer greater than 3 used by the round function; p̄(0) and p̄(1) are the weighted probabilities of symbol 0 and symbol 1 respectively; r is the weighting coefficient; C₀ is the number of symbols 0 among the symbols of the binary sequence Y already coded before coding X_i, with initial value 1; C₁ is the number of symbols 1 among the symbols already coded, with initial value 1; and α is the total number of symbols of the binary sequence Y already coded before coding X_i, with initial value 2;
Step S111, if Y_i = 0, set R_i = R_{i−1}·p̄(0), L_i = L_{i−1}, C₀ = C₀ + 1 and α = α + 1; otherwise set L_i = L_{i−1} + R_{i−1}·p̄(0), R_i = R_{i−1}·p̄(1), C₁ = C₁ + 1 and α = α + 1, where R_i, R_{i−1}, L_i and L_{i−1} are coding variables with R₀ = 1 and L₀ = 0;
Step S112, set i = i + 1; if i ≤ v, jump to step S108; if i > v, obtain the coding result L_v corresponding to the binary sequence Y;
Step S113, let L_v = L_v + T and T = L_v;
Step S117, set j = j + 1; if j ≤ ⌈n/m²⌉, jump to step S103; if j > ⌈n/m²⌉, end the coding to obtain the ciphertext corresponding to the binary source sequence X;
and a data transmitting unit, configured to send the ciphertext to the decoding end.
According to the embodiment of the invention, at least the following technical effects are achieved:
Unlike the prior art scheme, the weighted probabilities p̄(0) and p̄(1) used in weighted probability model coding change adaptively with the already-coded symbols of the currently coded binary subsequence (in the existing scheme they are fixed values based on pre-counted symbol probabilities of the sequence to be coded), i.e. p̄(0) = r·C₀/α and p̄(1) = r·C₁/α. The method is therefore suited to a "stream" coding mode, supports data-stream coding in units of bytes, and realizes adaptive adjustment based on the coding sequence.
It should be noted that the advantages between the system provided by the second aspect of the present invention and the prior art are the same as those between the method described above and the prior art, and will not be described in detail here.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic diagram of a first two-dimensional table according to one embodiment of the present invention;
FIG. 2 is a schematic diagram of a second two-dimensional table according to one embodiment of the present invention;
fig. 3 is a schematic flow chart of a Hash coding method of an adaptive weighted probability model according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a Hash coding system with an adaptive weighted probability model according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
Before describing the embodiments of the invention, their implementation principle is described, mainly comprising adaptive weighted probability model Hash coding, and an analysis of the information entropy and collision limit of the weighted probability model:
first, a self-adaptive weighted probability model Hash codes;
1. a weighted probability model;
signaling source sequence x= (X) 1 ,X 2 ,…,X i ,…,X n ) Is a discrete sequence of a finite number of values or a few possible values, X i E a= {0,1,2, …, k }. There is then a probability space for everything in a:
since the random process must be transferred to a certain symbol, at any time there is:
thus, arbitrary symbol X i The distribution function of (2) is:
p(0)≤F(x)≤1,s∈A。
definition 1, let discrete random variables X, X e a= {0,1, …, k }, P { x=a } = (a) (ea), weighted probability mass function isp (a) is a probability mass function, p (a) is more than or equal to 0 and less than or equal to 1, r is a weight coefficient, and:
F(a)=∑ i≤a p(i) (2)
if F (a, r) satisfies F (a, r) =rf (a), F (a, r) is referred to as a weighted cumulative distribution function, abbreviated as a weighted distribution function. Obviously, the weighted probability sum of all symbols is
Let the discrete source sequence X = (X₁, X₂, …, X_n), X_i ∈ A, and let F(X_i − 1) = F(X_i) − p(X_i). The weighted distribution function of the sequence X is denoted F(X, r). When n = 1:
F(X, r) = r·F(X₁ − 1) + r·p(X₁)
When n = 2:
F(X, r) = r·F(X₁ − 1) + r²·F(X₂ − 1)p(X₁) + r²·p(X₁)p(X₂)
When n = 3:
F(X, r) = r·F(X₁ − 1) + r²·F(X₂ − 1)p(X₁) + r³·F(X₃ − 1)p(X₁)p(X₂) + r³·p(X₁)p(X₂)p(X₃)
By analogy:
F(X, r) = Σ_{i=1}^{n} r^i F(X_i − 1) Π_{j=1}^{i−1} p(X_j) + r^n Π_{i=1}^{n} p(X_i)  (3)
The set of weighted distribution functions satisfying equation (3) is defined as a weighted probability model, abbreviated weighted model, denoted {F(X, r)}. If X_i ∈ A = {0, 1}, then {F(X, r)} is called a binary weighted model. Let:
H_n = F(X, r)  (4)
R_n = r^n Π_{i=1}^{n} p(X_i)  (5)
L_n = H_n − R_n  (6)
where X_i ∈ A, n = 1, 2, …. When r = 1:
F(X, 1) = Σ_{i=1}^{n} F(X_i − 1) Π_{j=1}^{i−1} p(X_j) + Π_{i=1}^{n} p(X_i)  (7)
From (4), (5) and (6), H_n = F(X, 1); that is, interval coding (arithmetic coding) is a lossless coding method based on the weighted distribution function when r = 1.
Since X_i must take a value in A, p(X_i) > 0. Obviously the interval sequence [L_i, H_i) of (4), (5) and (6) is the interval corresponding to the variable X_i of the source sequence X at time i (i = 0, 1, 2, …, n), and R_i = H_i − L_i is the interval length. Per formulas (4), (5) and (6), set R₀ = H₀ = 1 and L₀ = 0 at i = 0, so that for i = 1, 2, …, n the calculation is:
L_i = L_{i−1} + R_{i−1}·F(X_i − 1, r)
R_i = R_{i−1}·r·p(X_i)
H_i = L_i + R_i  (8)
Performing the weighted probability model coding operation on the source sequence X by (8), L_n is a real number and is the result of weighted probability model coding. A binary sequence is obtained from L_n by binary conversion.
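The interval recursion (8) can be sketched with exact rational arithmetic; the function name is an assumption, and, as noted above, for r = 1 the recursion reduces to ordinary arithmetic (interval) coding:

```python
from fractions import Fraction

def weighted_encode(X, p, r):
    """Interval recursion of formula (8):
    L_i = L_{i-1} + R_{i-1} * F(X_i - 1, r),  R_i = R_{i-1} * r * p(X_i).
    p maps each symbol in {0, ..., k} to its probability; F(X_i - 1, r) = r*F(X_i - 1)."""
    symbols = sorted(p)
    # F(a - 1) = cumulative probability of all symbols strictly below a.
    F_prev = {a: sum(p[b] for b in symbols if b < a) for a in symbols}
    L, R = Fraction(0), Fraction(1)        # L_0 = 0, R_0 = H_0 = 1
    for x in X:
        L = L + R * r * F_prev[x]
        R = R * r * p[x]
    return L, R                            # L_n and R_n; H_n = L + R

p = {0: Fraction(1, 2), 1: Fraction(1, 2)}
L1, R1 = weighted_encode([0, 1, 1, 0], p, Fraction(1))
# With r = 1 this is plain arithmetic coding: R_n is the product of the p(X_i).
assert R1 == Fraction(1, 16)
```

Using Fraction keeps the example exact; a practical coder would work with renormalized fixed-point intervals instead.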
It should be noted that the above is common knowledge in the art (refer to the description of the relevant principles of "Jielin code").
2. Adaptive weighted probability model Hash coding (introducing the principle of implementing adaptation based on weighted probability model);
let count value C of symbol x (x E A) x The initial value of (1), C x =1。
Let T be the sum of the count values of all symbols in set a, namely:
then T has an initial value of s+1. Let the ith symbol to be encoded be x i And x is i =a (a e a). The probability of symbol a at the time of encoding is:
then according to definition 1 above there is:
because ofSo there are:
encoding the ith symbol x i After that, updateAnd T, i.e.)>T=t+1. Converting formula (14) to an iterative formula:
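The count update of formulas (13)-(15) amounts to a small adaptive model; a sketch (the class name is an assumption):

```python
class AdaptiveModel:
    """Adaptive symbol counts per formulas (13)-(15): every count C_a starts
    at 1, T is the sum of all counts, p(a) = C_a / T, and encoding symbol a
    updates C_a <- C_a + 1 and T <- T + 1."""

    def __init__(self, alphabet_size):
        self.C = [1] * alphabet_size       # C_a, initial value 1 for each symbol
        self.T = alphabet_size             # T = sum of counts

    def prob(self, a):
        return self.C[a] / self.T          # p(a) = C_a / T

    def weighted_prob(self, a, r):
        return r * self.C[a] / self.T      # formula (14): r * C_a / T

    def update(self, a):                   # formula (15)
        self.C[a] += 1
        self.T += 1

m = AdaptiveModel(2)
assert m.prob(0) == 0.5
m.update(0)
assert m.prob(0) == 2 / 3 and m.prob(1) == 1 / 3
```

The model never needs the full sequence in advance, which is what makes the "stream" coding mode of the claims possible.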
it should be noted that the above formula (15) contains three formulas.
Secondly, analyzing information entropy and collision limit of the weighted probability model;
1. weighted probability model information entropy;
let the discrete memory-free source sequence X= (X) 1 ,X 2 ,…,X n )(X i E a, a= {0,1,2, …, k }), when r=1,defined by shannon information entropy, the entropy of X is:
when r.noteq.1, define a probabilityRandom variable X of (2) i The self information amount of (a) is:
I(X i )=-log k+1 p(X i ) (17)
set { X } i In =a } (i=1, 2, …, n, a e a) there is c a And a. When the value of r is determined, the total information amount of the source sequence X is:
the information amount per symbol is then averaged:
let H (X, r) be:
when the value of r is determined, the binary length encoded by the weighted probability model is nH (X, r) (bit). The simplest information source sequence is a binary sequence, the bit length of the binary information source sequence X is set to be n, the probability p (0) and p (1) of the symbol 0 and the symbol 1 in the X are provided, and the sequence with the length of L (bit) is obtained after the weighted probability model coding. When k=1, it is obtainable by formula (18):
-n log 2 r+nH(X)=L (19)
where the entropy of the H (X) sequence X, i.e., H (X) = -p (0) log 2 p(0)-p(1)log 2 p (1), the reduced formula (19) yields:
according to the distortion-free coding theorem, H (X) is the distortion-free coding limit of a discrete memory-free source sequence X, so that the weighted model function F (X, r) can restore the source sequence X without distortion when H (X, r) is equal to or greater than H (X). When H (X, r)<H (X), the weighted model function F (X, r) cannot restore source X, i.e., when L<nH (X) time-encoding result L n Cannot be restored toSource X.
From formulas (19) and (20), when H(X) > L/n we have r > 1 and H(X, r) < H(X); the weighted model functions F(X, r) satisfying formula (20) with r > 1 are therefore all one-way Hash functions.
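Formula (20) is a one-liner; a sketch (function name assumed) checking it against formula (19):

```python
import math

def weight_coefficient(H_X, L, n):
    """Formula (20): r = 2^(H(X) - L/n).  When H(X) > L/n, r > 1 and
    H(X, r) < H(X), so the coding is one-way per the analysis above."""
    return 2 ** (H_X - L / n)

# e.g. a 1024-bit source with H(X) = 1 compressed to a 64-bit digest:
r = weight_coefficient(1.0, 64, 1024)
assert r > 1
# Consistency with formula (19): -n*log2(r) + n*H(X) = L.
assert math.isclose(-1024 * math.log2(r) + 1024 * 1.0, 64)
```

The assertion makes the one-way condition concrete: any L shorter than nH(X) forces r above 1.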
2. Collision limit analysis;
and (3) making: the probability of symbol 0 and symbol 1 in the hash value obtained by the weighted probability model hash algorithm of any binary sequence is equal.
And (3) proving: setting bit length of hash value obtained by binary sequence through weighted probability model hash algorithm as L, binary sequence of hash value as Y, information entropy as H (Y) = -p (0) log 2 p(0)-p(1)log 2 p (1). According to the above, nH (X, r) = -nlog 2 r+nh (X) (n is the bit length of binary sequence X), LH (Y) = -nlog 2 r+nH (X). If and only if H (Y) =1, formula (19) holds, i.e., r satisfies formula (20). Otherwise r does not satisfy equation (20). Again, if and only if p (0) =p (1) =0.5, H (Y) =1, so the probabilities of symbol 0 and symbol 1 in sequence Y are equal.
According to the above, the probabilities of the symbols in the hash value obtained by the hash can be obtained. Assuming that the bit length of the hash value is L, the value space range is {0,1, …,2 L -1}. Let d=2 L The hash collision probability of the N trials can be obtained from the hash collision (or "birthday attack") probability as follows:
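The birthday bound can be evaluated directly; a sketch using the standard approximation P ≈ 1 − e^(−N(N−1)/(2d)) (the function name is an assumption):

```python
import math

def collision_probability(N, L):
    """Approximate birthday-attack collision probability for N trials
    over a hash space of d = 2^L values: P ~= 1 - exp(-N(N-1)/(2d))."""
    d = 2.0 ** L
    return 1.0 - math.exp(-N * (N - 1) / (2.0 * d))

# Roughly 2^(L/2) trials are needed for an even chance of a collision:
assert collision_probability(2 ** 16, 32) > 0.3    # ~= 1 - e^(-1/2)
assert collision_probability(2 ** 8, 64) < 1e-12   # far below the birthday bound
```

This is why a longer digest length L raises the collision resistance, at the transmission and storage cost discussed in the background section.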
3. a round function of nonlinear piecewise iterative operations and weighting coefficients;
common Hash algorithms, AES and DES symmetric encryption algorithms all use round functions of nonlinear byte substitution to eliminate linear dependencies, commonly referred to as S-boxes. Based on the round function idea, the subsequent embodiments use piecewise iterative and exclusive-or operations to eliminate the linear correlation and construct a nonlinear round function of weighting coefficients based on equation (20).
(1) Sequence X segment iteration and exclusive OR operation;
Let the bit length of sequence X be n and the bit length of each segment be m², so that the sequence X is linearly divided into ⌈n/m²⌉ segments. Let m = 16, and let v be the number of bits of the current segment: v = m² for every full segment, and v = n mod m² for the final partial segment when m² does not divide n. Randomly generate m² bits and store them in a 16×16 two-dimensional table, such as the table shown in Fig. 1 (hereinafter the first two-dimensional table), where x and y are the row and column indices.
Since the index of symbol X_i in segment j is i (i = 1, 2, …, m²), the table lookup operation of Fig. 1 is:
x = (i + X_i) mod m, y = (i + X_i)/m  (22)
giving the bit value f(x, y). X_i is XOR-ed with f(x, y), and the binary sequence after the exclusive-or operation is denoted Y:
Y_i = X_i ⊕ f(x, y)  (23)
After the exclusive-or operation, the probabilities of symbol 0 and symbol 1 in the sequence Y are counted, and the r value of the current segment is calculated by substituting into formula (20).
(2) A nonlinear round function of the weighting coefficients;
after setting L, if r is not equal to i (i=1, 2, …, m) when each segment of binary sequence is subjected to weighted probability model coding 2 ) If the change occurs, r is called a static weighting coefficient; if r varies with i, then r is referred to as the dynamic weighting factor, denoted r (i). Randomly generating m 2 The integers from 0 to 255 are stored in a two-dimensional table of 16 x 16, such as the two-dimensional table shown in FIG. 2 (hereinafter referred to as the second two-dimensional table), wherein x and y are row indices and x and y are countsThe formula is as in formula (22). The nonlinear round function of the values g (x, y), r (i) obtained by looking up the table of fig. 2 is:
in the equation (24), s may be an integer greater than 3, and the actual value is determined according to the calculation accuracy of the computer, where s=4 in the experiments in the embodiment of the present application. When r is<2 (H(X)-L/n) The time-coded result will exceed L bits. Let the coding result be L z The number of bits is t, and the binary sequence of the hash value of the j-th segment is denoted as Z, Z is L n Through the binary sequence after the binary conversion. Because ofTend to 0, so there is no t>L case. Let l=1, 2, …, t, then the latter t bits are xored with the first L bits:
(25) wherein Z l-1 And Z L+l-1 Is the first-1 and L+l-1 binary symbols of sequence Z.
Obtaining a real number L corresponding to the j th segment through weighted probability model coding based on the step (24) v The hash value Z of the current segment is then made available by equation (25).
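The table lookup of formulas (22)/(23) and the folding XOR of formula (25) can be sketched as follows; the names are assumptions, and reducing y modulo m to stay inside the 16×16 table is an assumption the excerpt does not settle:

```python
def xor_with_table(seg, f_table, m=16):
    """Formulas (22)/(23): index the random bit table by x = (i + X_i) mod m,
    y = (i + X_i)/m and XOR each symbol with the table bit."""
    out = []
    for i, xi in enumerate(seg, start=1):   # the text counts i from 1 to m^2
        x = (i + xi) % m
        y = ((i + xi) // m) % m             # "% m" is an assumed edge handling
        out.append(xi ^ f_table[x][y])
    return out

def fold_to_length(Z, L):
    """Formula (25): XOR the t = len(Z) - L trailing bits back into the
    first L bits, so the per-segment hash value is exactly L bits."""
    Z = list(Z)
    for l in range(1, len(Z) - L + 1):      # l = 1, 2, ..., t
        Z[l - 1] ^= Z[L + l - 1]
    return Z[:L]

assert fold_to_length([1, 0, 1, 1, 0], 4) == [1, 0, 1, 1]
```

The fold is the step that pins the output length to L regardless of how many bits the weighted coding emitted, provided t ≤ L as argued above.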
Referring to fig. 3, in one embodiment of the present invention, a Hash coding method of an adaptive weighted probability model is provided, applied to a coding end, and includes the following steps:
Step S101, acquire a binary source sequence X of sequence length n, randomly generate an m²-bit first two-dimensional table and a second two-dimensional table, and randomly generate a hash value of length L.
Step S102, linearly divide the binary source sequence X into ⌈n/m²⌉ segment subsequences, each of sequence length v; j represents a statistical variable with initial value 1.
Step S103, obtain the i-th symbol X_i in the j-th segment subsequence of the binary source sequence X.
Step S104, calculate x = (i + X_i) mod m and y = (i + X_i)/m, where mod denotes the modulo operation and the division is integer division.
Step S105, look up the corresponding f(x, y) in the first two-dimensional table according to x and y, where f(x, y) is the value at coordinates (x, y) in the first two-dimensional table.
Step S106, calculate Y_i = X_i ⊕ f(x, y) and set i = i + 1; if i ≤ v, jump to step S103; if i > v, obtain the binary sequence Y corresponding to the j-th segment subsequence and enter step S107, where ⊕ denotes the exclusive-or operation.
Step S107, set i to 0, and calculate p = c/v and H(Y) = −p log₂ p − (1 − p) log₂(1 − p), where c is the number of symbols 0 in the binary sequence Y and H(Y) is the information entropy of the binary sequence Y.
Step S108, calculate x = (i + X_i) mod m and y = (i + X_i)/m.
Step S109, look up the corresponding g(x, y) in the second two-dimensional table according to x and y, where g(x, y) is the value at coordinates (x, y) in the second two-dimensional table.
Step S110, obtain the dynamic weighting coefficient r from g(x, y) through the nonlinear round function of formula (24), and calculate the weighted probabilities p̄(0) = r·C₀/α and p̄(1) = r·C₁/α, where s is an integer greater than 3 used by the round function; p̄(0) and p̄(1) are the weighted probabilities of symbol 0 and symbol 1 respectively; r is the weighting coefficient; C₀ is the number of symbols 0 among the symbols of the binary sequence Y already coded before coding X_i, with initial value 1; C₁ is the number of symbols 1 among the symbols already coded, with initial value 1; and α is the total number of symbols of the binary sequence Y already coded before coding X_i, with initial value 2. The base value of r is given by formula (20); substituting formula (20) into this embodiment gives r = 2^(H(Y) − L/v). Note that p(0) and p(1) here are computed from the numbers of symbol 0 and symbol 1 in the binary sequence Y.
Step S111, if Y_i = 0, set R_i = R_{i−1}·p̄(0), L_i = L_{i−1}, C₀ = C₀ + 1 and α = α + 1; otherwise set L_i = L_{i−1} + R_{i−1}·p̄(0), R_i = R_{i−1}·p̄(1), C₁ = C₁ + 1 and α = α + 1, where R_i, R_{i−1}, L_i and L_{i−1} are coding variables with R₀ = 1 and L₀ = 0.
Step S112, set i = i + 1; if i ≤ v, jump to step S108; if i > v, obtain the coding result L_v corresponding to the binary sequence Y.
Step S113, let L_v = L_v + T and T = L_v, where T represents a statistical variable with initial value 0.
Step S117, set j = j + 1; if j ≤ ⌈n/m²⌉, set i to zero and jump to step S103; if j > ⌈n/m²⌉, end the coding to obtain the ciphertext corresponding to the binary source sequence X. Here the L_v values corresponding to the segments of the binary sequence are accumulated to obtain the ciphertext, or the ciphertext is obtained by binary conversion after the accumulation; this is not limited.
Step S118, send the ciphertext to the decoding end.
Unlike the prior art scheme (the advantageous effects relative to the prior art are as described above and are not repeated here), in step S110 and step S111 of the present embodiment the weighted probabilities p̄(0) = r·C₀/α and p̄(1) = r·C₁/α used in weighted probability model coding change adaptively with the already-coded symbols of the currently coded binary subsequence (in the existing scheme they are fixed values based on pre-counted symbol probabilities of the sequence to be coded). The method is therefore suited to a "stream" coding mode, supports data-stream coding in units of bytes, and realizes adaptive adjustment based on the coding sequence.
Compared with the prior art, in the Hash coding based on the adaptive weighted probability model of this embodiment, the weighted probability adapts with the already-coded symbols of the currently coded binary subsequence, so the security of the obtained ciphertext is greatly improved; experiments also show a great improvement in data compression ratio.
It should be noted that the method provided in this embodiment can be widely used for digital signature, file verification and data transmission verification, and those skilled in the art can use the above steps S101 to S118 to implement or improve the functions of digital signature, file verification and data transmission verification.
In some embodiments, after step S113, the method further comprises the steps of:
Step S114, convert L_v into a binary sequence Z and calculate t = L_z − L, where L_z is the sequence length of the binary sequence Z;
Step S115, if l ≤ t, set Z_{l−1} = Z_{l−1} ⊕ Z_{L+l−1} and proceed to step S116; if l > t, jump to step S117, where l is a statistical variable with initial value 1;
Step S116, set l = l + 1 and jump to step S115.
The present embodiment is based on the above embodiment, and after obtaining the encoded sequence, an exclusive-or operation is further performed in step S115, so as to further improve the security of the data.
Based on the above embodiment, the first two-dimensional table is composed of m² symbols 0 and 1, with m = 16. Experiments show that m = 16 balances computing performance and security: if the two-dimensional table is too large, the demand on computing capacity grows; if it is too small, security suffers. A balance point can thus be taken.
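Generating the two random tables is straightforward; a sketch (the function names are assumptions, and the secrets module is only one possible randomness source):

```python
import secrets

def random_bit_table(m=16):
    """The first two-dimensional table: m*m (= m^2) random bits
    stored as a 16x16 grid of 0/1 values."""
    return [[secrets.randbits(1) for _ in range(m)] for _ in range(m)]

def random_byte_table(m=16):
    """The second two-dimensional table: m^2 random integers in 0..255."""
    return [[secrets.randbelow(256) for _ in range(m)] for _ in range(m)]

t1, t2 = random_bit_table(), random_byte_table()
assert len(t1) == 16 and all(b in (0, 1) for row in t1 for b in row)
assert all(0 <= v <= 255 for row in t2 for v in row)
```

Both tables are generated once per hashing session; the trade-off on m discussed above applies to both.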
Based on the above embodiment, the second two-dimensional table is composed of m² integers ranging from 0 to 255, with m = 16.
Referring to fig. 4, an embodiment of the present invention provides a Hash coding system of an adaptive weighted probability model, which includes a data acquisition unit 100, a data coding unit 200, and a data transmission unit 300, wherein:
the data acquisition unit 100 is configured to perform step S101 of the above-described method embodiment.
The data encoding unit 200 is used to perform steps S102 to S117 of the above-described method embodiment.
The data transmission unit 300 is configured to perform step S118 of the above-described method embodiment.
It should be noted that the present system embodiment and the above method embodiment are based on the same inventive concept, and thus the content related to the above method embodiment is also applicable to the present system embodiment, which is not described in detail herein.
In order to facilitate understanding of those skilled in the art, an embodiment of the present invention provides a Hash coding method of an adaptive weighted probability model, applied to a coding end, including the following steps:
First, let L be the bit length of the custom hash value; the steps of encoding the binary source sequence X of length n with the weighted probability model are as follows:
Step (1): initialize the parameters: p = c = L_0 = 0, H_0 = R_0 = 1, i = t = L_z = 0, m = 16, j = l = 1, s = 4, v = m² − 1, α = 2, C_0 = C_1 = 1.
Step (2): linearly divide the binary source sequence X into [n/m²] segments of binary subsequences.
Step (3): while j ≤ [n/m²], perform the following steps; j represents a statistical variable.
Step (4): acquire the j-th binary subsequence of the binary source sequence X, v bits in total.
Step (5): obtain the i-th symbol X_i in the j-th binary subsequence; i represents a statistical variable.
Step (6): calculate x and y: x = (i + X_i) mod m, y = (i + X_i)/m.
Step (7): look up the first two-dimensional table to obtain f(x, y); for example, the value f(x, y) corresponding to the coordinate (x, y) is looked up from the two-dimensional table in fig. 1.
Step (8): Y_i = X_i ⊕ f(x, y), thereby generating the binary sequence Y corresponding to the j-th binary subsequence.
Step (9) i=i+1, and if i is less than or equal to v, repeating steps (5) to (9).
Step (10): i = 0; count the number c of symbols 0 in the sequence Y to obtain p = c/v and H(Y) = −p log2 p − (1 − p) log2(1 − p).
Step (11): calculate x and y: x = (i + X_i) mod m, y = (i + X_i)/m.
Step (12): look up the second two-dimensional table to obtain g(x, y); for example, the value g(x, y) corresponding to the coordinate (x, y) is looked up from the two-dimensional table in fig. 2.
Step (13): calculate the weighting coefficient r(i) and, from it, the weighted probabilities of symbol 0 and symbol 1.
Step (14): weighted-model coding operation: if Y_i = 0, update the coding interval with the weighted probability of symbol 0 and let C_0 = C_0 + 1, α = α + 1; otherwise update the coding interval with the weighted probability of symbol 1 and let C_1 = C_1 + 1, α = α + 1.
Step (15) i=i+1, and if i is less than or equal to v, repeating steps (11) to (15).
Step (16): L_v = L_v + T, T = L_v.
Step (17): convert L_v into a binary sequence Z.
Step (18): count the bit length L_z of the binary sequence Z and calculate t = L_z − L.
Step (17): while l ≤ t, Z_{l-1} = Z_{l-1} ⊕ Z_{L+l-1}.
Step (18): l = l + 1; repeat steps (17) to (18).
Step (19) i=0, j=j+1.
Step (20): if j ≤ [n/m²], repeat steps (3) to (20).
Step (21): finish encoding and output the binary sequence Z, where Z is the hash value.
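To make steps (5) to (9) concrete, a minimal sketch of masking one v-bit subsequence with the first two-dimensional table follows; `mask_subsequence` is a hypothetical name, the `f_table[y][x]` indexing is an assumption, and the modulo on y is added here only to keep the illustrative indices in range:

```python
def mask_subsequence(X_sub, f_table, m=16):
    """Steps (5)-(9): XOR each source bit with a table entry chosen by (i, X_i)."""
    Y = []
    for i, Xi in enumerate(X_sub):
        x = (i + Xi) % m                  # step (6): x = (i + X_i) mod m
        y = ((i + Xi) // m) % m           # step (6): y = (i + X_i)/m (wrapped; an assumption)
        Y.append(Xi ^ f_table[y][x])      # step (8): Y_i = X_i XOR f(x, y)
    return Y
```

With an all-zero table the subsequence passes through unchanged; with an all-one table every bit is inverted, which illustrates that the mask depends entirely on the randomly generated table.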
The security analysis for the above steps (1) to (21) is as follows:
Piecewise iteration, exclusive-or operations, and round functions are means of defending against linear attacks and differential attacks. According to the encoding flow of this embodiment: first, because the first two-dimensional table or the second two-dimensional table is looked up whether X_i = 0 or X_i = 1, the two-dimensional coordinates (x, y) are random, so the values of f(x, y) and g(x, y) are random. Second, the probability of symbol 0 in the sequence X is fixed, but the probability p of symbol 0 in each segment of the binary sequence differs, and the precision of p determines the value space, so H(Y) is unknown. The factor 2^(H(Y)−L/v) and the weighting coefficients are unknown, so L_v is unknown. Moreover, because L_v = L_v + T, i.e. the encoding result is piecewise iterated in an additive manner, the bits of L_v vary randomly during the operation. The sufficiency condition can therefore be analyzed as follows: when encoding a sequence E (E being any binary sequence other than X), the i-th symbol, f(x, y), g(x, y), r(i), R_i and L_i must all remain consistent with those produced when encoding X_i in order to generate the same hash value. These sufficiency conditions are subject to uncertainty, and the probability that each segment satisfies them can be analyzed as follows.
(1) X_i ∈ {0, 1}, and f(x, y) acts only on the symbol X_i; each segment has v bits, so each symbol has a certain probability of correctly selecting f(x, y).
(2) g(x, y) acts only on the weighting coefficient r(i); according to (1), each symbol likewise has a certain probability of correctly selecting g(x, y).
(3) Let p have a precision of u binary bits; then the probability p of symbol 0 in each segment of the binary sequence can take 2^u possible values, and r = 2^(H(Y)−L/v); the corresponding probability follows.
(4) The binary bit lengths of T and L_v are L, and the iterative operation is L_v = T + L_v; the probability that L_v and T are both correct follows accordingly.
The same encoding result can be obtained only when all of the above sufficiency conditions are satisfied. f(x, y) and g(x, y) act as S-boxes, and the S-box-based piecewise iteration and exclusive-or operations eliminate linear correlation to a certain extent. The probabilities of symbol 0 and symbol 1 differ from segment to segment, and the weighting coefficient used to encode each symbol changes through the round function, so the coded length produced by the weighted probability model is random. The bit length of the hash value calculated by the encoding method provided in this embodiment is thus random, i.e. L is a random value; compared with a fixed L, the collision probability is smaller, and the longer the hash value, the smaller the collision probability. As the number of checks increases, the randomness of L increases, so the collision probability approaches 0.
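The segment-dependent entropy that the analysis above relies on corresponds to step (10); a small sketch (function name hypothetical) is:

```python
import math

def segment_entropy(Y):
    """Step (10): probability p of symbol 0 in a segment and its binary entropy H(Y)."""
    v = len(Y)
    c = Y.count(0)                      # number of symbols 0 in the segment
    p = c / v
    if p in (0.0, 1.0):                 # a uniform segment carries zero entropy
        return p, 0.0
    return p, -p * math.log2(p) - (1 - p) * math.log2(1 - p)
```

Because p is recomputed per segment, H(Y) differs between segments, which is what makes the coded length of each segment hard to predict.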
An embodiment of the invention provides a Hash coding device of an adaptive weighted probability model. The Hash coding device of the adaptive weighted probability model can be any type of intelligent terminal, such as a mobile phone, a tablet computer or a personal computer. Specifically, the Hash coding device of the adaptive weighted probability model includes one or more control processors and a memory; one control processor is taken as the example. The control processor and the memory may be connected by a bus or other means; a bus connection is taken as the example here.
The memory, as a non-transitory computer-readable storage medium, can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the Hash coding device of the adaptive weighted probability model in the embodiment of the invention; the control processor implements the Hash coding method of the adaptive weighted probability model of the above method embodiment by running the non-transitory software programs, instructions and modules stored in the memory. The memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required for a function; in addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory optionally includes memory remotely located with respect to the control processor, and the remote memory may be connected to the Hash coding device of the adaptive weighted probability model via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The one or more modules are stored in the memory and, when executed by the one or more control processors, perform the Hash coding method of the adaptive weighted probability model in the method embodiment described above.
Embodiments of the present invention also provide a computer-readable storage medium storing computer-executable instructions that are executed by one or more control processors, for example, to cause the one or more control processors to perform the Hash encoding method of the adaptive weighted probability model in the method embodiment.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented in software plus a general purpose hardware platform. Those skilled in the art will appreciate that all or part of the processes implementing the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a random access Memory (Random Access Memory, RAM), or the like.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (10)

1. A Hash coding method of an adaptive weighted probability model, characterized by being applied to a coding end and comprising the following steps:
step S101, acquiring a binary source sequence X with a sequence length of n, randomly generating a first two-dimensional table and a second two-dimensional table of m² bits, and randomly generating a hash value length L;
step S102, linearly dividing the binary source sequence X into [n/m²] segments of subsequences, wherein the sequence length of each segment of subsequence is v; setting three statistical variables i, j and T, wherein the initial value of the statistical variable i is 0, the initial value of the statistical variable j is 1, and the initial value of the statistical variable T is 0;
step S103, obtaining the i-th symbol X_i in the j-th subsequence of the binary source sequence X;
step S104, calculating x = (i + X_i) mod m and y = (i + X_i)/m; wherein mod represents a modulo operation;
step S105, querying the corresponding f(x, y) from the first two-dimensional table according to x and y; wherein f(x, y) is the value corresponding to the coordinate (x, y) in the first two-dimensional table;
step S106, calculating Y_i = X_i ⊕ f(x, y) and i = i + 1; if i ≤ v, jumping to step S103; if i > v, obtaining the binary sequence Y corresponding to the j-th segment subsequence and entering step S107; wherein ⊕ represents an exclusive-or operation;
step S107, setting i to 0, and calculating p = c/v and H(Y) = −p log2 p − (1 − p) log2(1 − p); wherein c represents the number of symbols 0 in the binary sequence Y, and H(Y) represents the information entropy of the binary sequence Y;
step S108, calculating x = (i + X_i) mod m and y = (i + X_i)/m;
step S109, querying the corresponding g(x, y) from the second two-dimensional table according to x and y; wherein g(x, y) is the value corresponding to the coordinate (x, y) in the second two-dimensional table;
step S110, calculating the weighting coefficient r and the weighted probabilities of symbol 0 and symbol 1 from g(x, y), s, C_0, C_1 and α; wherein s represents an integer greater than 3; C_0 represents the number of symbols 0 among the total symbols already encoded in the binary sequence Y before encoding X_i, and the initial value of C_0 is 1; C_1 represents the number of symbols 1 among the total symbols already encoded in the binary sequence Y before encoding X_i, and the initial value of C_1 is 1; α represents the number of total symbols already encoded in the binary sequence Y before encoding X_i, and the initial value of α is 2;
step S111, if Y_i = 0, updating the coding variables R_{i+1} and L_{i+1} with the weighted probability of symbol 0, and letting C_0 = C_0 + 1 and α = α + 1; otherwise updating R_{i+1} and L_{i+1} with the weighted probability of symbol 1, and letting C_1 = C_1 + 1 and α = α + 1; wherein R_{i+1}, R_i, L_{i+1} and L_i are coding variables, R_0 = 1, L_0 = 0;
step S112, i = i + 1; if i ≤ v, jumping to step S108; if i > v, obtaining the coding result L_v corresponding to the binary sequence Y;
step S113, letting L_v = L_v + T and T = L_v;
step S117, j = j + 1; if j ≤ [n/m²], jumping to step S103; if j > [n/m²], ending the coding to obtain a ciphertext corresponding to the binary source sequence X;
step S118, the ciphertext is sent to a decoding end.
2. The Hash coding method of an adaptive weighted probability model according to claim 1, further comprising, after step S113, the following steps:
step S114, converting L_v into a binary sequence Z, and calculating t = L_z − L; wherein L_z represents the sequence length of the binary sequence Z;
step S115, letting l be a statistical variable with an initial value of 1; if l ≤ t, Z_{l-1} = Z_{l-1} ⊕ Z_{L+l-1}, and proceeding to step S116; if l > t, jumping to step S117;
step S116, l = l + 1, and jumping to step S115.
3. The Hash coding method of an adaptive weighted probability model of claim 1, wherein the first two-dimensional table consists of m² symbols 0 and 1.
4. The Hash coding method of an adaptive weighted probability model of claim 1, wherein the second two-dimensional table consists of m² integers in the range of 0 to 255.
5. The Hash coding method of an adaptive weighted probability model according to claim 3 or 4, wherein m = 16.
6. The method of Hash encoding of an adaptive weighted probability model according to claim 1, wherein s = 4.
7. A Hash coding system of an adaptive weighted probability model, comprising:
a data acquisition unit, configured to acquire a binary source sequence X with a sequence length of n, randomly generate a first two-dimensional table and a second two-dimensional table of m² bits, and randomly generate a hash value length L;
a data encoding unit for executing the following steps:
step S102, linearly dividing the binary source sequence X into [n/m²] segments of subsequences, wherein the sequence length of each segment of subsequence is v; setting three statistical variables i, j and T, wherein the initial value of the statistical variable i is 0, the initial value of the statistical variable j is 1, and the initial value of the statistical variable T is 0;
step S103, obtaining the i-th symbol X_i in the j-th subsequence of the binary source sequence X;
step S104, calculating x = (i + X_i) mod m and y = (i + X_i)/m; wherein mod represents a modulo operation;
step S105, querying the corresponding f(x, y) from the first two-dimensional table according to x and y; wherein f(x, y) is the value corresponding to the coordinate (x, y) in the first two-dimensional table;
step S106, calculating Y_i = X_i ⊕ f(x, y) and i = i + 1; if i ≤ v, jumping to step S103; if i > v, obtaining the binary sequence Y corresponding to the j-th segment subsequence and entering step S107; wherein ⊕ represents an exclusive-or operation;
step S107, setting i to 0, and calculating p = c/v and H(Y) = −p log2 p − (1 − p) log2(1 − p); wherein c represents the number of symbols 0 in the binary sequence Y, and H(Y) represents the information entropy of the binary sequence Y;
step S108, calculating x = (i + X_i) mod m and y = (i + X_i)/m;
step S109, querying the corresponding g(x, y) from the second two-dimensional table according to x and y; wherein g(x, y) is the value corresponding to the coordinate (x, y) in the second two-dimensional table;
step S110, calculating the weighting coefficient r and the weighted probabilities of symbol 0 and symbol 1 from g(x, y), s, C_0, C_1 and α; wherein s represents an integer greater than 3; C_0 represents the number of symbols 0 among the total symbols already encoded in the binary sequence Y before encoding X_i, and the initial value of C_0 is 1; C_1 represents the number of symbols 1 among the total symbols already encoded in the binary sequence Y before encoding X_i, and the initial value of C_1 is 1; α represents the number of total symbols already encoded in the binary sequence Y before encoding X_i, and the initial value of α is 2;
step S111, if Y_i = 0, updating the coding variables R_{i+1} and L_{i+1} with the weighted probability of symbol 0, and letting C_0 = C_0 + 1 and α = α + 1; otherwise updating R_{i+1} and L_{i+1} with the weighted probability of symbol 1, and letting C_1 = C_1 + 1 and α = α + 1; wherein R_{i+1}, R_i, L_{i+1} and L_i are coding variables, R_0 = 1, L_0 = 0;
step S112, i = i + 1; if i ≤ v, jumping to step S108; if i > v, obtaining the coding result L_v corresponding to the binary sequence Y;
step S113, letting L_v = L_v + T and T = L_v;
step S117, j = j + 1; if j ≤ [n/m²], jumping to step S103; if j > [n/m²], ending the coding to obtain a ciphertext corresponding to the binary source sequence X;
and the data transmitting unit is used for transmitting the ciphertext to the decoding end.
8. The Hash coding system of an adaptive weighted probability model of claim 7, wherein said data coding unit is further configured to perform the steps of:
step S114, converting L_v into a binary sequence Z, and calculating t = L_z − L; wherein L_z represents the sequence length of the binary sequence Z;
step S115, if l ≤ t, Z_{l-1} = Z_{l-1} ⊕ Z_{L+l-1}, and proceeding to step S116; if l > t, jumping to step S117; wherein l represents a statistical variable and the initial value of l is 1;
step S116, l = l + 1, and jumping to step S115.
9. A Hash coding device of an adaptive weighted probability model, characterized in that: comprising at least one control processor and a memory for communication connection with the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the Hash encoding method of the adaptive weighted probability model of any one of claims 1 to 6.
10. A computer-readable storage medium, characterized by: the computer-readable storage medium stores computer-executable instructions for causing a computer to perform the Hash encoding method of the adaptive weighted probability model of any one of claims 1 to 6.
CN202111208527.XA 2021-10-18 2021-10-18 Hash coding method and system of self-adaptive weighted probability model Active CN114039718B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111208527.XA CN114039718B (en) 2021-10-18 2021-10-18 Hash coding method and system of self-adaptive weighted probability model

Publications (2)

Publication Number Publication Date
CN114039718A CN114039718A (en) 2022-02-11
CN114039718B true CN114039718B (en) 2023-12-19

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109565596A (en) * 2016-05-12 2019-04-02 交互数字Vc控股公司 The method and apparatus of the binary symbol sequence of syntactic element relevant to video data are indicated for context adaptive binary arithmetic coding
CN110798224A (en) * 2019-11-13 2020-02-14 青岛芯海源信息科技有限公司 Compression coding, error detection and decoding method
CN111294058A (en) * 2020-02-20 2020-06-16 湖南遥昇通信技术有限公司 Channel coding and error correction decoding method, equipment and storage medium
CN112821894A (en) * 2020-12-28 2021-05-18 湖南遥昇通信技术有限公司 Lossless compression method and lossless decompression method based on weighted probability model
CN112865961A (en) * 2021-01-06 2021-05-28 湖南遥昇通信技术有限公司 Symmetric encryption method, system and equipment based on weighted probability model
CN112883386A (en) * 2021-01-15 2021-06-01 湖南遥昇通信技术有限公司 Digital fingerprint processing and signature processing method, equipment and storage medium
CN113297591A (en) * 2021-05-07 2021-08-24 湖南遥昇通信技术有限公司 Webpage resource encryption method, equipment and storage medium
CN113300830A (en) * 2021-05-25 2021-08-24 湖南遥昇通信技术有限公司 Data transmission method, device and storage medium based on weighted probability model
CN113486369A (en) * 2021-06-23 2021-10-08 湖南遥昇通信技术有限公司 Encoding method, apparatus, device and medium with symmetric encryption and lossless compression

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10735736B2 (en) * 2017-08-29 2020-08-04 Google Llc Selective mixing for entropy coding in video compression
WO2021025485A1 (en) * 2019-08-06 2021-02-11 현대자동차주식회사 Entropy coding for video encoding and decoding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Nearest-neighbor query algorithm for high-dimensional data based on weighted self-learning hashing; Peng Cong; Qian Jiangbo; Chen Huahui; Dong Yihong; Telecommunications Science (Issue 06); full text *
Research on encryption technology in chaotic laser network communication; Cai Zonghui; Chen Fei; Laser Journal (Issue 02); full text *

Similar Documents

Publication Publication Date Title
CN113300830B (en) Data transmission method, device and storage medium based on weighted probability model
CN111294058B (en) Channel coding and error correction decoding method, equipment and storage medium
CN111478885B (en) Asymmetric encryption and decryption method, equipment and storage medium
CN110635807B (en) Data coding method and decoding method
KR101817168B1 (en) Method and Apparatus for Approximated Belief Propagation Decoding of Polar Code
US20160006568A1 (en) Tag generation device, tag generation method, and tag generation program
Bos et al. Rapidly verifiable XMSS signatures
CN113486369B (en) Encoding method, apparatus, device and medium with symmetric encryption and lossless compression
Qi et al. A hybrid security and compressive sensing-based sensor data gathering scheme
Gorbenko et al. Post-quantum message authentication cryptography based on error-correcting codes
CN110995415A (en) Encryption algorithm based on MD5 algorithm
CN114039718B (en) Hash coding method and system of self-adaptive weighted probability model
CN113922947B (en) Self-adaptive symmetrical coding method and system based on weighted probability model
CN109639393B (en) Sliding window network coding method based on quadratic permutation polynomial
CN111786681A (en) Cascade decoding method suitable for data post-processing of CV-QKD system
Tanygin et al. The computational complexity of the algorithm for identifying the source of data transmitted by limited length blocks
George et al. PWLCM based image encryption through compressive sensing
Ryabko et al. Asymptotically optimal perfect steganographic systems
CN113922946B (en) SM 3-based data encryption method, system, equipment and medium
CN113938273B (en) Symmetric encryption method and system capable of resisting quantitative parallel computing attack
CN113746599B (en) Encoding method, decoding method, terminal, electronic device, and storage medium
Eisenbarth et al. A performance boost for hash-based signatures
Fernando et al. Reed solomon codes for the reconciliation of wireless phy layer based secret keys
CN114172891B (en) Method, equipment and medium for improving FTP transmission security based on weighted probability coding
Yang et al. Design and analysis of lossy source coding of Gaussian sources with finite-length polar codes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant