CN102523453B - Super large compression method and transmission system for images

Super large compression method and transmission system for images

Info

Publication number
CN102523453B
Authority
CN
China
Prior art keywords
image
noise
psnr
nonlinear mapping
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110460855.9A
Other languages
Chinese (zh)
Other versions
CN102523453A (en)
Inventor
周诠
李晓博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Institute of Space Radio Technology
Original Assignee
Xian Institute of Space Radio Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Institute of Space Radio Technology filed Critical Xian Institute of Space Radio Technology
Priority to CN201110460855.9A priority Critical patent/CN102523453B/en
Publication of CN102523453A publication Critical patent/CN102523453A/en
Application granted granted Critical
Publication of CN102523453B publication Critical patent/CN102523453B/en
Legal status: Active


Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

The invention provides a super-large compression method and transmission system for images. The method comprises the following steps: K noise-like images B0i (i = 1, ..., K) are determined according to the original gray-level image A to be sent; K nonlinear mappings FAi (i = 1, ..., K) are applied to the original gray-level image A to obtain K mapped images Bi (i = 1, ..., K), the noise-like image corresponding to Bi being B0i (i = 1, ..., K); the peak signal-to-noise ratio (PSNR) between each mapped image Bi and its noise-like image B0i is computed, and the noise-like image B0j and nonlinear mapping FAj corresponding to the maximum PSNR are selected; according to the required compression ratio R, the parameter information of the noise-like image B0j and of the nonlinear mapping FAj is encoded and transmitted to the receiving end; after receiving the data, the receiving end extracts the parameters and reconstructs the image B0j by decoding; inverse mapping is applied to the noise-like image B0j to recover the original gray-level image A, and data-format conversion is applied to the gray-level image A to obtain images in the various formats the user desires. The invention is suitable for large-compression-ratio transmission of various images and has the advantages of security and information hiding.

Description

Image super-large compression transmission method and transmission system
Technical field
The present invention relates to a method and system for image communication, in particular to a method and system for transmitting image data, and belongs to the field of communications (e.g. data communication technology).
Background technology
With the development of science and technology, the demand for high-resolution images keeps growing, and image resolution is being improved by every possible means. As resolution rises, however, the data volume grows accordingly, putting ever greater pressure on data transmission and storage, so data compression with a large compression ratio becomes essential. Yet existing image compression methods and equipment achieve compression ratios of only several times to several tens of times; methods reaching "thousands" of times are practically nonexistent, and even methods above several tens to 100 times are rare, although for individual images a compression ratio of more than 100 times can sometimes be reached.
Existing compression standards compress on the basis of image correlation, or compress images that contain redundancy; if the correlation in the image (data) is weak, effective compression is impossible. For example, an image resembling white noise cannot be compressed, and compression may even expand (increase) the data. In the prior art the image is first subjected to various transforms, mainly the discrete cosine transform (DCT) and the wavelet transform (WT), as in the JPEG and JPEG2000 compression algorithms. The prior art does not transform an image into a noise-like image through multiple nonlinear mappings before transmission.
Summary of the invention
The technical problem solved by the present invention is to overcome the deficiencies of the prior art by providing a transmission method and transmission system with a super-large compression ratio, suitable for large-compression-ratio transmission of various images while also offering confidentiality and information hiding.
The technical solution of the present invention is an image super-large compression transmission method, characterized by comprising the following steps:
(1) K noise-like images B0i (i = 1, ..., K) are determined according to the original gray-level image A to be sent; each noise-like image B0i has the same size as the original gray-level image A, where K is a positive integer, the size of the original gray-level image A is M rows by N columns with Q-bit quantization, and the total number of bits is MNQ;
(2) K nonlinear mappings FAi (i = 1, ..., K) are applied to the original gray-level image A, transforming it into K mapped images Bi (i = 1, ..., K), the noise-like image corresponding to Bi being B0i (i = 1, ..., K); the peak signal-to-noise ratio (PSNR) between each mapped image Bi and its noise-like image B0i is computed, and the noise-like image B0j and nonlinear mapping FAj corresponding to the maximum PSNR are selected;
(3) according to the required compression ratio R, the parameter information of the noise-like image B0j and of the nonlinear mapping FAj is encoded and then transmitted to the receiving end;
(4) after receiving the data, the receiving end extracts the parameters therein and obtains the image B0j by decoding;
(5) inverse mapping is applied to the noise-like image B0j to recover the original gray-level image A, and data-format conversion is applied to the gray-level image A to obtain images in the various formats the user desires.
The method of determining the noise-like images B0i in step (1) is as follows:
According to the size of the original image A, noise-like images of the same size are chosen; each noise-like image is M rows by N columns with Q-bit quantization, for a total of MNQ bits. The gray value Gi of a noise-like image is an integer between 0 and 2^Q - 1. The compression ratio R is MNQ/Cdata, where Cdata is the total number of bits of the parameters required to transmit the noise-like image.
The nonlinear mappings FAi (i = 1, ..., K) in step (2) are K nonlinear mappings based on multilayer feedforward neural networks. The structure of each multilayer feedforward neural network is N1-N2-N3, where N1 is the number of input-layer nodes, N2 the number of hidden-layer nodes and N3 the number of output-layer nodes. K classes of images Pi (i = 1, ..., K) are selected, each Pi having the same size as the original image A, and each class corresponds to a target image B0i (i = 1, ..., K). The K neural networks are trained separately with a back-propagation (BP) type algorithm, and for each class an image Bi (i = 1, ..., K) similar to the target image B0i is obtained. The PSNR or mean-square error (MSE) between B0i and Bi is set in advance; when the PSNR or MSE meets the requirement, the algorithm terminates and K sets of neural-network weights Wi (i = 1, ..., K) are formed. The relation between the mean-square error MSE and the PSNR is:
PSNR = 10·log10[(2^Q - 1)^2 / MSE]  (dB)
where Q is the number of quantization bits of the original image A. When the original gray-level image A is input, the K multilayer feedforward neural networks perform their nonlinear mappings according to the K sets of weights Wi (i = 1, ..., K), producing K different mapped images; the mapping with the maximum PSNR is selected as the final nonlinear mapping FAj.
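For illustration, the PSNR computation and the selection of the best-matching mapping described above can be sketched as follows in Python with NumPy (the function names and the 8-bit default are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def psnr(img_a: np.ndarray, img_b: np.ndarray, q_bits: int = 8) -> float:
    """Peak signal-to-noise ratio between two equally sized grayscale images."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    peak = (2 ** q_bits - 1) ** 2
    return 10.0 * np.log10(peak / mse)

def select_best_mapping(mapped_images, noise_images, q_bits: int = 8):
    """Return the index j maximizing PSNR(Bi, B0i) over the K candidate mappings."""
    scores = [psnr(b, b0, q_bits) for b, b0 in zip(mapped_images, noise_images)]
    j = int(np.argmax(scores))
    return j, scores[j]
```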
The noise-like images B0i are a class of pseudo-random images generated from the Logistic chaotic map, whose model is X_{n+1} = μ·X_n·(1 - X_n), n ∈ {0, 1, 2, ...};
where μ is called the branch (bifurcation) parameter, 3.5699456 < μ ≤ 4, and X_0 is the initial value, 0 < X_0 < 1. Given a value of μ and an initial value X_0, the sequence X_{n+1} can be computed. Thresholding X_{n+1} yields a 0-1 sequence Si (i = 1, ..., MNQ): Si = 1 when X_{n+1} ≥ 0.5, and Si = 0 when 0 < X_{n+1} < 0.5. The sequence Si is divided into groups of Q bits, each group is converted from binary to decimal to form a gray value, and one noise-like image is thereby obtained. Choosing other values of μ and X_0 yields the remaining noise-like images, so that K noise-like images B0i are formed, namely B01, B02, ..., B0K.
The transformation parameters of the noise-like image B0j and of the nonlinear mapping FAj described in step (3) are encoded. After encoding, the total amount of data is H bytes, i.e. the total number of bits Cdata is 8H, and the compression ratio is R = MNQ/(8H), where H = H1 + H2 + H3 + H4 + H5 + H6 and each Hi (i = 1, ..., 6) is a positive integer. The transformation-parameter coding format is as follows:
Table 1 Transformation-parameter coding format (unit: byte): start bit (H1) | image number j (H2) | image size M, N, Q (H3) | image parameters μ, X_0 (H4) | network number (H5) | reserved data (H6)
The inverse transform in step (5) consists of the nonlinear mappings FBi (i = 1, ..., K), which are K nonlinear mappings based on multilayer feedforward neural networks. The structure of each multilayer feedforward neural network is N4-N5-N6, where N4 is the number of input-layer nodes, N5 the number of hidden-layer nodes and N6 the number of output-layer nodes. K classes of images B0i (i = 1, ..., K), namely B01, B02, ..., B0K, are selected, each B0i having the same size as the original image A, and each class corresponds to a target image Pi (i = 1, ..., K). The K neural networks are trained separately with a back-propagation (BP) type algorithm, and for each class an image Ai (i = 1, ..., K) similar to the target image Pi is obtained. The PSNR or mean-square error MSE between Ai and Pi is set in advance; when the PSNR or MSE meets the requirement, the algorithm terminates and K sets of neural-network weights Vi (i = 1, ..., K) are formed. The relation between the mean-square error MSE and the PSNR is:
PSNR = 10·log10[(2^Q - 1)^2 / MSE]  (dB)
where Q is the number of quantization bits of the original image A. When the noise-like image B0j is input, the multilayer feedforward neural network performs the nonlinear mapping with the j-th set of weights Vj, the set corresponding to the maximum PSNR among the K sets, producing the j-th image, which is the recovered original image A.
The image super-large compression transmission system is characterized by comprising a transmitting-end processing module and a receiving-end processing module; the transmitting-end processing module comprises an image input unit, an image mapping unit and an image-parameter coding unit; the receiving-end processing module comprises an image-parameter decoding unit, an image inverse-mapping unit and an image output unit.
Image input unit: the K chosen noise-like images B0i (i = 1, ..., K) are pre-stored in the image input unit; the original gray-level image A is input, forming the input data required by the nonlinear mappings. The noise-like images B0i have the same size as the original gray-level image A, where K is a positive integer, the size of the original gray-level image A is M rows by N columns with Q-bit quantization, and the total number of bits is MNQ.
Image mapping unit: the K nonlinear mappings FAi (i = 1, ..., K) are applied to the original gray-level image A, transforming it into K mapped images Bi (i = 1, ..., K), the noise-like image corresponding to Bi being B0i (i = 1, ..., K); the PSNR between each mapped image Bi and its noise-like image B0i is computed, and the noise-like image B0j and nonlinear mapping FAj corresponding to the maximum PSNR are selected.
Image-parameter coding unit: according to the required compression ratio R, the transmission-parameter information of the noise-like image B0j and of the nonlinear mapping FAj is encoded and then transmitted to the image-parameter decoding unit of the receiving end.
Image-parameter decoding unit: receives the information from the image-parameter coding unit, extracts the parameters therein by decoding, and obtains the image B0j by calculation.
Image inverse-mapping unit: applies the inverse mapping FBj, which corresponds to the nonlinear mapping FAj, to the computed noise-like image B0j, obtaining the recovered image data A.
Image output unit: outputs the recovered image data A in the image data format desired by the user.
Compared with the prior art, the present invention has the following advantages:
Conventional image-data compression transmission methods and systems compress the original image directly (time-domain or frequency-domain transforms, etc.) and then transmit it, or divide the image into blocks and transmit it after compression. The present invention instead turns the image into another, noise-like image and transmits parameters rather than a compressed encoding of the image itself; it is essentially different from conventional image-data compression methods, and the effect is extremely pronounced.
The compression ratio R can be set in advance, and the PSNR between the recovered image and the original image can reach or exceed the performance of the compression methods in practical use today.
(1) The present invention adopts a transform-processing approach: an image is transformed into another, noise-like image, and only the relevant parameters of that noise image are transmitted. The transform itself already provides a certain degree of secrecy.
(2) The present invention applies the transform processing before transmission: the image is first transformed, using multiple feedforward neural networks that form K nonlinear mappings.
(3) The compression performance of the present invention can be known in advance, because the precision of the image transform processing can be preset through learning and training.
(4) The transmission system of the present invention can reach super-large compression ratios that current conventional compression transmission systems cannot attain.
Brief description of the drawings
Fig. 1 is a block diagram of the transmission system of the present invention;
Fig. 2a is the first example of image mapping;
Fig. 2b is the second example of image mapping;
Fig. 2c is the third example of image mapping;
Fig. 2d is the fourth example of image mapping;
Fig. 3 is a schematic diagram of the image mapping of the present invention;
Fig. 4 is a schematic diagram of the image inverse mapping.
Embodiment
The image super-large compression transmission method and system of the present invention are proposed to solve the problem of securely transmitting large-volume, high-resolution images. "Super-large compression" means a compression ratio far greater than what current conventional compression methods can reach, for example above 100 times, up to hundreds, thousands or even tens of thousands of times. In the present invention the transmitting end first applies a nonlinear transform to the image, turning it into another, noise-like image whose generation rule is known in advance; this image is not transmitted itself. Instead, the parameters or expression that generate the noise image are transmitted, which avoids transmitting a large amount of data and greatly improves the compression ratio. The compression ratio can reach thousands of times, and the compression quality can meet the user's requirements.
The scheme adopted by the present invention is to first transform the original image into a noise-like image, which can technically be described by a few parameters; these parameters are encoded and transmitted, and at the receiving end the original image is recovered through decoding and an inverse transform. The specific implementation steps are as follows:
(1) K noise-like images B0i (i = 1, ..., K) are determined according to the original gray-level image A to be sent; each noise-like image B0i has the same size as the original gray-level image A, where K is a positive integer, the size of the original gray-level image A is M rows by N columns with Q-bit quantization, and the total number of bits is MNQ; the format of the original gray-level image A is converted into the image-data format required by the nonlinear mappings FAi.
The method of determining the noise-like images B0i is as follows:
According to the size of the original image A, noise-like images of the same size are chosen, each M rows by N columns with Q-bit quantization; the gray value of a noise-like image is an integer between 0 and 2^Q - 1.
The total number of bits Cdata of the noise-like-image transmission parameters is far smaller than the total number of bits MNQ of the original gray-level image A, and the compression ratio R is MNQ/Cdata.
In a specific implementation, pseudo-random images generated from the Logistic chaotic map can be used as the noise-like images B0i. In recent years chaos theory has developed rapidly at home and abroad. Chaos refers to the evolution of a system that suddenly changes from an ordered state to a disordered one. The chaotic sequences produced by a chaotic system are deterministic, pseudo-random, aperiodic and non-convergent, and they depend extremely sensitively on the initial value. A typical example is the one-dimensional Logistic map.
Figs. 2a-2d show an example in which the image lena_128.bmp (image A) is mapped to a chaotic image of the same size (image B). One possible neural-network structure is: 256 input nodes, 256 hidden nodes, 256 output nodes. Other structures are also possible, e.g. 64 inputs, 64 hidden nodes and 64 outputs, or 64 inputs, 32 hidden nodes and 64 outputs.
The model of the Logistic chaotic map is X_{n+1} = μ·X_n·(1 - X_n), n ∈ {0, 1, 2, ...},
where μ is called the branch (bifurcation) parameter; for 3.5699456 < μ ≤ 4 the Logistic map operates in the chaotic state. X_0 is the initial value, 0 < X_0 < 1. Given a value of μ and an initial value X_0, the sequence X_{n+1} can be computed. Thresholding X_{n+1} yields a 0-1 sequence Si (i = 1, ..., MNQ) (Si = 1 when X_{n+1} ≥ 0.5; Si = 0 when 0 < X_{n+1} < 0.5). The sequence Si is divided into groups of Q bits, each group is converted from binary to decimal to form a gray value, and a noise-like image B0 is thereby obtained. Choosing other values of μ and X_0 yields the remaining noise-like images, so that K noise-like images B0i (i = 1, ..., K) are formed.
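A minimal sketch of this Logistic-map noise-image generator, assuming NumPy and Q-bit gray values packed most-significant-bit first; the specific μ and X_0 values in the usage line are only illustrative:

```python
import numpy as np

def logistic_noise_image(mu: float, x0: float, m: int, n: int, q: int) -> np.ndarray:
    """Generate one M x N noise-like image with Q-bit gray levels from the
    Logistic map x_{n+1} = mu * x_n * (1 - x_n), thresholded at 0.5."""
    assert 3.5699456 < mu <= 4 and 0 < x0 < 1
    bits = np.empty(m * n * q, dtype=np.uint8)
    x = x0
    for i in range(bits.size):
        x = mu * x * (1.0 - x)
        bits[i] = 1 if x >= 0.5 else 0
    # group every Q bits into one gray value (binary -> decimal)
    weights = 2 ** np.arange(q - 1, -1, -1)
    gray = bits.reshape(-1, q) @ weights
    return gray.reshape(m, n).astype(np.uint16 if q > 8 else np.uint8)

# e.g. one candidate B0i; different (mu, x0) pairs give the other K - 1 images
b0 = logistic_noise_image(mu=3.99, x0=0.37, m=128, n=128, q=8)
```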
(2) K nonlinear mappings FAi (i = 1, ..., K) are applied to the original gray-level image A, transforming it into K mapped images Bi (i = 1, ..., K), the noise-like image corresponding to Bi being B0i (i = 1, ..., K); the PSNR between each mapped image Bi and its noise-like image B0i is computed, and the noise-like image B0j and nonlinear mapping FAj corresponding to the maximum PSNR are selected.
The nonlinear mappings FAi (i = 1, ..., K) in step (2) are based on K multilayer feedforward neural networks, each with structure N1-N2-N3, where N1 is the number of input-layer nodes, N2 the number of hidden-layer nodes and N3 the number of output-layer nodes. K classes of images P1, ..., PK (of the same size as image A) are selected, and each class corresponds to a target image B0i (i = 1, ..., K): the target image for P1 is B01, the target image for P2 is B02, ..., and the target image for PK is B0K. The K neural networks are trained separately with a BP (back-propagation) type algorithm, and for each class an image Bi (i = 1, ..., K) similar to the target image B0i is obtained. The PSNR or mean-square error MSE between B0i and Bi is set in advance; when the PSNR or MSE meets the requirement, the algorithm terminates and K sets of neural-network weights Wi (i = 1, ..., K) are formed. The PSNR is set as required and is generally greater than 40 dB. When image A is input, the networks apply the nonlinear mappings FAi to A according to the K sets of weights Wi obtained, producing the mapped images Bi, and the mapping with the maximum PSNR is selected as the final nonlinear mapping FAj.
(3) According to the required compression ratio R, the transformation parameters of the noise-like image B0j and of the nonlinear mapping FAj are encoded and then transmitted to the receiving end.
(4) After receiving the data, the receiving end extracts the parameters therein by decoding and obtains the image B0j by calculation.
(5) The inverse mapping FBj (corresponding to the nonlinear mapping FAj) is applied to the computed noise-like image B0j to obtain the original gray-level image A.
The inverse mapping consists of the nonlinear mappings FBi (i = 1, ..., K), which are K nonlinear mappings based on multilayer feedforward neural networks. The structure of each multilayer feedforward neural network is N4-N5-N6, where N4 is the number of input-layer nodes, N5 the number of hidden-layer nodes and N6 the number of output-layer nodes. K classes of images B01, B02, ..., B0i, ..., B0K are selected (each B0i having the same size as the original image A), and each class corresponds to a target image Pi (i = 1, ..., K): the target image for B01 is P1, the target image for B02 is P2, ..., and the target image for B0K is PK. The K neural networks are trained separately with a back-propagation (BP) type algorithm, and for each class an image Ai (i = 1, ..., K) similar to the target image Pi is obtained. The PSNR or mean-square error MSE between Pi and Ai is set in advance; when the PSNR or MSE meets the requirement, the algorithm terminates and K sets of neural-network weights Vi (i = 1, ..., K) are formed. The relation between the mean-square error MSE and the PSNR is:
PSNR = 10·log10[(2^Q - 1)^2 / MSE]  (dB)
where Q is the number of quantization bits of the original image A. When the noise-like image B0j is input, the multilayer feedforward neural network performs the nonlinear mapping with the j-th set of weights Vj among the K sets (the set corresponding to the maximum PSNR), producing the j-th image Aj, which is the recovered original image A.
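At the receiving end the recovery amounts to a forward pass through the inverse network with the stored weights Vj. A minimal sketch, assuming sigmoid activations and block vectors scaled to [0, 1] (assumptions, since the patent does not fix the activation function or the data scaling):

```python
import numpy as np

def inverse_map_blocks(b0j_blocks, w1, b1, w2, b2, q=8):
    """Forward pass of a trained N4-N5-N6 inverse network FBj: maps block vectors
    of the regenerated noise-like image B0j back toward block vectors of the
    original image A (inputs and outputs scaled to [0, 1])."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    out = sig(sig(b0j_blocks @ w1 + b1) @ w2 + b2)
    # rescale to Q-bit gray levels before reassembling the image
    return np.clip(np.rint(out * (2 ** q - 1)), 0, 2 ** q - 1)
```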
The recovered gray-level image A is then converted to the data format the user desires. Images exist in raw, bmp, tif, jpg and other formats, and many image-format conversion tools are currently available.
As shown in Fig. 1, the image super-large compression transmission system of the present invention comprises a transmitting-end processing module and a receiving-end processing module; the transmitting-end processing module comprises an image input unit, an image mapping unit and an image-parameter coding unit; the receiving-end processing module comprises an image-parameter decoding unit, an image inverse-mapping unit and an image output unit.
Image input unit: the K noise-like images B0i (i = 1, ..., K) chosen in advance are stored in the preprocessing unit; the gray-level image A is input, forming the input data required by the nonlinear mappings FAi. The noise-like images B0i have the same size as the original gray-level image A, where K is a positive integer, the size of the original gray-level image A is M rows by N columns with Q-bit quantization, and the total number of bits is MNQ.
Image mapping unit: the K nonlinear mappings FAi (i = 1, ..., K) are applied to the original gray-level image A, transforming it into K mapped images Bi (i = 1, ..., K): the mapped image corresponding to FA1 is B1, the mapped image corresponding to FA2 is B2, ..., and the mapped image corresponding to FAK is BK; the noise-like image corresponding to Bi is B0i (i = 1, ..., K). The PSNR between each mapped image Bi and its noise-like image B0i is computed, and the noise-like image B0j and nonlinear mapping FAj corresponding to the maximum PSNR are selected.
Image-parameter coding unit: according to the required compression ratio R, the transmission-parameter information of the noise-like image B0j and of the nonlinear mapping FAj is encoded and then transmitted to the image-parameter decoding unit of the receiving end.
In one embodiment of the present invention, the start bit, image number, image size, image parameters and reserved data are the transformation parameters of the noise-like image B0j, and the network number is the transformation parameter of the nonlinear mapping FAj.
Image-parameter decoding unit: receives the information from the image-parameter coding unit, extracts the parameters therein by decoding, and obtains the image B0j by calculation.
After receiving the transformation parameters H1, ..., H6, the receiving end locates H2, H3, H4, etc. in order from the start bit H1, then computes the image B0j according to the generation rule of the noise-like image. The detailed process is as follows:
The value of μ and the initial value X_0 are determined from H4; M, N and Q are determined from H3; and the index j of the mapped image Bi, i.e. the index j of the noise-like image B0i (which image B0j to regenerate), is determined from H2.
X_{n+1} = μ·X_n·(1 - X_n), n ∈ {0, 1, 2, ...} is computed;
thresholding X_{n+1} yields the 0-1 sequence Si (i = 1, ..., MNQ) (Si = 1 when X_{n+1} ≥ 0.5; Si = 0 when 0 < X_{n+1} < 0.5); the sequence Si is divided into groups of Q bits, each group is converted from binary to decimal to form a gray value, and the noise-like image B0j is thereby obtained.
Image inverse-mapping unit: the inverse transform is applied to the computed noise-like image B0j to obtain the original gray-level image A.
Fig. 3 shows the principle of the image nonlinear mapping; the network structure is N1-N2-N3.
The present invention adopts nonlinear mapping and uses neural networks for the processing. The image is first divided into m × n blocks, giving vectors of L (= m × n) values each, where m ≤ M, n ≤ N, and m and n are positive integers. A multilayer feedforward neural network (a three-layer BP network) is constructed, with N1 input-layer neurons, N2 hidden-layer neurons and N3 output-layer neurons. A BP-type algorithm is used for learning: from the K classes of input images Pi (i = 1, ..., K) the K classes of output images B0i (i = 1, ..., K) are obtained, forming K sets of network weights Wi (i = 1, ..., K).
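A minimal sketch of this m × n blocking, assuming NumPy and that m divides M and n divides N; the 16 × 16 block size in the usage line is only an example matching an N1 = 256 input layer:

```python
import numpy as np

def image_to_block_vectors(img: np.ndarray, m: int, n: int) -> np.ndarray:
    """Split an M x N image into non-overlapping m x n blocks and flatten each
    block into a vector of L = m*n values."""
    big_m, big_n = img.shape
    blocks = (img.reshape(big_m // m, m, big_n // n, n)
                 .swapaxes(1, 2)
                 .reshape(-1, m * n))
    return blocks

# e.g. 16 x 16 blocks of a 128 x 128 image -> 64 vectors of L = 256 values each
vectors = image_to_block_vectors(np.zeros((128, 128), dtype=np.uint8), 16, 16)
```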
A multilayer feedforward neural network continually adjusts the connection weights of its nodes in response to the input samples, thereby capturing the nonlinear relation between the inputs and outputs of the samples. The most commonly used multilayer feedforward training algorithm, back-propagation (BP), accomplishes the nonlinear input-to-output mapping through minimization of a cost function. Its basic idea is: if forward propagation with the existing weights does not produce the desired output, the error is propagated backwards and the weights of the network nodes are repeatedly revised (iterated) so that the cost function gradually decreases, until a preset requirement is reached, generally when the cost function falls below some suitably small positive number or when further iterations no longer reduce it but merely oscillate.
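A minimal NumPy sketch of such a three-layer BP training loop, operating on block vectors like those produced by the splitting sketch above, scaled to [0, 1]; the sigmoid activations, learning rate and epoch limit are assumptions, since the patent fixes only the N1-N2-N3 structure and the PSNR/MSE stopping criterion:

```python
import numpy as np

def train_mapping(x, t, n2=64, lr=0.5, psnr_target=40.0, q=8, max_epochs=20000, seed=0):
    """Minimal three-layer BP network (N1-N2-N3) trained so that its output for
    the input block vectors x approaches the target block vectors t (both
    scaled to [0, 1]).  Returns one weight set, a 'Wi' in the patent's terms."""
    rng = np.random.default_rng(seed)
    n1, n3 = x.shape[1], t.shape[1]
    w1 = rng.normal(0.0, 0.1, (n1, n2)); b1 = np.zeros(n2)
    w2 = rng.normal(0.0, 0.1, (n2, n3)); b2 = np.zeros(n3)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    peak = (2 ** q - 1) ** 2
    for _ in range(max_epochs):
        h = sig(x @ w1 + b1)                       # hidden layer
        y = sig(h @ w2 + b2)                       # output layer
        err = y - t
        mse = np.mean((err * (2 ** q - 1)) ** 2)   # MSE in gray-level units
        if 10.0 * np.log10(peak / mse) >= psnr_target:
            break                                  # preset PSNR reached: stop
        d_y = err * y * (1.0 - y)                  # backpropagate the squared error
        d_h = (d_y @ w2.T) * h * (1.0 - h)
        w2 -= lr * h.T @ d_y / len(x); b2 -= lr * d_y.mean(axis=0)
        w1 -= lr * x.T @ d_h / len(x); b1 -= lr * d_h.mean(axis=0)
    return w1, b1, w2, b2
```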
The peak signal-to-noise ratio is PSNR = 10·log10[(2^Q - 1)^2 / MSE] (dB), where MSE is the mean-square error between the two images.
The PSNR can be set according to system requirements, generally at 40 dB or above.
The transformation parameters of the noise-like image B0j (μ and the initial value X_0) are encoded; after parameter coding the total amount of data is H bytes, i.e. the total number of bits Cdata is 8H, with H = H1 + H2 + H3 + H4 + H5 + H6. Because Cdata is very small, the compression ratio R = MNQ/Cdata is very large. The format is as follows:
Table 1 Parameter coding format (unit: byte): start bit (H1) | image number j (H2) | image size M, N, Q (H3) | image parameters μ, X_0 (H4) | network number (H5) | reserved data (H6)
Table 2 Parameter coding example 1 (unit: byte)
H = 32 bytes, Cdata = 256 bits
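One way the six fields of Table 1 could be packed into the 32 bytes of example 1 is sketched below with Python's struct module; the individual byte widths H1-H6 are assumptions chosen to total 32, since the patent fixes only the field order and the total H:

```python
import struct

# Assumed field widths (bytes): H1=1 start bit, H2=1 image number j,
# H3=5 image size (M, N, Q), H4=16 map parameters (mu, x0 as doubles),
# H5=1 network number, H6=8 reserved; total H = 32, Cdata = 256 bits.
FMT = ">BBHHBddB8x"
assert struct.calcsize(FMT) == 32

def encode_params(j, m, n, q, mu, x0, net_id):
    return struct.pack(FMT, 1, j, m, n, q, mu, x0, net_id)   # start bit set to 1

def decode_params(payload):
    _start, j, m, n, q, mu, x0, net_id = struct.unpack(FMT, payload)
    return j, m, n, q, mu, x0, net_id

# At the receiver the decoded (mu, x0, m, n, q) feed the Logistic-map generator
# sketched earlier to regenerate B0j, and net_id selects the inverse weights Vj.
```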
The compression ratio R corresponding to Table 2 can be calculated as follows:
R = QMN/(8H) = 8MN/(8H) = MN/H (for Q = 8)
M = 1024, N = 1024, H = 32: R = 32768, i.e. more than 30,000-fold compression!
M = 256, N = 256, H = 32: R = 2048, i.e. more than 2,000-fold compression!
M = 128, N = 128, H = 32: R = 512, i.e. more than 500-fold compression!
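The ratios listed above can be checked with a few lines of Python (purely illustrative):

```python
# Quick check of the compression ratios quoted above (Q = 8, H = 32 bytes).
for m, n in [(1024, 1024), (256, 256), (128, 128)]:
    q, h = 8, 32
    r = (m * n * q) / (8 * h)   # R = MNQ / (8H), which reduces to MN/H for Q = 8
    print(f"{m} x {n}: R = {r:.0f}")
# prints 32768, 2048, 512
```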
Table 3 Parameter coding example 2 (unit: byte)
H = 64 bytes, Cdata = 512 bits.
The compression ratio R corresponding to Table 3 can be calculated as follows:
R = QMN/(8H) = 8MN/(8H) = MN/H (for Q = 8)
M = 1024, N = 1024, H = 64: R = 16384, i.e. more than 10,000-fold compression!
M = 256, N = 256, H = 64: R = 1024, i.e. more than 1,000-fold compression!
M = 128, N = 128, H = 64: R = 256, i.e. more than 200-fold compression!
Fig. 4 shows the schematic diagram of the image inverse mapping; the network structure is N4-N5-N6.
The inverse mapping also consists of the nonlinear mappings FBi (i = 1, ..., K), which are K nonlinear mappings based on multilayer feedforward neural networks. The structure of each multilayer feedforward neural network is N4-N5-N6, where N4 is the number of input-layer nodes, N5 the number of hidden-layer nodes and N6 the number of output-layer nodes. K classes of images B0i (i = 1, ..., K), namely B01, B02, ..., B0i, ..., B0K, are selected, and each class corresponds to a target image Pi (i = 1, ..., K). The K neural networks are trained separately with a back-propagation (BP) type algorithm, and for each class an image Ai (i = 1, ..., K) similar to the target image Pi is obtained. The PSNR or mean-square error MSE between Pi and Ai is set in advance; when the PSNR or MSE meets the requirement, the algorithm terminates and K sets of neural-network weights Vi (i = 1, ..., K) are formed. The relation between the mean-square error MSE and the PSNR is:
PSNR = 10·log10[(2^Q - 1)^2 / MSE]  (dB)
where Q is the number of quantization bits of the original image A. When the noise-like image B0j is input, the multilayer feedforward neural network performs the nonlinear mapping with the j-th set of weights Vj among the K sets (the set corresponding to the maximum PSNR), producing the j-th image Aj, which is the recovered original image A.
The contents not described in detail in this specification belong to techniques well known to those skilled in the art.

Claims (4)

1. An image super-large compression transmission method, characterized by comprising the following steps:
(1) K noise-like images B0i (i = 1, ..., K) are determined according to the original gray-level image A to be sent; each noise-like image B0i has the same size as the original gray-level image A, where K is a positive integer, the size of the original gray-level image A is M rows by N columns with Q-bit quantization, and the total number of bits is MNQ;
(2) K nonlinear mappings FAi (i = 1, ..., K) are applied to the original gray-level image A, transforming it into K mapped images Bi (i = 1, ..., K), the noise-like image corresponding to Bi being B0i (i = 1, ..., K); the peak signal-to-noise ratio (PSNR) between each mapped image Bi and its noise-like image B0i is computed, and the noise-like image B0j and nonlinear mapping FAj corresponding to the maximum PSNR are selected;
(3) according to the required compression ratio R, the parameter information of the noise-like image B0j and of the nonlinear mapping FAj is encoded and then transmitted to the receiving end;
(4) after receiving the data, the receiving end extracts the parameters therein and obtains the image B0j by decoding;
(5) inverse mapping is applied to the noise-like image B0j to recover the original gray-level image A, and data-format conversion is applied to the gray-level image A to obtain images in the various formats the user desires;
the method of determining the noise-like images B0i in step (1) is as follows:
according to the size of the original image A, noise-like images of the same size are chosen; each noise-like image is M rows by N columns with Q-bit quantization, for a total of MNQ bits; the gray value Gi of a noise-like image is an integer between 0 and 2^Q - 1; the compression ratio R is MNQ/Cdata, where Cdata is the total number of bits of the parameters required to transmit the noise-like image;
the nonlinear mappings FAi (i = 1, ..., K) in step (2) are K nonlinear mappings based on multilayer feedforward neural networks, the structure of each multilayer feedforward neural network being N1-N2-N3, where N1 is the number of input-layer nodes, N2 the number of hidden-layer nodes and N3 the number of output-layer nodes; K classes of images Pi (i = 1, ..., K) are selected, each Pi having the same size as the original image A, and each class corresponds to a target image B0i (i = 1, ..., K); the K neural networks are trained separately with a back-propagation (BP) type algorithm, and for each class an image Bi (i = 1, ..., K) similar to the target image B0i is obtained; the PSNR or mean-square error MSE between B0i and Bi is set in advance, with PSNR = 10·log10[(2^Q - 1)^2 / MSE] (dB), where Q is the number of quantization bits of the original image A; when the PSNR or MSE meets the requirement, the algorithm terminates and K sets of neural-network weights Wi (i = 1, ..., K) are formed; when the original gray-level image A is input, the K multilayer feedforward neural networks perform their nonlinear mappings according to the K sets of weights Wi (i = 1, ..., K), producing K different mapped images, and the one with the maximum PSNR is selected as the final nonlinear mapping FAj;
the inverse transform in step (5) consists of the nonlinear mappings FBi (i = 1, ..., K), which are K nonlinear mappings based on multilayer feedforward neural networks, the structure of each multilayer feedforward neural network being N4-N5-N6, where N4 is the number of input-layer nodes, N5 the number of hidden-layer nodes and N6 the number of output-layer nodes; K classes of images B0i (i = 1, ..., K), namely B01, B02, ..., B0K, are selected, each B0i having the same size as the original image A, and each class corresponds to a target image Pi (i = 1, ..., K); the K neural networks are trained separately with a back-propagation (BP) type algorithm, and for each class an image Ai (i = 1, ..., K) similar to the target image Pi is obtained; the PSNR or mean-square error MSE between Ai and Pi is set in advance, with PSNR = 10·log10[(2^Q - 1)^2 / MSE] (dB), where Q is the number of quantization bits of the original image A; when the PSNR or MSE meets the requirement, the algorithm terminates and K sets of neural-network weights Vi (i = 1, ..., K) are formed; when the noise-like image B0j is input, the multilayer feedforward neural network performs the nonlinear mapping with the j-th set of weights Vj among the K sets, the set corresponding to the maximum PSNR, producing the j-th image, which is the recovered original image A.
2. The image super-large compression transmission method according to claim 1, characterized in that the noise-like images B0i are a class of pseudo-random images generated from the Logistic chaotic map, whose model is X_{n+1} = μ·X_n·(1 - X_n), n ∈ {0, 1, 2, ...};
where μ is called the branch (bifurcation) parameter, 3.5699456 < μ ≤ 4, and X_0 is the initial value, 0 < X_0 < 1; given a value of μ and an initial value X_0, the sequence X_{n+1} can be computed; thresholding X_{n+1} yields a 0-1 sequence Si (i = 1, ..., MNQ): Si = 1 when X_{n+1} ≥ 0.5, and Si = 0 when 0 < X_{n+1} < 0.5; the sequence Si is divided into groups of Q bits, each group is converted from binary to decimal to form a gray value, and one noise-like image is thereby obtained; choosing other values of μ and X_0 yields the remaining noise-like images, so that K noise-like images B0i are formed, namely B01, B02, ..., B0K.
3. The image super-large compression transmission method according to claim 1, characterized in that the transformation parameters of the noise-like image B0j and of the nonlinear mapping FAj described in step (3) are encoded; after encoding, the total amount of data is H bytes, and the compression ratio is R = MNQ/(8H), where H = H1 + H2 + H3 + H4 + H5 + H6 and each Hi (i = 1, ..., 6) is a positive integer; the transformation-parameter coding format is specifically as follows: the start bit occupies H1, the image number j occupies H2, the image size M, N, Q occupies H3, the image parameters μ and X_0 occupy H4, the network number occupies H5, and the reserved data occupies H6; the start bit, image number, image size, image parameters and reserved data are the transformation parameters of the noise-like image B0j, and the network number is the transformation parameter of the nonlinear mapping FAj.
4. An image super-large compression transmission system, characterized by comprising a transmitting-end processing module and a receiving-end processing module, the transmitting-end processing module comprising an image input unit, an image mapping unit and an image-parameter coding unit, and the receiving-end processing module comprising an image-parameter decoding unit, an image inverse-mapping unit and an image output unit;
image input unit: the K chosen noise-like images B0i (i = 1, ..., K) are pre-stored in the image input unit; the original gray-level image A is input, forming the input data required by the nonlinear mappings; the noise-like images B0i have the same size as the original gray-level image A, where K is a positive integer, the size of the original gray-level image A is M rows by N columns with Q-bit quantization, and the total number of bits is MNQ; the method of determining the noise-like images B0i is as follows: according to the size of the original image A, K noise-like images B0i (i = 1, 2, ..., K) of the same size are chosen, the gray value Gi of a noise-like image is an integer between 0 and 2^Q - 1, the compression ratio R is MNQ/Cdata, and Cdata is the total number of bits of the parameters required to transmit the noise-like image;
image mapping unit: the K nonlinear mappings FAi (i = 1, ..., K) are applied to the original gray-level image A, transforming it into K mapped images Bi (i = 1, ..., K), the noise-like image corresponding to Bi being B0i (i = 1, ..., K); the PSNR between each mapped image Bi and its noise-like image B0i is computed, and the noise-like image B0j and nonlinear mapping FAj corresponding to the maximum PSNR are selected; the nonlinear mappings FAi (i = 1, ..., K) are K nonlinear mappings based on multilayer feedforward neural networks, the structure of each multilayer feedforward neural network being N1-N2-N3, where N1 is the number of input-layer nodes, N2 the number of hidden-layer nodes and N3 the number of output-layer nodes; K classes of images Pi (i = 1, ..., K) are selected, each Pi having the same size as the original image A, and each class corresponds to a target image B0i (i = 1, ..., K); the K neural networks are trained separately with a back-propagation (BP) type algorithm, and for each class an image Bi (i = 1, ..., K) similar to the target image B0i is obtained; the PSNR or mean-square error MSE between B0i and Bi is set in advance, with PSNR = 10·log10[(2^Q - 1)^2 / MSE] (dB), where Q is the number of quantization bits of the original image A; when the PSNR or MSE meets the requirement, the algorithm terminates and K sets of neural-network weights Wi (i = 1, ..., K) are formed; when the original gray-level image A is input, the K multilayer feedforward neural networks perform their nonlinear mappings according to the K sets of weights Wi (i = 1, ..., K), producing K different mapped images, and the one with the maximum PSNR is selected as the final nonlinear mapping FAj;
image-parameter coding unit: according to the required compression ratio R, the transmission-parameter information of the noise-like image B0j and of the nonlinear mapping FAj is encoded and then transmitted to the image-parameter decoding unit of the receiving end;
image-parameter decoding unit: receives the information from the image-parameter coding unit, extracts the parameters therein by decoding, and obtains the image B0j by calculation;
image inverse-mapping unit: applies the inverse mapping FBj, which corresponds to the nonlinear mapping FAj, to the computed noise-like image B0j, obtaining the recovered image data A; the inverse transform consists of the nonlinear mappings FBi (i = 1, ..., K), which are K nonlinear mappings based on multilayer feedforward neural networks, the structure of each multilayer feedforward neural network being N4-N5-N6, where N4 is the number of input-layer nodes, N5 the number of hidden-layer nodes and N6 the number of output-layer nodes; K classes of images B0i (i = 1, ..., K), namely B01, B02, ..., B0K, are selected, each B0i having the same size as the original image A, and each class corresponds to a target image Pi (i = 1, ..., K); the K neural networks are trained separately with a back-propagation (BP) type algorithm, and for each class an image Ai (i = 1, ..., K) similar to the target image Pi is obtained; the PSNR or mean-square error MSE between Ai and Pi is set in advance, with PSNR = 10·log10[(2^Q - 1)^2 / MSE] (dB), where Q is the number of quantization bits of the original image A; when the PSNR or MSE meets the requirement, the algorithm terminates and K sets of neural-network weights Vi (i = 1, ..., K) are formed; when the noise-like image B0j is input, the multilayer feedforward neural network performs the nonlinear mapping with the j-th set of weights Vj among the K sets, the set corresponding to the maximum PSNR, producing the j-th image, which is the recovered original image A;
image output unit: outputs the recovered image data A in the image data format desired by the user.
CN201110460855.9A 2011-12-29 2011-12-29 Super large compression method and transmission system for images Active CN102523453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110460855.9A CN102523453B (en) 2011-12-29 2011-12-29 Super large compression method and transmission system for images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110460855.9A CN102523453B (en) 2011-12-29 2011-12-29 Super large compression method and transmission system for images

Publications (2)

Publication Number Publication Date
CN102523453A CN102523453A (en) 2012-06-27
CN102523453B true CN102523453B (en) 2014-11-19

Family

ID=46294232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110460855.9A Active CN102523453B (en) 2011-12-29 2011-12-29 Super large compression method and transmission system for images

Country Status (1)

Country Link
CN (1) CN102523453B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102821277B (en) * 2012-07-20 2015-02-11 西安空间无线电技术研究所 Data compression method and data compression system based on image set
CN104144343B (en) * 2014-07-11 2017-06-30 东北大学 A kind of digital image compression encrypts joint coding method
CN105844330B (en) * 2016-03-22 2019-06-28 华为技术有限公司 The data processing method and neural network processor of neural network processor
CN106301766B (en) * 2016-11-14 2019-08-09 成都信息工程大学 A kind of One-Way Encryption method based on chaos system
CN108694734B (en) * 2018-04-20 2022-03-04 西安空间无线电技术研究所 Data compression method suitable for complex image
CN110322414B (en) * 2019-07-05 2021-08-10 北京探境科技有限公司 Image data online quantitative correction method and system based on AI processor
CN112184732B (en) * 2020-09-27 2022-05-24 佛山市三力智能设备科技有限公司 Intelligent image processing method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Chen Anhong. Application of a compression coding method based on physiological visual characteristics in aerospace telemetry image transmission. Computer Measurement & Control, Oct. 2008, Vol. 16, No. 10, 1475-1477. *
Zhang Xiaolin, Yao Yuan. Research on key technologies for compression and transmission of UAV-borne SAR images. Aeronautical Science & Technology, 2008, No. 3, 34-38. *
Zhang Ruiju, Zhou Quan. Some research results on neural networks for remote sensing image compression. Chinese Journal of Stereology and Image Analysis, 2003, Vol. 8, No. 3, 183-186. *

Also Published As

Publication number Publication date
CN102523453A (en) 2012-06-27

Similar Documents

Publication Publication Date Title
CN102523453B (en) Super large compression method and transmission system for images
CN102369522B (en) The parallel pipeline formula integrated circuit of computing engines realizes
CN103426141B (en) A kind of image content authentication method and system
CN103413269A (en) Image steganography method and secret information extraction method
CN105227962B (en) A kind of lossless information concealing method based on data difference
CN102938841B (en) Method for hiding information in bearing image, image quality evaluation method and information transmission method
CN103796018B (en) A kind of remote sensing image Real Time Compression and progressive transmission system
CN102523452B (en) Method for conversion, compression and transmission of images
Xuan et al. High capacity lossless data hiding based on integer wavelet transform
CN104751400A (en) Secret image sharing method based on pixel mapping matrix embedding
CN108521535B (en) A kind of Information hiding transmission method based on image blend processing
CN107920250A (en) A kind of compressed sensing image coding and transmission method
CN105100801A (en) Large compression ratio data compression method based on big data
CN107105245B (en) High speed JPEG method for compressing image based on TMS320C6678 chip
CN105956990A (en) General type non-destructive information hiding algorithm for a large capacity image
CN105049669A (en) Method for transmitting multiple images hidden in one image
CN104935928A (en) High-efficiency image compression method based on spatial domain downsampling mode
CN116743936A (en) Ciphertext domain multi-party mobile information hiding method based on residual error network
CN106851197A (en) Unmanned plane transmits image method and system, unmanned plane and receiving device
CN103985096A (en) Hyperspectral image regression prediction compression method based on off-line training
CN104065974A (en) Image compression method and system
CN105376578A (en) Image compression method and device
CN107358568B (en) Noise-disguised image information hiding transmission method
CN110276728B (en) Human face video enhancement method based on residual error generation countermeasure network
Naik et al. Joint Encryption and Compression scheme for a multimodal telebiometric system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant