CN104486631B - Remote sensing image compression method based on human vision and adaptive scanning - Google Patents
Abstract
A remote sensing image compression method based on human vision and adaptive scanning, belonging to the technical field of online browsing of remote sensing images. The invention improves the visual quality of reconstructed remote sensing images while adding almost no overhead bits, solving the problem that existing conventional compression methods, although able to provide reconstructed images of good quality in the mean-square-error sense, yield unsatisfactory visual quality. The technical scheme is: first, the transform image is weighted with an importance weighting mask; then the energy of each weighted subband is computed, and the inter-subband scanning order is determined by arranging the energies in descending order; finally, for the scanning inside each weighted subband, the scan method is determined according to the characteristics of the subband. The invention effectively improves the visual quality of reconstructed remote sensing images and meets the growing demand for online browsing of remote sensing images. The invention is suitable for online browsing of remote sensing images.
Description
Technical field
The present invention relates to a remote sensing image compression method, and more particularly to a remote sensing image compression method based on human vision and adaptive scanning, belonging to the technical field of online browsing of remote sensing images.
Background art
With the development of sensor technology, the spatial and spectral resolution of remote sensing images has been greatly improved, which brings great convenience to their applications. On the other hand, such rich data come at the cost of a huge data volume. The latest space satellites produce high-dimensional remote sensing data measured in terabytes every day, which poses a great challenge for data storage and transmission. This situation can be eased by some traditional compression techniques, see documents EZW [1], SPIHT [2], SPECK [3], JPEG2000 [4], or by improvements on them, see documents [30]~[36]. Generally, these compression methods measure compression performance under the mean-square-error criterion; that is, under the same conditions, the method that yields a smaller mean square error is considered the better one. However, for a reconstructed image, a smaller mean square error does not mean that the image suits every application. In fact, the evaluation of a compression method should depend on the corresponding application. With the popularization of remote sensing images, a large number of applications are related to their online browsing. In addition, a current research hotspot, the Digital Earth, also needs remote sensing images of good visual quality to cover it. In this case, compression methods designed with reference to the human visual system (HVS) can better meet the application demand. The HVS is a complex system, and it has been demonstrated to be inconsistent with the mean-square-error criterion, see document [5]. Therefore, it is necessary to study compression methods for remote sensing images from the perspective of the visual mechanism of the human eye.
The visual perception of the human eye has always been a research focus, see document [5]. There are several ways to combine visual perception with coding methods. One is based on the discrete cosine transform (DCT), see documents [6]~[8]; another is based on the discrete wavelet transform (DWT), see documents [9]~[12]. Some coding methods are designed with reference to the just-noticeable-difference (JND) model, exploiting the fact that some small variations in an image cannot be detected by the human eye, see documents [13][14]. In addition, some vision-related compression methods are designed based on JPEG2000, see documents [16][17]. Recently, some HVS-based coding methods have been designed from the viewpoint of information theory; for example, document [15] proposes a visual-perception-based coding method that aims to preserve the second-order scale-invariant features of natural images and thereby guarantees the visual quality of the reconstructed image. Document [18] establishes some new HVS-related models and deduces the theoretical compression limit at which virtually lossless quality can be reached under specific conditions. However, all these methods are designed for natural images and do not take the peculiar properties of remote sensing images into account.
For natural images, the wavelet transform yields a sparse representation of the signal, which helps to obtain good coding results. However, compared with natural images, remote sensing images have unique properties. They usually contain a large amount of ground-object information, which makes their details extremely rich, such as geometric information, edge contours and textures, and even small targets. Therefore, for remote sensing images, a good compression method should be able to retain more detail information.
In recent years, some compression methods dedicated to remote sensing images have been proposed, see documents [19]~[22]. These methods compress images from several aspects, such as the oriented wavelet transform (OWT) or sparse representation. Because remote sensing images are usually acquired by push-broom sensors and are very large, scan-based compression methods are of particular interest. Document [23] proposes a scan-based method that acquires data in a push-broom manner and compresses it with JPEG2000. However, the high coding performance of JPEG2000 comes at the cost of high complexity. The Consultative Committee for Space Data Systems (CCSDS) has issued a standard for on-board compression; the CCSDS standard is a scan-based method, but it does not allow interactive decoding, and the number of wavelet decomposition levels is fixed at 3. In 2009, Vílchez et al. [24] extended the CCSDS standard so that it supports an arbitrary number of wavelet levels and allows several forms of decoding. However, all these methods are based on fixed scanning and do not take the image content into account.

In 2012, document [25] proposed a new scan-based method, called binary tree coding adaptively (BTCA), which is designed for the compression of remote sensing images and can significantly improve coding performance. Although the method is related to the image content to some extent, before the binary tree is built, the transform image is still scanned in a fixed order. It is well known that different images have different content; in other words, for different remote sensing images, the distribution of significant coefficients is different. Therefore, from the scanning point of view, a fixed scanning order cannot achieve the best coding performance.
Summary of the invention
The purpose of the present invention is to propose a remote sensing image compression method based on human vision and adaptive scanning, which can improve the visual quality of reconstructed remote sensing images while adding almost no overhead bits, solving the problem that existing conventional compression methods, although able to provide reconstructed images of good quality in the mean-square-error sense, yield unsatisfactory visual quality.
The technical solution adopted by the present invention to solve the above technical problem is:
A remote sensing image compression method based on human vision and adaptive scanning according to the present invention is realized by the following steps: step 1, establish a retina-based wavelet-domain visual sensitivity model; step 2, after step 1, combine the probability density function of the viewing distance between the human eye and the remote sensing image to generate an importance weighting mask, and weight the wavelet image; step 3, compute the energy of each weighted wavelet subband, and determine the inter-subband scanning order by arranging the energies in descending order; step 4, determine the scanning order inside each subband according to the characteristics of the subband; step 5, according to the inter-subband scanning order and the intra-subband scanning orders determined in steps 3 and 4, perform adaptive scanning on the weighted transform image X_w to generate a one-dimensional coefficient sequence; step 6, encode the one-dimensional coefficient sequence generated in step 5 with a binary tree coding method.
The beneficial effects of the invention are as follows:
1. On the basis of the human vision model, the present invention weights each position of the remote sensing image by its visual importance; in addition, according to the characteristics of different subbands, different scan methods are designed for inter-subband and intra-subband scanning. Experiments show that, at a given bit rate, the present invention can provide reconstructed images of better visual quality.
Brief description of the drawings
Fig. 1 is the overall framework of the invention, showing the processing of one remote sensing image;
Fig. 2 is a schematic diagram of the mapping model between the macula and the viewing distance, where 1 is the retina, 2 is the fovea, 3 is a remote sensing image, 4 is the projection of the fovea, p' denotes the mapping position on the retina of a point p on the remote sensing image, p_f denotes the projection of the fovea on the remote sensing image, r denotes the retina radius, e denotes the visual eccentricity, u is the distance from point p to p_f, and v is the viewing distance from the human eye to the image;
Fig. 3 is a schematic diagram of the probability distribution model of the viewing distance v;
Fig. 4 is a schematic diagram of the importance weighting mask;
Fig. 5 is a schematic diagram of the scan modes, where (a) is the "horizontal z scan", used for subbands with more horizontal information, and (b) is the "vertical z scan", used for subbands with more vertical information; in both, the dot marks the start of the scan and the arrows indicate the scanning direction;
Fig. 6 shows the original image and the corresponding scanning process in the verification of the invention, where (a) is Lunar (8 bits, size 512 × 512), (b) is the process of adaptively scanning the weighted transform image, with arrows indicating the scanning direction, (c) is the one-dimensional coefficient sequence generated by Morton scanning, and (d) is the one-dimensional coefficient sequence generated by the method of the invention;
Fig. 7 compares, for the test image "coastal-b1" in the verification of the invention, the reconstructed images obtained by the method of the invention and by other methods, where (a) is the original image, (b) is the region observed by the fovea at viewing distance v = 5, (c) and (d) are the reconstructed images obtained with the SPIHT compression method at bit rates of 0.0313 bpp and 0.0625 bpp respectively, (e) and (f) those obtained with JPEG2000 at the same two bit rates, (g) and (h) those obtained with BTCA, and (i) and (j) those obtained with the method of the invention; the visual improvement is most apparent inside the small white boxes;
Fig. 8 compares several quality evaluation indices at different bit rates for the test image "coastal-b1" in the verification of the invention, where (a) is the result of FWQI, (b) is the result of VSNR, and (c) is the result of MS-SSIM;
Fig. 9 shows some of the remote sensing images used in the verification of the invention, where (a) is ocean_2kb1, (b) is pavia1, (c) is pavia2, (d) is houston, (e) is pleiades_portdebouc_pan1, and (f) is pleiades_portdebouc_pan2.
Specific embodiment
Specific embodiments of the invention are further described below with reference to the accompanying drawings.

The human eye collects and processes visual information through the retina, see document [5]. In the retina, the spatial distribution of photoreceptors is uneven: their density is greatest in the macular region and falls off sharply with the distance from the macula; correspondingly, the perceivable spatial frequency band also shrinks sharply. The human eye cannot perceive spatial frequencies above a given cutoff frequency; that is, from the viewpoint of the HVS, there is no need to retain image information of excessively high spatial frequency. Therefore, at a given bit rate, if the visual quality of the reconstructed image is to be improved, the characteristics of the retina must be considered.
Embodiment 1: This embodiment is described with reference to Fig. 1. The remote sensing image compression method based on human vision and adaptive scanning described in this embodiment comprises the following steps: step 1, establish a retina-based wavelet-domain visual sensitivity model; step 2, after step 1, combine the probability density function of the viewing distance between the human eye and the remote sensing image to generate an importance weighting mask, and weight the wavelet image; step 3, compute the energy of each weighted wavelet subband, and determine the inter-subband scanning order by arranging the energies in descending order; step 4, determine the scanning order inside each subband according to the characteristics of the subband; step 5, according to the inter-subband scanning order and the intra-subband scanning orders determined in steps 3 and 4, perform adaptive scanning on the weighted transform image X_w to generate a one-dimensional coefficient sequence; step 6, encode the one-dimensional coefficient sequence generated in step 5 with a binary tree coding method.

At the decoding end, the bit stream is decoded with the binary tree model to obtain the reconstructed weighted transform image; the inverse of the visual weighting mask is then applied, and the reconstructed image is obtained by the inverse wavelet transform.
Embodiment 2: This embodiment differs from Embodiment 1 in that the detailed process of establishing the retina-based visual sensitivity model in step 1 is: step 1.1, establish a spatial-domain visual sensitivity model; step 1.2, establish a wavelet-domain visual sensitivity model.
Embodiment 3: This embodiment is described with reference to Fig. 2 and differs from Embodiment 1 or 2 in the detailed process of establishing the spatial-domain visual sensitivity model in step 1.1. For the retina, visual sensitivity is highest in the macular region. Document [26] establishes a mapping model between the macula and the viewing distance, as shown in Fig. 2. For a remote sensing image, the contrast threshold function in the spatial domain is given by formula (1), where f denotes the spatial frequency, e denotes the retinal eccentricity (in degrees), CT_0 denotes the minimum contrast threshold, α denotes the spatial-frequency decay constant, e_2 denotes the half-resolution eccentricity constant, and CT(f, e) denotes the visual contrast threshold, a function of f and e.

For a given eccentricity e, the corresponding visual cutoff frequency f_c is obtained from formula (1); that is, any frequency higher than f_c is invisible. Setting CT(f, e) = 1 yields the cutoff frequency f_c as formula (2). Clearly, the cutoff frequency f_c depends only on the eccentricity e.
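The bodies of formulas (1) and (2) are not reproduced in this text. The constants quoted in Embodiment 4 (CT_0 = 1/64, α = 0.106, e_2 = 2.3) match the Geisler-Perry contrast threshold model widely used in foveated coding, so a plausible reconstruction, offered here as an assumption rather than the patent's verbatim formulas, is:

```latex
% Assumed reconstruction of formulas (1)-(2), Geisler-Perry form
CT(f, e) = CT_0 \exp\!\left(\alpha f \,\frac{e + e_2}{e_2}\right) \tag{1}

f_c(e) = \frac{e_2 \,\ln(1/CT_0)}{\alpha\,(e + e_2)} \tag{2}
```

Setting CT(f, e) = 1 in (1) and solving for f indeed gives (2), and (2) depends only on e, consistent with the sentence above.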
Assume the width of the remote sensing image is N pixels and that the image position corresponding to the foveal center is p_f = (x_{p_f}, y_{p_f}), where x_{p_f} denotes the abscissa and y_{p_f} the ordinate of pixel p_f. The viewing distance v from the human eye to the image is known and is measured in units of the image width. Measured in pixels, the distance from a point p to the point p_f is d(p) = ||p − p_f||_2; measured in units of the image width, the distance from p to p_f is u = d(p)/N, and the eccentricity is then given by formula (3).

From formulas (2) and (3) it can be seen that, for a given viewing distance, the cutoff frequency is a function of the pixel position. On the other hand, the maximum perceivable resolution is limited by the display resolution r, see formula (4). According to the sampling theorem, the highest alias-free frequency the display can represent, i.e. the Nyquist frequency, is given by formula (5). From formulas (2) and (5), for an arbitrary position p, the final visual cutoff frequency is given by formula (6), and the spatial-domain visual sensitivity model is given by formula (7).
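Formulas (3)-(7) are likewise not reproduced. Document [26] appears to follow the standard foveated-coding geometry (e.g. Wang and Bovik's foveation model), under which a plausible reconstruction of (3)-(6) is the following; the exact form of the sensitivity S_f in formula (7) cannot be recovered from this text, though it is typically a contrast sensitivity normalized to the fovea and truncated above f_m:

```latex
% Assumed reconstruction of formulas (3)-(6)
e(v, p) = \tan^{-1}\!\left(\frac{u}{v}\right)
        = \tan^{-1}\!\left(\frac{d(p)}{N v}\right) \tag{3}

r = \frac{\pi N v}{180} \quad \text{(pixels/degree)} \tag{4}

f_d = \frac{r}{2} \quad \text{(cycles/degree)} \tag{5}

f_m(p) = \min\!\big(f_c(e(v, p)),\, f_d\big) \tag{6}
```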
Embodiment 4: This embodiment differs from Embodiments 1 to 3 in that, in step 1.1, the minimum contrast threshold CT_0 is 1/64, the spatial-frequency decay constant α is 0.106, and the half-resolution eccentricity constant e_2 is 2.3.
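Under the Geisler-Perry reading of formula (2) sketched above (an assumption, since the formula bodies are not reproduced here), the cutoff frequency implied by these constants can be checked numerically; at zero eccentricity it comes out near the familiar foveal limit of roughly 39 cycles/degree:

```python
import math

# Constants from Embodiment 4 of the patent.
CT0 = 1.0 / 64   # minimum contrast threshold
ALPHA = 0.106    # spatial-frequency decay constant
E2 = 2.3         # half-resolution eccentricity constant

def cutoff_frequency(e):
    """Visual cutoff frequency f_c (cycles/degree) at eccentricity e (degrees).

    Assumes the Geisler-Perry form f_c = e2 * ln(1/CT0) / (alpha * (e + e2)),
    i.e. the frequency at which the contrast threshold CT(f, e) reaches 1.
    """
    return E2 * math.log(1.0 / CT0) / (ALPHA * (e + E2))

if __name__ == "__main__":
    for e in (0.0, 2.0, 10.0, 30.0):
        print(f"e = {e:5.1f} deg  ->  f_c = {cutoff_frequency(e):6.2f} cpd")
```

The printed values decrease monotonically with e, matching the statement that f_c depends only on the eccentricity and falls off away from the fovea.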
Embodiment 5: This embodiment is described with reference to Fig. 2 and differs from Embodiments 1 to 4 in the detailed process of establishing the wavelet-domain visual sensitivity model in step 1.2. Most image compression methods operate in the wavelet domain. For the 9/7 wavelet transform, the error-detection thresholds can be measured with the two-alternative forced-choice (2AFC) method, see document [27]. By fitting the experimental results, the error-detection threshold of a wavelet coefficient is given by formula (8), where a, k, f_0 and g_θ are constants, A_{λ,θ} is the amplitude of the 9/7 wavelet basis function, λ is the wavelet decomposition level, θ denotes the direction, and r is the display resolution; the values of a, k, f_0, g_θ and A_{λ,θ} are given in document [27]. The visual distortion sensitivity S_w(λ, θ) of subband (λ, θ) is given by formula (9).

Based on formulas (7) and (9), for the wavelet domain, the visual sensitivity model of a wavelet coefficient relative to one specified foveal fixation point is given by formula (10), where p denotes the position of an arbitrary coefficient in wavelet subband (λ, θ), and β_1 and β_2 are parameters controlling the amplitudes of S_w and S_f respectively. From Fig. 2 we can see that tan(e) = u/v = d/(Nv), that is, d = Nv tan(e) ≈ Nve, where e is in radians. For the human visual system, the highest visual acuity lies in the macular region, which covers a visual angle of 2° (π/90 radians); therefore d = Nvπ/90, which means the macular region is a circle of radius d.

The macular region can be regarded as a set of multiple fixation points. Assume there are k fixation points p_f^1, ..., p_f^k. For the wavelet coefficient at position p, the visual sensitivity model S_i(v, p), i = 1, 2, ..., k, is computed for each fixation point according to formula (10); finally, for a specified macular region, the visual sensitivity model of the wavelet coefficient is given by formula (11).
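The bodies of formulas (8)-(11) are not reproduced. The cited constants a, k, f_0, g_θ and A_{λ,θ} match Watson et al.'s fitted detection-threshold model for 9/7 wavelet quantization noise, sensitivity is conventionally the reciprocal of the threshold, and a multi-fixation region is commonly summarized by its most sensitive point; on those assumptions, one plausible reconstruction is:

```latex
% Assumed reconstruction of formulas (8), (9) and (11); (10), the combination
% of S_w and S_f via beta_1 and beta_2, is not recoverable from this text.
\log_{10} Y_{\lambda,\theta}
  = \log_{10}\frac{a}{A_{\lambda,\theta}}
  + k\left(\log_{10} f - \log_{10} g_\theta f_0\right)^2,
  \qquad f = r\,2^{-\lambda} \tag{8}

S_w(\lambda,\theta) = \frac{1}{Y_{\lambda,\theta}} \tag{9}

S(v, p) = \max_{1 \le i \le k} S_i(v, p) \tag{11}
```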
Embodiment 6: This embodiment differs from Embodiments 1 to 5 in that, in step 1.2, the parameters β_1 and β_2 controlling the amplitudes of S_w and S_f are 0.01 and 3 respectively.
Embodiment 7: This embodiment is described with reference to Figs. 3 and 4 and differs from Embodiments 1 to 6 in the detailed process, described in step 2, of generating the importance weighting mask and weighting the wavelet image. The purpose of the importance weighting mask is to ensure that the bits contributing most to visual quality are encoded and transmitted first. According to formula (10), visual sensitivity is closely tied to the viewing distance. In practice, however, the encoder does not know in advance the viewing distance v of the user at the decoding end; document [26] takes the probability distribution of the viewing distance into account when computing the coefficient weights, and determines the weighting distance in this way.

The probability density function of the viewing distance is given by formula (12), where v denotes the viewing distance from the human eye to the image, μ is the mean of the function, and σ is its standard deviation; the probability density curve of the viewing distance is shown in Fig. 3.

The importance weighting mask for the wavelet coefficient at position p is given by formula (13). Assuming that the fixation point of the fovea is the image center, the importance weighting mask generated according to formulas (11)~(13) is shown in Fig. 4.
Specific embodiment eight:Illustrate present embodiment with reference to Fig. 4, present embodiment and specific embodiment one to seven it
Unlike one:The standard deviation sigma of function is 0.4 in the probability density function of observed range, and the mean μ of function is 1.2586.
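Formulas (12)-(13) are not reproduced, but the structure described above — weight each position by its expected visual sensitivity over the viewing-distance distribution — can be sketched. Everything specific in this sketch is an assumption: a Gaussian density in v with the stated μ and σ, and the normalized Geisler-Perry cutoff frequency as a stand-in sensitivity.

```python
import math

MU, SIGMA = 1.2586, 0.4  # viewing-distance pdf parameters (Embodiment 8)
E2 = 2.3                 # half-resolution eccentricity constant (Embodiment 4)

def pdf(v):
    """Assumed Gaussian density of the viewing distance v (in image widths)."""
    return math.exp(-0.5 * ((v - MU) / SIGMA) ** 2) / (SIGMA * math.sqrt(2 * math.pi))

def sensitivity(v, u):
    """Stand-in sensitivity at off-fixation distance u (image widths),
    taken as the normalized cutoff frequency f_c(e) / f_c(0) = e2 / (e + e2)."""
    e = math.degrees(math.atan2(u, v))  # eccentricity in degrees
    return E2 / (e + E2)

def importance_weight(u, n=200):
    """Expected sensitivity over the viewing-distance distribution,
    approximated by a Riemann sum on v in (0.2, 3.0]."""
    lo, hi = 0.2, 3.0
    vs = [lo + (hi - lo) * i / n for i in range(1, n + 1)]
    num = sum(sensitivity(v, u) * pdf(v) for v in vs)
    den = sum(pdf(v) for v in vs)
    return num / den

if __name__ == "__main__":
    for u in (0.0, 0.1, 0.3, 0.5):
        print(f"u = {u:.1f}  ->  weight = {importance_weight(u):.3f}")
```

The weight peaks at the fixation point and decays with distance from it, qualitatively matching the mask of Fig. 4.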
HVS-based adaptive scanning (HAS): each wavelet coefficient is weighted by the importance weighting mask.

Let the transform image be X with size M × N. A scan of the image is defined as a bijective function f from the index set {1, 2, ..., M × N} to the scanning order {(i, j) : 1 ≤ i ≤ M, 1 ≤ j ≤ N}, where each coordinate pair in the scanning order denotes the position of a coefficient in the image. After scanning, see document [28], the two-dimensional transform image is converted into a one-dimensional sequence, which can be expressed as [X_f(1), X_f(2), ..., X_f(MN)]. Different scan methods, such as Morton scanning, zigzag scanning and raster scanning, are in fact bijective functions f with different definitions. Once the function f is determined, coding the transform image reduces to compressing the one-dimensional sequence.

For a transform image, the coefficients scanned first are encoded first. Generally, the code stream is organized in scanning order; this means that, for the coefficients scanned first, the corresponding code stream lies near the front of the whole stream. Obviously, this part of the stream can be decoded first and, when necessary, displayed directly after decoding. Therefore, at the same bit rate, different scanning orders produce reconstructed images of different quality. If the coefficients that contribute more to the image reconstruction can be scanned first, the quality of the reconstructed image is bound to improve.
Usually, a two-dimensional image is converted into a one-dimensional sequence by some classical scan method, such as raster scanning, zigzag scanning, Morton scanning or Hilbert scanning. Different scan methods suit different applications. Lossless compression is designed around context-based prediction and typically uses raster scanning; document [28] points out that, when prediction-based techniques are used, the spatial correlation of the image itself makes raster scanning superior to other scan modes. For DCT-based coding methods, zigzag scanning organizes the transform coefficients effectively. However, zigzag scanning is essentially a 'line' scan; that is, it can only represent unimportant 'lines'. For wavelet images, unimportant 'blocks' are more common in the wavelet domain; if these unimportant 'blocks' can be scanned in a reasonable way, the coding performance will be greatly improved.
Morton scanning makes good use of this 'block' relation; some classical coding methods, such as EZW and SPIHT, are based on Morton scanning. However, although Morton scanning traverses the image in a 'block' manner, it does not take the characteristics of the subbands into account. Remote sensing images usually contain rich detail, so that after the wavelet transform the information in the high-frequency subbands remains relatively abundant; moreover, the amount of information contained in different subbands may differ greatly, depending on the image content. A fixed scan may therefore strongly affect the coding performance. In addition, Hilbert scanning can also be regarded as a kind of 'block' scan; thanks to its good locality-preserving property, it was once considered the best scan method for image compression. However, like Morton scanning, Hilbert scanning still does not take the characteristics of the subbands into account.
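As a concrete illustration of the 'block' structure attributed to Morton scanning (an illustrative sketch, not part of the patent), the Z-order traversal of a block can be generated by sorting positions on their bit-interleaved coordinates:

```python
def morton_key(i, j, bits=16):
    """Interleave the bits of row i and column j (column bit in the lower
    position), giving the Z-order (Morton) index of position (i, j)."""
    key = 0
    for b in range(bits):
        key |= ((j >> b) & 1) << (2 * b)
        key |= ((i >> b) & 1) << (2 * b + 1)
    return key

def morton_order(rows, cols):
    """All positions of a rows x cols block in Morton scan order."""
    return sorted(((i, j) for i in range(rows) for j in range(cols)),
                  key=lambda p: morton_key(*p))

if __name__ == "__main__":
    # A 4 x 4 block is visited as four 2 x 2 sub-blocks, each in a small "z".
    print(morton_order(4, 4))
```

Because positions sharing high-order interleaved bits stay contiguous, whole quadrants are emitted together, which is exactly the 'block' grouping the text describes.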
According to the above analysis, a 'block'-based scan method is better suited to scanning the wavelet image. The question is: for a given wavelet subband, which kind of 'block' scan is most suitable? In addition, compared with natural images, the high-frequency subbands of remote sensing images are usually richer in information, so different inter-subband scanning orders can strongly affect the quality of the reconstructed image; how to determine the inter-subband scanning order is therefore another problem to consider.
Embodiment 9: This embodiment differs from Embodiments 1 to 8 in the detailed process, described in step 3, of computing the energy of each weighted wavelet subband and determining the inter-subband scanning order by descending energy. Let the transform image be X and the number of wavelet decomposition levels be J. Step 3.1: for the transform image X, compute the corresponding importance weighting mask W using formula (13). Step 3.2: weight the transform image with the importance weighting mask, and denote the weighted transform image by X_w, i.e. X_w = XW. Step 3.3: compute the energy of each subband of X_w and denote it E_{λ,θ}, where λ denotes the wavelet level of the subband (λ = 1, 2, ..., J) and θ denotes its direction, θ = 1, 2, 3, 4; here '1' denotes the lowest-frequency subband, '2' the horizontal subband, '3' the diagonal subband, and '4' the vertical subband. For a weighted subband X_w(λ, θ), the corresponding subband energy is given by formula (14), where R and C denote the number of rows and columns of the subband, and X_w(λ, θ)(i, j) denotes the coefficient at point (i, j) in the weighted subband of wavelet level λ and direction θ.

Step 3.4: for all subbands, determine the inter-subband scanning order by sorting the energies E_{λ,θ} in descending order; mathematically, this is expressed by formula (15), where permu denotes the resulting rearrangement of the subband sequence.
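A minimal sketch of steps 3.3-3.4. Since formula (14) is not reproduced, the energy is assumed here to be the mean squared coefficient value over the subband's R × C samples, which is one common normalization; the patent's exact formula may differ.

```python
def subband_energy(subband):
    """Energy of a weighted subband, assumed here to be the mean of squared
    coefficients over its R x C samples (formula (14) is not reproduced)."""
    n = sum(len(row) for row in subband)
    return sum(c * c for row in subband for c in row) / n

def inter_subband_order(subbands):
    """Sort subband labels (lambda, theta) by descending energy (step 3.4)."""
    energies = {label: subband_energy(band) for label, band in subbands.items()}
    return sorted(energies, key=energies.get, reverse=True)

if __name__ == "__main__":
    # Toy 1-level decomposition: LL (theta=1) dominates, vertical (theta=4)
    # beats horizontal (theta=2), diagonal (theta=3) is weakest.
    bands = {
        (1, 1): [[8, 9], [7, 8]],
        (1, 2): [[1, 0], [0, 1]],
        (1, 3): [[0, 0], [1, 0]],
        (1, 4): [[2, 1], [1, 2]],
    }
    print(inter_subband_order(bands))
```

Any strictly monotone energy definition (sum of squares, mean of squares) yields the same descending order, so the choice of normalization does not affect step 3.4.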
Embodiment 10: This embodiment differs from Embodiments 1 to 9 in the detailed process, described in step 4, of determining the scanning order inside each subband according to its characteristics. For each weighted subband X_w(λ, θ):

(1) if the subband direction is '1' or '2', the horizontal_z scan mode is used;

(2) if the subband direction is '4', the vertical_z scan mode is used; the vertical_z scan mode is shown in Fig. 5(b). This scan mode is designed for subbands with more vertical information; the design principle is 'vertical first, then horizontal', while the scanning elements proceed in a 'block' manner, retaining the advantage of 'block' scanning;

(3) if the subband direction is '3', the scan mode is decided jointly by the subbands of directions '2' and '4' at the same level: if E_{λ,2} ≥ E_{λ,4}, the subband uses the horizontal_z scan mode; if E_{λ,2} < E_{λ,4}, the subband uses the vertical_z scan mode.
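The three rules of Embodiment 10 can be sketched directly; the dictionary layout for the per-level energies is an illustrative assumption:

```python
def intra_scan_mode(theta, level_energy):
    """Choose the intra-subband scan mode per the rules of Embodiment 10.

    theta: subband direction (1=lowest-frequency, 2=horizontal, 3=diagonal,
    4=vertical). level_energy: dict mapping direction -> energy at the same
    wavelet level, consulted only for the diagonal rule (assumed layout).
    """
    if theta in (1, 2):
        return "horizontal_z"
    if theta == 4:
        return "vertical_z"
    if theta == 3:
        return "horizontal_z" if level_energy[2] >= level_energy[4] else "vertical_z"
    raise ValueError(f"unknown subband direction: {theta}")

if __name__ == "__main__":
    energies = {2: 1.5, 4: 2.0}  # vertical information dominates this level
    for theta in (1, 2, 3, 4):
        print(theta, intra_scan_mode(theta, energies))
```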
The main idea of the invention is as follows. First, the transform image is weighted with the importance weighting mask; then the energy of each weighted subband is computed, and the inter-subband scanning order is determined by descending energy; finally, for the scan inside each weighted subband, the scan method is determined by the characteristics of the subband. A horizontal subband reflects the horizontal information of the image, so the 'horizontal_z scan' is used as its scan method. Likewise, a vertical subband reflects the vertical information of the image; if it is scanned along the vertical direction, more vertical information can be retained at a given bit rate. In the present invention, a 'vertical_z scan' is proposed for the vertical subbands. The 'horizontal_z scan' and 'vertical_z scan' are shown in Fig. 5(a) and (b) respectively. For a diagonal subband, the scan method depends on the image itself: if, at the current wavelet level, the horizontal information of the image exceeds the vertical information, the diagonal subband uses the 'horizontal_z scan'; otherwise, it uses the 'vertical_z scan'.
Verification of the present invention is as follows:
The original image is shown in Fig. 6(a). For a given image, its visual sensitivity model is first computed and combined with the probability density function of the observation distance to generate the importance weighting mask. Each wavelet coefficient is then weighted by this mask. With the number of wavelet decomposition levels set to 3, the energy of each weighted subband is computed according to the method of the present invention; the results are listed in Table 1. From Table 1, the scanning order of the weighted subbands is determined as LL3, LH3, HH3, HL3, LH2, HH2, HL2, LH1, HL1, HH1.
Then, for each weighted subband, the intra-subband scanning order is determined by the subband characteristics. Subbands LL3, HL3, HL2 and HL1 use the "horizontal_z scanning" mode; subbands LH3, LH2 and LH1 use the "vertical_z scanning" mode. As Table 1 shows, in every wavelet level the vertical-direction information exceeds the horizontal-direction information, so all diagonal subbands HH3, HH2 and HH1 use "vertical_z scanning". The whole scanning process is shown in Fig. 6(b). Finally, a one-dimensional coefficient sequence is generated. Fig. 6(c) and (d) show the one-dimensional coefficient sequences generated by Morton scanning and by the present invention, respectively. As can be seen from Fig. 6, the proposed method adaptively scans the transform image into a one-dimensional sequence in which the important coefficients come first. Moreover, since the intra-subband scanning aims to retain as much image texture information as possible, the visual quality of the reconstructed image is further improved on top of the visual weighting.
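The energy computation and the descending inter-subband ordering of this walkthrough can be sketched as follows; the sum-of-squares energy definition and the subband labels are illustrative assumptions, not the patent's exact notation.

```python
import numpy as np

def subband_energy(coeffs):
    # Energy of one weighted subband, taken here as the sum of squared
    # coefficients (one common definition; the patent's exact normalization
    # is not spelled out in this excerpt).
    return float(np.sum(coeffs.astype(np.float64) ** 2))

def inter_subband_order(subbands):
    # subbands: dict mapping a label such as "LL3" to its weighted
    # coefficient array. Returns labels sorted by descending energy,
    # i.e. the inter-subband scanning order.
    return sorted(subbands, key=lambda name: subband_energy(subbands[name]),
                  reverse=True)
```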
Table 1: Energy of each subband of the transform image (×10^9)
Overhead bits:
Because the importance weighting mask is independent of the image content, the mask itself need not be transmitted to the decoder; only the foveation point and the image width are sent as side information. In the present invention, four integers record the foveation point (two integers record the abscissa of the point and the other two record its ordinate) and two integers record the image width. The side information for the whole weighting mask therefore requires only six integers.
In addition, for the adaptive scanning, the side information comprises the inter-subband scanning order and the scan methods of the diagonal subbands. A J-level wavelet transform has 3J + 1 subbands, so 3J + 1 integers are needed to represent the inter-subband scanning order, and a further J integers to represent the scan methods of all diagonal subbands. The side information for the adaptive scanning therefore requires (3J + 1) + J = 4J + 1 integers in total. Based on the above analysis, the total side information at the encoder amounts to 6 + (4J + 1) = 4J + 7 integers.
Taking Fig. 6(a) as an example, the image "lunar" is 512 × 512 and the number of wavelet decomposition levels is 3, so the total overhead of the present invention is (4 × 3 + 7)/(512 × 512) ≈ 0.00725%. That is, the larger the image, the smaller the proportion taken by the overhead. Moreover, if entropy coding is further applied to the overhead bits, the overhead becomes even smaller. According to the above analysis, the total overhead of the proposed compression method is very small and can even be ignored.
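The side-information count derived above can be checked with a few lines; the function names are hypothetical, and one "integer" is used as the accounting unit exactly as in the text.

```python
def side_info_integers(J):
    # 4 integers for the foveation point + 2 for the image width = 6,
    # plus 3J + 1 for the inter-subband scanning order and J for the
    # diagonal-subband scan modes: 6 + (3J + 1) + J = 4J + 7.
    return 4 * J + 7

def overhead_ratio(J, width, height):
    # Overhead relative to the number of pixels, as in the "lunar" example.
    return side_info_integers(J) / (width * height)
```

For J = 3 and a 512 × 512 image this gives 19/(512 × 512) ≈ 0.00725%, matching the figure above.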
Binary-tree coding:
For image compression, embedded coding methods currently dominate, because they support progressive transmission and the code stream can be decoded at any truncation point.
Document [25] proposes a recent embedded coding method based on a binary tree, namely adaptive binary-tree coding (binary tree coding with adaptive scanning, BTCA). The method needs no complicated steps such as context modeling or rate-distortion optimization, which greatly reduces the algorithmic complexity. Its basic process is as follows: first, a binary tree is built from the scanned one-dimensional sequence; then the tree is encoded, proceeding from bottom to top and from left to right. The encoding exploits the idea that the neighbors of a significant coefficient are usually also significant, which improves the coding performance to some extent. The implementation of the binary-tree coding is given in detail below.
Function code = BTCA(Tk)
Input: Γ denotes the binary tree, i denotes the position of a node in the tree, and Tk denotes the threshold, with initial threshold T0 and Tk = T0/2^k;
Initialization: let D be the height of the binary tree, and set d = D;
While (d > 1)
{
(1)
(2) Let ct = {}. If Γ(i) ≥ Tk−1:
if the neighboring node of Γ(i) is Γ(i+1) and Γ(i+1) is insignificant, then ct = Bin_Tree_Enc(Γ, i+1, Tk);
otherwise, if the neighboring node of Γ(i) is Γ(i−1) and Γ(i−1) is insignificant, then ct = Bin_Tree_Enc(Γ, i−1, Tk);
(3) code = {code, ct};
(4) d = d − 1;
}
Output: the code stream of the corresponding bit plane at threshold Tk.
In the BTCA algorithm above, the Bin_Tree_Enc algorithm is employed to encode a given node. The algorithm proceeds as follows:
Function code = Bin_Tree_Enc(Γ, i, Tk)
Input: Γ denotes the binary tree, i denotes the position of a node in the tree, and Tk denotes the threshold, with initial threshold T0 and Tk = T0/2^k;
(1) If Γ(i) was already encoded at a larger threshold, i.e. Γ(i) ≥ Tk−1, then:
if Γ(i) is not at the bottom of the tree, encode the two child nodes of Γ(i); otherwise encode the sign bit of Γ(i).
(2) If Γ(i) has a significant parent node and the neighboring node of Γ(i) is insignificant, then:
if Γ(i) is not at the bottom of the tree, encode the two child nodes of Γ(i); otherwise encode the sign bit of Γ(i).
(3) If Γ(i) ≥ Tk:
emit "1" to the code stream; then, if Γ(i) is not at the bottom of the tree, encode the two child nodes of Γ(i); otherwise encode the sign bit of Γ(i).
(4) Otherwise, output "0".
Output: the code stream of the subtree rooted at Γ(i).
Since the binary-tree coding method is relatively simple and efficient, the present invention applies it to encode the sequence generated by the content-based adaptive scanning of the transform image.
Quality evaluation indexes:
The compression method proposed by the present invention is intended to meet the growing demand for online browsing of remote sensing images. The proposed method should therefore be evaluated with indexes related to human vision. In the present invention, FWQI [26], VSNR [37] and MS-SSIM [38] are used as the evaluation indexes.
A. FWQI
In document [29], Zhou Wang et al. propose an image quality evaluation index that treats image distortion as a function of three factors: loss of correlation, luminance distortion and contrast distortion. The distortion is then extended to the wavelet domain, and the corresponding wavelet-domain quality index FWQI is defined as
Here, M denotes the number of wavelet coefficients, c(xn) denotes the wavelet coefficient at position xn, and Q(xn) denotes the quality value at position xn in the quality evaluation map. Since S(v, xn) varies with v, the FWQI of a test image is a function of v.
B. VSNR
In document [37], Chandler et al. propose an effective metric, the visual signal-to-noise ratio (VSNR), which quantifies the visual distortion of an image according to the near-threshold and supra-threshold properties of the human eye. Compared with other visual distortion metrics, VSNR reflects the true perception of the human eye more faithfully. VSNR (unit: dB) can be defined as follows
Here, C(f) denotes the contrast of the original image f, dpc denotes the contrast distortion perceived by the human eye, dgp denotes the global distortion level, and α is set to 0.04.
C. MS-SSIM
MS-SSIM is the multi-scale structural similarity measure. It takes various viewing conditions into account and is more flexible than the single-scale structural similarity measure (SSIM). The present invention therefore uses MS-SSIM as one of the quality measures.
Here, l(x, y), c(x, y) and s(x, y) denote the luminance, contrast and structure comparisons, respectively.
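A single-scale, whole-image form of the l·c·s comparison can be sketched as follows; MS-SSIM additionally evaluates it over several dyadic scales with local sliding windows, so this is a simplification that assumes the usual SSIM constants.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    # Whole-image SSIM (no sliding window): combines the luminance,
    # contrast and structure comparisons into one score in a single pass.
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```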
Experiments and results:
To demonstrate the performance of the proposed compression method, a number of experiments were carried out, comparing it with other scan-based methods at different bit rates.
Pretreatment:
To demonstrate the validity of the proposed method, remote sensing images with several different bit depths were chosen for the experiments; most of them have a high resolution.
Some of the test remote sensing images were obtained from the CCSDS test library (see document [39]), including "lunar", "coastal-b1", "ocean-2kb1" and "pleiades_portdebouc_pan". From these images we cropped the top-left 512 × 512 region for testing, so that the comparisons are made under identical conditions. In addition, two further remote sensing images were chosen for testing. One, "pavia", was acquired by the QuickBird sensor over the Pavia area in northern Italy, with a resolution of 0.6 m. The other, "houston", was acquired in 2013 by the WorldView-2 sensor over the Houston area in the United States, with a resolution of 0.5 m. Both "pavia" and "houston" are 512 × 512.
In the whole test set, the bit depth of "lunar" and "coastal-b1" is 8, the bit depth of "ocean_2kb1" is 10, the bit depth of "pavia" and "houston" is 11, and the bit depth of "pleiades_portdebouc_pan" is 12. Image "lunar" is shown in Fig. 6(a) and image "coastal-b1" in Fig. 7(a); the remaining test images are shown in Fig. 9.
Subjective quality comparison:
To compare the visual quality of the reconstructed images obtained by the proposed compression method and by the other scan-based methods, image "coastal-b1" is tested first. The foveation point is assumed to be the image center, the observation distance v is 5, and the number of wavelet decomposition levels is 5. The image is compressed with SPIHT, JPEG2000, BTCA and the proposed method. The visual quality of the reconstructed images at different bit rates is compared in Fig. 7.
As can be seen from Fig. 7, at low bit rates the overall quality of the reconstructed image obtained with the proposed compression method is better than that of the other scan-based methods. One reason is that the proposed method weights the wavelet image with a visual weighting mask, which guarantees that the bits contributing most to the visual quality of the reconstruction are scanned and encoded first. Furthermore, the proposed adaptive scanning process retains more texture information. The overall visual quality of the reconstructed image is therefore necessarily improved.
To further demonstrate the validity of the proposed method, more experiments were carried out on the test image of Fig. 7(a). For bit rates ranging from 0.0313 bpp to 1 bpp, the FWQI, VSNR and MS-SSIM results of all methods are shown in Fig. 8(a), (b) and (c), respectively. It can be seen that, from the viewpoint of objective evaluation, the visual quality of the reconstructed images obtained by the proposed method remains better than that of the other scan-based methods over the whole given bit-rate range.
Performance comparison of the present invention with other scan-based methods:
In general, different images have different content, including complexity, texture and so on. From the viewpoint of evaluating an algorithm, it should not be judged on a single image; multiple images should be tested and the results averaged. More experiments were therefore carried out to verify the validity of the proposed method. Several test images were used, including Fig. 6(a), Fig. 7(a) and Fig. 9(a)–(f); each image was decomposed with a five-level 9/7 biorthogonal wavelet. According to the probability distribution of the observation distance given in Fig. 3, the most probable observation distance v is 3, so v was set to 3 in these experiments. The FWQI, VSNR and MS-SSIM results of all methods at different bit rates are listed in Tables 2, 3, 4, 5, 6 and 7, respectively.
In Tables 2–7, "J2K" stands for "JPEG2000". It can be seen that, for all given bit rates, the averages of the FWQI, VSNR and MS-SSIM obtained by the present invention are the highest. This shows that, compared with the other scan-based methods, the present invention provides better reconstructed image quality.
Table 2: FWQI comparison of the proposed method with other scan-based compression methods
Table 3: FWQI comparison of the proposed method with other scan-based compression methods
Table 4: VSNR (dB) comparison of the proposed method with other scan-based compression methods
Table 5: VSNR (dB) comparison of the proposed method with other scan-based compression methods
Table 6: MS-SSIM comparison of the proposed method with other scan-based compression methods
Table 7: MS-SSIM comparison of the proposed method with other scan-based compression methods
Results and conclusions:
The present invention first generates a visual weighting mask according to the visual characteristics of the human eye. Second, for the weighted transform image, the inter-subband scanning order is designed according to the importance of the visually weighted subbands. The intra-subband scanning is designed to retain as much detail information as possible, which helps improve the visual quality of the reconstructed image without increasing the amount of data. Finally, the generated one-dimensional coefficient sequence is encoded with the binary-tree coder. The overhead of the proposed compression method is minimal and can even be ignored. The experimental results show that, compared with the other scan-based methods, the present invention provides better visual quality of the reconstructed images.
The whole scanning process of the invention can be regarded as two stages. The purpose of the first stage is to generate the importance weighting mask according to the visual characteristics of the human eye, which helps ensure that the bits contributing more to the visual quality are scanned first. In the second stage, different scanning orders are designed for the inter-subband and intra-subband scanning of the visually weighted transform image. Finally, the generated one-dimensional coefficient sequence is encoded with the binary-tree coder.
The present invention effectively improves the visual quality of reconstructed remote sensing images and meets the growing demand for online browsing of remote sensing images. The present invention is suitable for the online browsing of remote sensing images.
The references cited in the present invention are as follows:
[1]J.M.Shapiro,“Embedded image coding using zerotrees of wavelet
coefficients,”IEEE Trans.Signal Process.,vol.41,no.12,pp.3445–3462,Dec.1993.
[2]A.Said and W.A.Pearlman,“A new,fast,and efficient image codec
based on set partitioning in hierarchical trees,”IEEE Trans.Circuits
Syst.Video Technol.,vol.6,no.3,pp.243–250,Jun.1996.
[3]W.A.Pearlman,A.Islam,N.Nagaraj,and A.Said,“Efficient low
complexity image coding with a set-partitioning embedded block coder,”IEEE
Trans.Circuits Syst.Video Technol.,vol.14,no.11,pp.1219–1235,Nov.2004.
[4]JPEG2000 Image Coding System,ISO/IEC Std.15444-1,2000.
[5]A.Beghdadi,M.C.Larabi,A.Bouzerdoum,and K.M.Lftekharuddin,“A survey
of perceptual image processing methods,”Signal Processing:Image
Communication,vol.28,no.8,pp.811-831,Sep.2013.
[6]B.Macq,and H.Q.Shi,“Perceptually weighted vector quantization in
the DCT domain,”Electronics Letters,vol.29,no.15,pp.1382–1384,Jul.1993.
[7]I.Hontsch and L.J.Karam,“Locally adaptive perceptual image
coding,”IEEE Trans.Image Process.,vol.9,no.9,pp.1472–1483,Sep.2000.
[8]I.Hontsch and L.J.Karam,“Adaptive image coding with perceptual
distortion control,”IEEE Trans.Image Process.,vol.11,no.9,pp.213–222,
Mar.2002.
[9]M.G.Albanesi and F.Guerrini,“An HVS-based adaptive coder for
perceptually lossy image compression,”Pattern Recognition,vol.36,no.4,pp.997–
1007,Apr.2003.
[10]M.J.Nadenau,J.Reichel,and M.Kunt,“Wavelet-based color image
compression:exploring the contrast sensitivity function,”IEEE Trans.Image
Process.,vol.12,no.1,pp.58–70,Jan.2003.
[11]Z.Liu,L.J.Karam,and A.B.Watson,“JPEG2000 encoding with perceptual
distortion control,”IEEE Trans.Image Process.,vol.15,no.7,pp.1763–1778,
Jul.2006.
[12]G.Sreelekha,P.S.Sathidevi,“An HVS based adaptive quantization
scheme for the compression of color images,”Digital Signal Processing,
vol.20.no.4.pp.1129–1149,Jul.2010.
[13]D.Wu,D.M.Tan,M.Baird,and J.DeCampo,etc,“Perceptually lossless
medical image coding,”IEEE Trans.Medical Image,vol.25,no.3,pp.335–344,
Mar.2006.
[14]X.H.Zhang,W.S.Lin,P.Xue,“Just-noticeable difference estimation
with pixels in images,”J.Vis.Commun.Image R,vol.19,no.1,pp.30-41,Jan.2008.
[15]Y.Niu,X.L.Wu,G.M.Shi,and X.T.Wang,“Edge-based perceptual image
coding,”IEEE Trans.Image Process.,vol.21,no.4,pp.1899–1910,Apr.2012.
[16]D.M.Tan,C.S.Tan,and H.R.Wu,“Perceptual Color Image Coding With
JPEG2000,”IEEE Trans.Image Process.,vol.19,no.2,pp.374–383,Feb.2010.
[17]H.Oh,A.Bilgin,and M.W.Marcellin,“Visually Lossless Encoding for
JPEG2000,”IEEE Trans.Image Process.,vol.22,no.1,pp.189–201,Jan.2013.
[18]A.L.N,T.D.Costa,M.N.Do,“A Retina-Based Perceptually Lossless
Limit and a Gaussian Foveation Scheme With Loss Control,”IEEE
J.Sel.Topics.Signal Process,vol.8,no.3,pp.438–453,Jun.2014.
[19]B.Li,R.Yang,and H.X.Jiang.“Remote-sensing image compression using
two-dimensional oriented wavelet transform,”IEEE Trans.Geosci.Remote Sens.,
vol.49,no.1,pp.236–250,Jan.2011.
[20]A.Karami,M.Yazdi,and G.Mercier.“Compression of hyperspectral
images using discerete wavelet transform and tucker decomposition,”IEEE
J.Sel.Topics Appl.Earth Observ.,vol.5,no.2,pp.444–450,Apr.2012.
[21]X.Zhan,R.Zhang,D.Yin,and A.Z.Hu.“Remote sensing image compression
based on double-sparsity dictionary learning and universal trellis coded
quantization,”in Proc.IEEE Int.Conf.Image Process.,2013,pp.1665-1669.
[22]C.Jiang,H.Y.zhang,H.F.Shen,and L.P.Zhang.“Two-Step Sparse Coding
for the Pan-Sharpening of Remote Sensing Images,”IEEE J.Sel.Topics Appl.Earth
Observ.,vol.7,no.5,pp.1792–1805,May.2014.
[23]P.Kulkarni,A.Bilgin,M.W.Marcellin,and J.C.Dagher.“Compression of
earth science data with JPEG2000,”in Hyperspectral Data Compression.pp.347–
378,2006.
[24]F.García-Vílchez and J.Serra-Sagristà,“Extending the CCSDS
recommendation for image data compression for remote sensing scenarios,”IEEE
Trans.Geosci.Remote Sens.,vol.47,no.10,pp.3431–3445,Oct.2009.
[25]K.K.Huang and D.Q.Dai,“A new on-board image codec based on binary
tree with adaptive scanning order in scan-based mode,”IEEE
Trans.Geosci.Remote Sens.,vol.50,no.10,pp.3737-3750,Oct.2012.
[26]Z.Wang and A.C.Bovik,“embedded foveation image coding,”IEEE
Trans.Image Process.,vol.10,no.10,pp.1397-1410,Oct.2001.
[27]A.B.Watson,G.Y.Yang,J.A.Solomon,and J.Villasenor.“Visibility of
wavelet quantization noise,”IEEE Trans.Image Process.,vol.6,no.8,pp.1164-
1175,Aug.1997.
[28]N.Memon,D.L.Neuhoff,and S.Shende,“An analysis of some common
scanning techniques for lossless image coding,”IEEE Trans.Image Process.,
vol.9,no.11,pp.1837-1848,Nov.2000.
[29]Z.Wang and A.C.Bovik,“A universal image quality index,”IEEE
Signal Process.Lett.,vol.9,no.3,pp.81–84,Mar.2002.
[30]S.Patel,and S.Srinivasan,“Modified embedded zerotree wavelet
algorithm for fast implementation of wavelet image codec,”Electronics
Letters,vol.36,no.20,pp.1713-1714,Sep.2000.
[31]V.N.Ramaswamy,K.R.Namuduri,and N.Ranganathan,“Context-based
lossless image coding using EZW framework,”IEEE Trans.Circuits Syst.Video
Technol.,vol.11,no.4,pp.554–559,Apr.2001.
[32]S.R.Chang and L.Carin,“A modified SPIHT algorithm for image
coding with a joint MSE and classification distortion measure,”IEEE
Trans.Image Process.,vol.15,no.3,pp.713-725,Mar.2006.
[33]Z.J.Fang,N.X.Xiong,L.T.Yang,X.M.Sun,and Y.Yang,“Interpolation-
Based Direction-Adaptive Lifting DWT and Modified SPIHT for Image Compression
in Multimedia Communications,”IEEE Systems Journal,vol.5,no.4,pp.584-593,
Dec.2011.
[34]Y.Jin and H.J.Lee,“A block-based pass-parallel SPIHT algorithm,”
IEEE Trans.Circuits Syst.Video Technol.,vol.22,no.7,pp.1064-1075,Jul.2012
[35]Z.Y.Wu,A.Bilgin,and M.W.Marcellin,“Joint source/channel coding
for image transmission with JPEG2000 over memoryless channels,”IEEE
Trans.Image Process.,vol.14,no.8,pp.1020-1032,Aug.2005.
[36]J.Y.Yang,Y.Wang,W.L.Xu,and Q.H.Dai,“Image coding using dual-tree
discrete wavelet transform,”IEEE Trans.Image Process.,vol.17,no.9,pp.1555-
1569,Sep.2008.
[37]D.M.Chandler and S.S.Hemami.“VSNR:A wavelet-based visual signal-
to-noise ratio for natural images,”IEEE Trans.Image Process.,vol.16,no.9,
pp.2284-2298,Sep.2007.
[38]Z.Wang,E.P.Simoncelli and A.C.Bovik,″Multiscale structural
similarity for image quality assessment,″in Proc.IEEE Asilomar Conf.Signals,
Syst.,Comput.,Pacific Grove,CA,Nov.2003,pp.1398–1402.
[39]CCSDS reference test image set,Apr.2007.[Online].Available:
http://cwe.ccsds.org/sls/docs/sls-dc/.
Claims (2)
1. A remote sensing image compression method based on human visual characteristics and adaptive scanning, characterized in that the method comprises the following steps:
Step 1: establish a retina-based wavelet-domain visual sensitivity model;
Step 2: after Step 1 is completed, combine the probability density function of the human-eye-to-image observation distance to generate an importance weighting mask, and weight the wavelet image;
Step 3: compute the energy of each weighted wavelet subband, and determine the inter-subband scanning order by sorting the energies in descending order;
Step 4: determine the intra-subband scanning order according to the characteristics of each subband;
Step 5: according to the inter-subband scanning order determined in Step 3 and the intra-subband scanning order determined in Step 4, perform adaptive scanning of the weighted transform image Xw to generate a one-dimensional coefficient sequence;
Step 6: encode the one-dimensional coefficient sequence generated in Step 5 with the binary-tree coding method;
The detailed process of establishing the retina-based visual sensitivity model in Step 1 is:
Step 1.1: establish a spatial-domain visual sensitivity model;
Step 1.2: establish the wavelet-domain visual sensitivity model;
The detailed process of establishing the spatial-domain visual sensitivity model in Step 1.1 is:
For a remote sensing image, the contrast threshold function in the spatial domain is
where f denotes the spatial frequency, e denotes the retinal eccentricity, CT0 denotes the minimum contrast threshold, α denotes the spatial-frequency attenuation constant, e2 denotes the half-resolution eccentricity constant, and CT(f, e) denotes the visual contrast threshold as a function of f and e;
For a given eccentricity e, the corresponding visual cutoff frequency fc is obtained from formula (1) by setting CT(f, e) = 1; the cutoff frequency fc is as follows:
Assume the width of the remote sensing image is N pixels and the image position corresponding to the fovea is pf, with known abscissa and ordinate, and that the observation distance v from the human eye to the image is known. Measured in pixels, the distance from point p to point pf is d(p) = ||p − pf||2; measured in image widths, the distance u is u = d(p)/N; the eccentricity is then
The maximum perceivable visual resolution is limited by the display resolution r, i.e.
According to the sampling theorem, the highest alias-free frequency that the display can represent, i.e. the Nyquist frequency, is
Combining (2) and (5), the final visual cutoff frequency at an arbitrary position p is
The spatial-domain visual sensitivity model is
In Step 1.1, the minimum contrast threshold CT0 is 1/64, the spatial-frequency attenuation constant α is 0.106, and the half-resolution eccentricity constant e2 is 2.3;
The detailed process of establishing the wavelet-domain visual sensitivity model in Step 1.2 is:
The error-detection threshold of a wavelet coefficient is
where a, k, f0 and gθ are constants, Aλ,θ is the amplitude of the basis function of the 9/7 wavelet transform, λ is the wavelet decomposition level, and θ denotes the orientation;
The visual distortion sensitivity Sw(λ, θ) of subband (λ, θ) is
Based on (7) and (9), in the wavelet domain and relative to a specified foveation point, the visual sensitivity model of a wavelet coefficient is
where p denotes the position of an arbitrary coefficient in wavelet subband (λ, θ), and β1 and β2 denote the parameters controlling the amplitudes of Sw and Sf, respectively;
Assume there are k foveation points; for the wavelet coefficient at position p, compute its visual sensitivity model Si(v, p), i = 1, 2, ..., k, according to formula (10); finally, for a specified macular region, the visual sensitivity model of the wavelet coefficient is
In Step 1.2, the parameters β1 and β2 controlling the amplitudes of Sw and Sf are 0.01 and 3, respectively;
The detailed process in Step 2 of generating the importance weighting mask and weighting the wavelet image is:
Use the following probability density function of the observation distance:
where v denotes the observation distance from the human eye to the image, μ is the mean of the function, and σ is its standard deviation;
The importance weighting mask for the wavelet coefficient at position p is
Assume the observed foveation point is the image center; generate the importance weighting mask according to formulas (11)–(13), and weight the wavelet image with the generated mask to obtain the weighted transform image;
In the probability density function of the observation distance, the standard deviation σ of the function is 0.4 and the mean μ of the function is 1.2586;
The detailed process in Step 3 of computing the energy of each weighted wavelet subband and determining the inter-subband scanning order by sorting the energies in descending order is as follows. Let the transform image be X and the number of wavelet decomposition levels be J:
Step 3.1: for the transform image X, compute the corresponding importance weighting mask W using formula (13);
Step 3.2: weight the transform image with the importance weighting mask; the weighted transform image is Xw, i.e. Xw = XW;
Step 3.3: compute the energy of each subband of Xw, denoted Eλ,θ, where λ = 1, 2, ..., J denotes the wavelet level of the subband and θ = 1, 2, 3, 4 denotes its orientation, with "1" denoting the lowest-frequency subband, "2" the horizontal subband, "3" the diagonal subband and "4" the vertical subband; for the weighted subband Xw(λ, θ), the corresponding subband energy is
where R and C denote the numbers of rows and columns of the subband, and Xw(λ, θ)(i, j) denotes the coefficient at point (i, j) in the weighted subband of wavelet level λ and orientation θ;
Step 3.4: for all subbands, determine the inter-subband scanning order by sorting the energies Eλ,θ in descending order.
2. The remote sensing image compression method based on human visual characteristics and adaptive scanning according to claim 1, characterized in that the detailed process in Step 4 of determining the intra-subband scanning order according to the characteristics of each subband is:
For each weighted subband Xw(λ, θ):
(1) if the subband orientation is "1" or "2", use the horizontal_z scan mode;
(2) if the subband orientation is "4", use the vertical_z scan mode;
(3) if the subband orientation is "3", the scan mode is decided jointly by the subbands of orientations "2" and "4" in the same wavelet level:
if Eλ,2 ≥ Eλ,4, the subband uses the horizontal_z scan mode;
if Eλ,2 < Eλ,4, the subband uses the vertical_z scan mode.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410853179.5A CN104486631B (en) | 2014-12-31 | 2014-12-31 | A kind of remote sensing image compression method based on human eye vision Yu adaptive scanning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104486631A CN104486631A (en) | 2015-04-01 |
CN104486631B true CN104486631B (en) | 2017-06-06 |
Family
ID=52761123
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410853179.5A Active CN104486631B (en) | 2014-12-31 | 2014-12-31 | A kind of remote sensing image compression method based on human eye vision Yu adaptive scanning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104486631B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6944402B2 (en) * | 2018-03-08 | 2021-10-06 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | Absence determination method, program, sensor processing system, and sensor system |
CN110175965B (en) * | 2019-05-30 | 2020-12-18 | 齐齐哈尔大学 | Block compressed sensing method based on self-adaptive sampling and smooth projection |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1322442A (en) * | 1999-07-20 | 2001-11-14 | 皇家菲利浦电子有限公司 | Encoding method for compression of video sequence |
CN1926883A (en) * | 2004-01-13 | 2007-03-07 | 三星电子株式会社 | Video/image coding method and system enabling region-of-interest |
CN102572423A (en) * | 2011-12-16 | 2012-07-11 | 辽宁师范大学 | Video coding method based on important probability balanced tree |
CN103581691A (en) * | 2013-11-14 | 2014-02-12 | 北京航空航天大学 | Efficient and parallelable image coding method oriented to sparse coefficients |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7916960B2 (en) * | 2005-09-06 | 2011-03-29 | Megachips Corporation | Compression encoder, compression encoding method and program |
Non-Patent Citations (4)
Title |
---|
A New On-Board Image Codec Based on Binary Tree With Adaptive Scanning Order in Scan-Based Mode; Ke-Kun Huang, Dao-Qing Dai; IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING; Oct. 2012; Vol. 50, No. 10; full text * |
A New, Fast, and Efficient Image Codec Based on Set Partitioning in Hierarchical Trees; Amir Said, William A. Pearlman; IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY; June 1996; Vol. 6, No. 3; full text * |
Embedded Foveation Image Coding; Zhou Wang, Alan Conrad Bovik; IEEE TRANSACTIONS ON IMAGE PROCESSING; Oct. 2001; Vol. 10, No. 10; abstract; p. 1399 col. 1 lines 1-35; p. 1400 col. 1 line 1 to col. 2 line 3; p. 1401 col. 1 line 13; p. 1401 col. 2 line 7 to p. 1402 col. 1 line 25; p. 1403 col. 1 lines 11-19; p. 1403 col. 2 lines 38-50; p. 1404 cols. 1-2; p. 1405 cols. 1-2 * |
Visibility of Wavelet Quantization Noise; Andrew B. Watson, Gloria Y. Yang, Joshua A. Solomon, John Vi; IEEE TRANSACTIONS ON IMAGE PROCESSING; Aug. 8, 1997; Vol. 6, No. 8; full text * |
Also Published As
Publication number | Publication date |
---|---|
CN104486631A (en) | 2015-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104331913B (en) | Polarimetric SAR image compression method based on sparse K-SVD | |
Shi et al. | A novel vision-based adaptive scanning for the compression of remote sensing images | |
Yadav et al. | A review on image compression techniques | |
Reddy et al. | Lossless compression of medical images for better diagnosis | |
Aulí-Llinàs et al. | Lossy-to-lossless 3D image coding through prior coefficient lookup tables | |
CN108810534B (en) | Image compression method based on direction lifting wavelet and improved SPIHT under Internet of things | |
Kalavathi et al. | A wavelet based image compression with RLC encoder | |
CN104486631B (en) | Remote sensing image compression method based on human visual perception and adaptive scanning | |
CN108718409A (en) | The remote sensing image compression method encoded based on Block direction Lifting Wavelet and adative quadtree | |
Demaret et al. | Advances in digital image compression by adaptive thinning | |
CN101056406B (en) | Medical ultrasonic image compression method based on the mixed wavelet coding | |
Deshlahra | Analysis of Image Compression Methods Based On Transform and Fractal Coding | |
Galan-Hernandez et al. | Wavelet-based frame video coding algorithms using fovea and SPECK | |
Zhu et al. | An improved SPIHT algorithm based on wavelet coefficient blocks for image coding | |
Kekre et al. | Image compression based on hybrid wavelet transform generated using orthogonal component transforms of different sizes | |
Liu et al. | Zerotree wavelet image compression with weighted sub-block-trees and adaptive coding order | |
CN101379831A (en) | Image coding/decoding method and apparatus | |
Li et al. | Compression quality prediction model for JPEG2000 | |
CN103402043A (en) | Image compression unit for large visual field TDICCD camera | |
Ismail et al. | Quality assessment of medical image compressed by contourlet quincunx and SPIHT coding | |
Vidhya et al. | Performance analysis of medical image compression | |
TWI533236B (en) | A method of cs-waveletbased image coding for dvc system | |
Amgothu et al. | Image Compression Using Adaptively Scanned Wavelet Difference Reduction Technique (ASWDRT) | |
Zhou et al. | Satellite hyperspectral imagery compression algorithm based on adaptive band regrouping | |
Al-Sammaraie | Medical Images Compression Using Modified SPIHT Algorithm and Multiwavelets Transformation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 2019-06-11
Address after: 150000, intersection of North Road and Xingkai Road, Harbin Dalian Economic and Trade Zone, Heilongjiang
Patentee after: Harbin University of Technology Robot Group Co., Ltd.
Address before: 150001, No. 92 West Dazhi Street, Nangang District, Harbin, Heilongjiang
Patentee before: Harbin Institute of Technology