CN103475881B - Image JND threshold calculation method based on the visual attention mechanism in the DCT domain - Google Patents

Image JND threshold calculation method based on the visual attention mechanism in the DCT domain

Info

Publication number
CN103475881B
CN103475881B CN201310413594.4A
Authority
CN
China
Prior art keywords
image
threshold value
block
jnd
factor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310413594.4A
Other languages
Chinese (zh)
Other versions
CN103475881A (en)
Inventor
张冬冬
高利晶
臧笛
孙杳如
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN201310413594.4A priority Critical patent/CN103475881B/en
Publication of CN103475881A publication Critical patent/CN103475881A/en
Application granted granted Critical
Publication of CN103475881B publication Critical patent/CN103475881B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An image JND threshold calculation method based on the visual attention mechanism in the DCT domain. The invention proposes two schemes for combining saliency with block classification: in the first, the visual attention masking factor of each single point is combined with the block-classification masking factor in a point-to-point manner; in the second, the mean saliency of each block represents the saliency of the whole block, and the block-based visual attention masking factor is then combined with the block-classification masking factor block by block. The value of the composite contrast masking function modulates the traditional JND threshold, finally yielding a more accurate JND threshold. Both schemes can effectively improve the accuracy of the JND threshold, so that the JND threshold better matches the human visual system. The model implemented by the proposed image JND threshold calculation method can tolerate more noise and, in terms of PSNR, yields an average improvement of 0.54 dB.

Description

Image JND threshold calculation method based on the visual attention mechanism in the DCT domain
Technical field
The present invention relates to the technical field of image/video coding.
Technical background
Traditional image/video coding techniques perform compression coding mainly with respect to spatial redundancy, temporal redundancy and statistical redundancy, but seldom take the characteristics of the human visual system and psychological effects into account, so a large amount of visually redundant data is still encoded and transmitted. To further improve coding efficiency, researchers have begun to devote themselves to removing visual redundancy. At present, an effective way of characterizing visual redundancy is the minimum perceptible distortion model based on psychology and physiology, abbreviated as the JND (just noticeable distortion) model, i.e. the smallest change that the human eye can just perceive. Owing to the various masking effects of the human eye, only noise exceeding a certain threshold can be perceived; this threshold is the just noticeable distortion and represents the degree of visual redundancy in an image. JND models are commonly used to guide perceptual coding and processing of images or video, such as preprocessing, adaptive quantization, bit-rate control and motion estimation.
Existing just noticeable distortion (JND) models can be roughly divided into two classes. The first class consists of pixel-domain JND models, which are mostly built by characterizing the luminance adaptation effect and the texture masking effect; for example, document 1 (see X. Yang, W. Lin, Z. Lu, E. P. Ong, and S. Yao, "Just-noticeable-distortion profile with nonlinear additivity model for perceptual masking in color images", IEEE Trans. Circuits Syst. Video Technol., vol. 15, no. 6, pp. 742-752, Jun. 2005) proposes a spatial-domain JND model for color images. However, since such models cannot properly integrate the contrast sensitivity function (CSF), they cannot yield accurate JND values and are often used only as fast methods for computing the JND threshold.
The second class consists of subband JND models, which are computed in a transform domain such as the DCT domain, the wavelet domain or the contourlet domain. Since most image/video coding standards are based on the DCT (e.g. JPEG, H.261/3/4, MPEG-1/2/4), DCT-domain JND models have attracted the attention of many researchers; for example, document 2 (see Z. Wei and K. N. Ngan, "Spatial just noticeable distortion profile for image in DCT domain", in Proc. IEEE Int. Conf. Multimedia and Expo, pp. 925-928, 2008) combines the luminance adaptation characteristic of the image, the spatial contrast sensitivity effect and a contrast masking effect based on block classification. However, that model does not consider the influence of the human visual attention mechanism on the JND model, so its computational accuracy still needs to be improved.
Summary of the invention
On the basis of Wei's model, the present invention incorporates the visual attention mechanism and proposes a new DCT-domain image JND modeling method: by jointly considering the visual attention masking effect and the contrast masking effect, a composite modulation function is devised that modulates the spatial contrast sensitivity threshold together with the luminance adaptation effect.
To this end, the implementation steps of the technical scheme provided by the present invention are as follows:
A DCT-domain image just noticeable distortion calculation method, adopting the following technical scheme and comprising the following steps:
Step S1: apply an 8×8 DCT to the selected image, transforming it from the spatial domain to the DCT domain.
Step S2: in the DCT domain, compute the basic just noticeable distortion (JND) value as the product of the spatial contrast sensitivity threshold and the luminance adaptation modulation factor.
Step S3: use the Canny edge detector to partition the image into blocks and classify them into smooth blocks, edge blocks and texture blocks, obtaining the block-structure-based contrast masking factor.
Step S4: use a visual attention model to perform saliency detection on the image and obtain its saliency map.
Step S5: segment the image according to the saliency map obtained in step S4, dividing it into salient and non-salient regions; then, based on the saliency value of each point, obtain the contrast masking factor based on the visual attention mechanism.
Alternatively, step S5 is: first divide the image into salient and non-salient regions, then partition the saliency map obtained in step S4 into blocks, replace the saliency value of each whole block by the mean of its saliency values, and, based on the saliency value of each block, obtain the contrast masking factor based on the visual attention mechanism.
Step S6: combine the salient/non-salient segmentation result of step S5 with the block classification result of step S3 to partition the image more finely, and combine the contrast masking factor based on the visual attention mechanism with the block-structure-based contrast masking factor according to a linear relationship, obtaining the composite contrast masking modulation function.
Step S7: use the modulation function value computed in step S6 to modulate the basic JND threshold computed in step S2, obtaining the final JND threshold.
The key technical points embodied by the above technical scheme are:
1. Aiming at the problem that the traditional image just noticeable distortion model does not take the visual attention mechanism into account, the present invention proposes two modeling algorithms for a DCT-domain image just noticeable distortion model based on the visual attention mechanism. The first computes a visual attention modulation factor based on image saliency and combines this pixel-based visual attention masking factor with the block-structure-based contrast masking factor to obtain a composite contrast masking function, which modulates the traditional JND threshold derived from the spatial contrast effect and the luminance adaptation effect. The second computes the saliency map of the image, replaces the saliency factor of each whole block by the mean saliency of that block, establishes a block-based visual attention masking factor, and combines it with the block-structure-based contrast masking factor to obtain a composite contrast masking function that modulates the basic JND threshold. Both schemes can effectively improve the accuracy of the JND threshold, so that the JND threshold better matches the human visual system.
2. The present invention proposes the concept of a visual attention masking effect: visual saliency is used to simulate the degree of attention the human eye pays to each image pixel, and a visual attention masking factor based on visual saliency is thereby established.
3. The present invention proposes two schemes for combining saliency with block classification: one combines the visual attention masking factor of each single point with the block-classification masking factor in a point-to-point manner; the other represents the saliency of each whole block by its mean saliency and then combines the block-based visual attention masking factor with the block-classification masking factor block by block.
4. By combining visual saliency with block classification, the image is partitioned more finely and accurately.
5. Based on the saliency property and block-structure property of each block, different saliency modulation factors and block-structure modulation factors are set, so that the visual attention masking factor and the block-classification contrast masking factor are combined into a composite contrast masking function.
The beneficial effect of the method of the invention is that the value of the composite contrast masking function modulates the traditional JND threshold, finally yielding a more accurate JND threshold. Under the premise of the same subjective visual quality, the model implemented by the proposed image JND threshold calculation method can tolerate more noise.
Brief description of the drawings
Fig. 1 is a framework diagram of the DCT-domain image just noticeable distortion model based on the visual attention mechanism according to the present invention.
Fig. 2 is the Airplane image used in the present example.
Fig. 3 is the Airplane image of the present example after block classification.
Fig. 4 is the Airplane image of the present example after saliency classification.
Fig. 5 is the fine partition result of the Airplane image of the present example obtained by jointly considering saliency and block structure.
Fig. 6 is a flow chart of the image JND threshold calculation method based on the visual attention mechanism in the DCT domain according to the present invention.
Detailed description of the invention
The invention is further described below with reference to the accompanying drawings and a specific example:
The example provided by the present invention uses MATLAB 7 as the simulation platform and the 512×512 bmp grayscale image Airplane as the selected test image; the example is described in detail below step by step:
Step (1): take the selected 512×512 bmp grayscale image as the input test image and apply an 8×8 DCT, transforming it from the spatial domain to the DCT domain;
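By way of illustration, the 8×8 block DCT of step (1) could be sketched in Python as follows (a minimal sketch, assuming scipy is available; the helper name blockwise_dct is hypothetical and not part of the method itself):

```python
import numpy as np
from scipy.fftpack import dct

def blockwise_dct(img, N=8):
    """Apply an NxN block 2-D DCT (type II, orthonormal) to a grayscale image.

    img is assumed to be a 2-D array whose height and width are multiples of N
    (512x512 in this example), so no padding is performed.
    """
    H, W = img.shape
    coeffs = np.empty((H, W), dtype=np.float64)
    for y in range(0, H, N):
        for x in range(0, W, N):
            block = img[y:y + N, x:x + N].astype(np.float64)
            # Separable 2-D DCT: transform columns, then rows.
            coeffs[y:y + N, x:x + N] = dct(dct(block, norm='ortho', axis=0),
                                           norm='ortho', axis=1)
    return coeffs
```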
Step (2): in the DCT domain, compute the basic just noticeable distortion JND value as the product of the basic spatial contrast sensitivity threshold and the luminance adaptation modulation factor, according to the following formula:
T_{JND}(n,i,j) = T_{Basic}(n,i,j) \times F_{lum}(n) \quad (1)
where T_Basic(n, i, j) represents the spatial contrast sensitivity threshold, F_lum(n) represents the luminance adaptation adjustment factor, n is the index of the DCT block, W is the width of the image, H is the height of the image, and i, j are the indices of the coefficients within a DCT block, 1 ≤ i ≤ 64, 1 ≤ j ≤ 64. The DCT blocks are all of size 8×8, so this test image is divided into 4096 DCT blocks; n takes values between 1 and 4096, and i, j take values between 1 and 64.
The basic threshold T_Basic(n, i, j) in formula (1) is calculated as follows:
The spatial frequency of a DCT subband can be expressed as:
\omega_{i,j} = \frac{1}{2N}\sqrt{(i/\theta_x)^2 + (j/\theta_y)^2} \quad (2)
where θ_x = θ_y = 2·arctan(γ/(2l)) are the horizontal and vertical visual angles of a pixel, l is the viewing distance of the image (in this invention l is 3 times the image width), and γ denotes the width or height occupied by one pixel on the display. From the above, the spatial contrast sensitivity threshold T_Basic(n, i, j) of a DCT block is given by formula (3).
In formula (3), s represents the spatial summation effect and is taken as 0.25 in this invention; the term containing r accounts for the oblique effect, with r = 0.6; Φ_i and Φ_j are DCT normalization coefficients; φ_{i,j} represents the direction angle of the corresponding DCT coefficient:
\Phi_m = \begin{cases} \sqrt{1/N}, & m = 0 \\ \sqrt{2/N}, & m > 0 \end{cases} \quad (4)
The parameters are a = 1.33, b = 0.11, c = 0.005.
The luminance adaptation modulation factor F_lum(n) in formula (1) is calculated as follows:
F_{lum}(n) = \begin{cases} (60 - I)/150 + 1, & I \le 60 \\ 1, & 60 < I < 170 \\ (I - 170)/425 + 1, & I \ge 170 \end{cases} \quad (6)
In formula (6), I represents the average luminance value of each 8×8 block.
The traditional JND threshold is then calculated according to formula (1).
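The computation of formulas (1)-(6) could be sketched as follows. Because formula (3) is only referenced above and not reproduced, T_Basic in this sketch follows the published Wei-Ngan DCT-domain form to which the quoted parameters s = 0.25, r = 0.6, a, b, c belong; the direction-angle expression, γ = 1 pixel and the use of degrees for the visual angle are likewise assumptions of the sketch:

```python
import numpy as np

def basic_jnd_threshold(img, N=8, a=1.33, b=0.11, c=0.005, s=0.25, r=0.6):
    """Basic JND threshold of formula (1): T_JND = T_Basic * F_lum.

    T_Basic follows the Wei-Ngan form as an assumption, since formula (3)
    is not reproduced in the text.
    """
    H, W = img.shape
    l = 3.0 * W                                  # viewing distance: 3x the image width
    gamma = 1.0                                  # display size of one pixel (assumed, same unit as l)
    theta = 2.0 * np.degrees(np.arctan(gamma / (2.0 * l)))   # visual angle of one pixel

    # DCT normalization coefficients, formula (4).
    phi = np.where(np.arange(N) == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))

    # Spatial frequency of each (i, j) subband, formula (2).
    ii, jj = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    omega = np.sqrt((ii / theta) ** 2 + (jj / theta) ** 2) / (2.0 * N)

    # Direction angle of each DCT coefficient (oblique effect); the exact
    # expression is an image in the source, so the standard form is assumed.
    w_i0 = (ii / theta) / (2.0 * N)
    w_0j = (jj / theta) / (2.0 * N)
    phi_ij = np.arcsin(np.clip(2.0 * w_i0 * w_0j / np.maximum(omega, 1e-12) ** 2,
                               -1.0, 1.0))

    # Spatial contrast sensitivity threshold T_Basic (assumed Wei-Ngan form).
    t_basic = (s / (phi[ii] * phi[jj])) * np.exp(c * omega) / (a + b * omega) \
              / (r + (1.0 - r) * np.cos(phi_ij) ** 2)

    # Luminance adaptation factor F_lum per 8x8 block, formula (6), then formula (1).
    t_jnd = np.empty((H, W), dtype=np.float64)
    for y in range(0, H, N):
        for x in range(0, W, N):
            I = img[y:y + N, x:x + N].mean()
            if I <= 60:
                f_lum = (60.0 - I) / 150.0 + 1.0
            elif I < 170:
                f_lum = 1.0
            else:
                f_lum = (I - 170.0) / 425.0 + 1.0
            t_jnd[y:y + N, x:x + N] = t_basic * f_lum
    return t_jnd
```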
Step (3): classify the blocks of the image and calculate the contrast masking factor F_contrast(n, i, j) based on block classification.
All 8×8 DCT blocks in the image are classified into three classes: texture, smooth and edge (as shown in Fig. 3). The edge information of the image is detected with the Canny operator, and each block is classified according to the edge pixel density ρ_edge computed within each 8×8 block. The detailed process is as follows:
\rho_{edge} = \left( \textstyle\sum_{8\times 8} edge \right) / N^2 \quad (7)
where the numerator counts the edge pixels detected by the Canny operator within each 8×8 block, and N is the DCT block size, taken as N = 8 in this example.
The blocks are then classified according to ρ_edge (formula (8)).
According to the type of the 8×8 DCT block, the inter-block masking effect factor is given by formula (9).
Taking the masking effect between adjacent subbands into account as well, the traditional contrast masking function F_contrast(n, i, j) based on block classification is given by formula (10).
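A possible sketch of the Canny-based block classification of step (3) follows. Since formulas (8)-(10) are not reproduced above, the density cut-offs 0.1 and 0.2 and the Canny hysteresis thresholds below are assumptions, chosen only to illustrate the classification by edge-pixel density of formula (7):

```python
import numpy as np
import cv2   # OpenCV, used here for the Canny edge detector

def classify_blocks(img, N=8, low=0.1, high=0.2):
    """Classify each NxN block as 'smooth', 'edge' or 'texture' from its
    Canny edge-pixel density (formula (7)); cut-offs are assumed values."""
    edges = cv2.Canny(img.astype(np.uint8), 100, 200) > 0   # binary edge map
    H, W = img.shape
    labels = np.empty((H // N, W // N), dtype=object)
    for by in range(H // N):
        for bx in range(W // N):
            block_edges = edges[by * N:(by + 1) * N, bx * N:(bx + 1) * N]
            rho = block_edges.sum() / float(N * N)           # edge pixel density
            if rho <= low:
                labels[by, bx] = 'smooth'
            elif rho <= high:
                labels[by, bx] = 'edge'
            else:
                labels[by, bx] = 'texture'
    return labels
```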
Step (4): perform saliency detection on the image and obtain its saliency map.
In this example, the spectral residual method, formula (11), is used to perform saliency detection on the image,
where P(f) and R(f) denote the phase spectrum and the amplitude spectrum of the Fourier-transformed image respectively, F⁻¹ denotes the inverse Fourier transform, g(σ) denotes a Gaussian filter whose window size σ is taken as 0.32 times the image width in this example, and S(x) denotes the saliency map of the image.
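The spectral residual computation of formula (11) could be sketched as follows, using the standard spectral residual construction (log amplitude minus its local average, recombined with the phase and smoothed by a Gaussian g(σ)); the 3×3 averaging window is an assumption, while σ = 0.32 times the image width follows the text:

```python
import numpy as np
import cv2

def spectral_residual_saliency(img, sigma_ratio=0.32):
    """Spectral residual saliency map S(x) of step (4) (illustrative sketch)."""
    f = np.fft.fft2(img.astype(np.float64))
    log_amp = np.log(np.abs(f) + 1e-8)           # log amplitude spectrum
    phase = np.angle(f)                          # phase spectrum P(f)
    # Spectral residual: log amplitude minus its local average (3x3 assumed).
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    # Back to the spatial domain and smooth with the Gaussian g(sigma).
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sigma = sigma_ratio * img.shape[1]           # 0.32 times the image width
    sal = cv2.GaussianBlur(sal, (0, 0), sigmaX=sigma)
    return sal / sal.max()                       # normalized saliency map
```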
Step (5): if scheme one is selected, refer to (a); if scheme two is selected, refer to (b):
(a) Based on the saliency map of the image, calculate the saliency contrast masking factor of the image:
F_{vs}(n,i,j) = \mu\,(S_{max} - S(n,i,j)) \quad (12)
The larger the saliency value, the more attention the human eye pays to that point. Here S_max denotes the maximum value of the normalized saliency map, S(n, i, j) denotes the saliency at the corresponding position (i, j) within block n of the normalized saliency map, and μ denotes a modulation factor, taken as μ = 1.0 in this example.
(b) Based on the saliency map of the image, represent the saliency of each whole block by the mean saliency value of that block, and, based on the saliency of each block, calculate the saliency contrast masking factor of the image:
F_{vs}(n) = \mu\,(S_{max}(n) - S(n)) \quad (13)
where S_max(n) denotes the maximum of the saliency values of all blocks, S(n) denotes the saliency of block n, μ denotes a modulation factor, taken as μ = 1.0 in this example, and F_vs(n) denotes the saliency masking factor of each block.
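Formulas (12) and (13) could be sketched as follows; the helper name and the expansion of per-block values back to pixel resolution are conveniences of the sketch, not part of the method:

```python
import numpy as np

def saliency_masking_factor(sal, N=8, mu=1.0, scheme=1):
    """Visual attention (saliency) masking factor of step (5).

    scheme 1 follows formula (12): per-pixel F_vs = mu * (S_max - S(n, i, j));
    scheme 2 follows formula (13): per-block F_vs = mu * (S_max - mean block
    saliency). sal is assumed to be a normalized saliency map.
    """
    if scheme == 1:
        return mu * (sal.max() - sal)                      # same shape as the image
    H, W = sal.shape
    block_sal = sal.reshape(H // N, N, W // N, N).mean(axis=(1, 3))
    f_vs_block = mu * (block_sal.max() - block_sal)        # one value per block
    # Expand back to pixel resolution so it can modulate per-coefficient thresholds.
    return np.kron(f_vs_block, np.ones((N, N)))
```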
Step (6): classify the image more finely by jointly considering the block-structure and saliency aspects.
First, according to the saliency of the image, divide the image into a salient region and a non-salient region (as shown in Fig. 4) by thresholding the saliency map, formula (14), where T denotes the segmentation threshold, taken as the mean of the saliency map in this example.
Combining the saliency segmentation result with the block classification result, the image is classified more finely, as shown in Fig. 5.
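The finer classification of step (6) could be sketched as follows; since the segmentation rule of formula (14) is given above only through its threshold T, a block is treated as salient here when its mean saliency exceeds T, which is an assumption of the sketch:

```python
import numpy as np

def fine_block_classes(labels, sal, N=8):
    """Step (6): combine block classification with saliency segmentation,
    yielding classes such as 'salient-texture' or 'nonsalient-edge'."""
    H, W = sal.shape
    T = sal.mean()                                          # segmentation threshold
    block_sal = sal.reshape(H // N, N, W // N, N).mean(axis=(1, 3))
    fine = np.empty_like(labels)
    for by in range(labels.shape[0]):
        for bx in range(labels.shape[1]):
            region = 'salient' if block_sal[by, bx] > T else 'nonsalient'
            fine[by, bx] = region + '-' + labels[by, bx]
    return fine
```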
Step (7): for different block structures, set different saliency adjustment factors and block-structure adjustment factors, and from these calculate the composite modulation function:
F^{vs}_{contrast}(n,i,j) = \alpha \times F_{vs} + \beta \times F_{contrast}(n,i,j) \quad (15)
In formula (15), if scheme one is adopted, F_vs denotes the saliency masking factor calculated by formula (12); if scheme two is adopted, F_vs denotes the saliency masking factor calculated by formula (13). F_contrast(n, i, j) denotes the block-structure-based contrast masking factor calculated by formula (10). The values of the saliency modulation factor α and of the block-structure adjustment factor β are set according to the block structure:
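The modulation of formula (15) could be sketched as follows; the per-class values of α and β (the table following formula (15)) are not reproduced above, so the dictionary below holds placeholder values meant only to show the structure of the computation:

```python
import numpy as np

def composite_masking(f_vs, f_contrast, fine_labels, N=8, alpha_beta=None):
    """Composite contrast masking function of formula (15):
    F^vs_contrast = alpha * F_vs + beta * F_contrast, with (alpha, beta)
    chosen per fine block class. Values below are placeholders, not the
    patent's table."""
    if alpha_beta is None:
        alpha_beta = {                       # hypothetical values for illustration
            'salient-smooth':     (1.0, 1.0),
            'salient-edge':       (1.0, 1.0),
            'salient-texture':    (1.0, 1.0),
            'nonsalient-smooth':  (1.2, 1.0),
            'nonsalient-edge':    (1.2, 1.0),
            'nonsalient-texture': (1.2, 1.1),
        }
    out = np.empty_like(f_contrast, dtype=np.float64)
    for by in range(fine_labels.shape[0]):
        for bx in range(fine_labels.shape[1]):
            alpha, beta = alpha_beta[fine_labels[by, bx]]
            ys, xs = slice(by * N, (by + 1) * N), slice(bx * N, (bx + 1) * N)
            out[ys, xs] = alpha * f_vs[ys, xs] + beta * f_contrast[ys, xs]
    return out
```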
Step (8): based on the basic JND threshold calculated in step (2), use the composite modulation function obtained in step (7) to modulate the basic JND threshold:
T_{JND}(n,i,j) = T_{Basic}(n,i,j) \times F_{lum}(n) \times F^{vs}_{contrast}(n,i,j) \quad (17)
Combining all of the above steps yields the JND threshold of the image. This threshold takes into account the spatial contrast sensitivity effect, the luminance adaptation effect, the block-classification contrast masking effect and the visual attention mechanism, so it agrees better with the human visual system and is more accurate.
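For illustration, the sketches above can be chained as follows for scheme one; the function names were introduced only for these sketches, and F_contrast is replaced by a neutral placeholder because formula (10) is not reproduced above:

```python
import numpy as np
import cv2

img = cv2.imread('airplane.bmp', cv2.IMREAD_GRAYSCALE).astype(np.float64)

t_basic_lum = basic_jnd_threshold(img)               # steps (1)-(2): T_Basic * F_lum
labels = classify_blocks(img)                        # step (3): smooth/edge/texture blocks
sal = spectral_residual_saliency(img)                # step (4): saliency map S(x)
f_vs = saliency_masking_factor(sal, scheme=1)        # step (5), scheme one, formula (12)
fine = fine_block_classes(labels, sal)               # step (6): salient/non-salient x block type
f_contrast = np.ones_like(img)                       # placeholder for formula (10)
f_total = composite_masking(f_vs, f_contrast, fine)  # step (7), formula (15)
t_jnd = t_basic_lum * f_total                        # step (8), formula (17)
```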
The innovative points of the present invention are:
The idea of applying image saliency to improve the image just noticeable distortion model is proposed.
A visual attention masking factor based on image saliency is proposed; this factor describes the degree of attention the human eye pays to the image.
The saliency characteristics and block-structure characteristics of the image are jointly considered, and the image is classified more accurately and finely.
Based on the saliency properties and block-structure properties of different blocks, a composite modulation function is constructed to modulate the traditional JND model, yielding a JND threshold that better matches the visual system.

Claims (1)

1. A DCT-domain image just noticeable distortion calculation method, comprising the following steps:
Step S1: apply an 8×8 DCT to the selected image, transforming it from the spatial domain to the DCT domain;
Step S2: in the DCT domain, compute the basic just noticeable distortion (JND) value as the product of the spatial contrast sensitivity threshold and the luminance adaptation modulation factor;
Step S3: use the Canny edge detector to partition the image into blocks and classify them into smooth blocks, edge blocks and texture blocks, obtaining the block-structure-based contrast masking factor;
Step S4: use a visual attention model to perform saliency detection on the image and obtain its saliency map;
Step S5:
segment the image according to the saliency map obtained in step S4, dividing the image into salient and non-salient regions, and then, based on the saliency value of each point, obtain the contrast masking factor based on the visual attention mechanism;
or step S5 is: first divide the image into salient and non-salient regions, then partition the saliency map obtained in step S4 into blocks, replace the saliency value of each whole block by the mean of its saliency values, and, based on the saliency value of each block, obtain the contrast masking factor based on the visual attention mechanism;
Step S6: combine the salient/non-salient segmentation result of step S5 with the block classification result of step S3 to partition the image more finely, and combine the contrast masking factor based on the visual attention mechanism with the block-structure-based contrast masking factor according to a linear relationship, obtaining the composite contrast masking modulation function;
Step S7: use the composite contrast masking modulation function value computed in step S6 to modulate the basic JND threshold computed in step S2, obtaining the final JND threshold.
CN201310413594.4A 2013-09-12 2013-09-12 Image JND threshold calculation method based on the visual attention mechanism in the DCT domain Expired - Fee Related CN103475881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310413594.4A CN103475881B (en) 2013-09-12 2013-09-12 Image JND threshold calculation method based on the visual attention mechanism in the DCT domain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310413594.4A CN103475881B (en) 2013-09-12 2013-09-12 Image JND threshold calculation method based on the visual attention mechanism in the DCT domain

Publications (2)

Publication Number Publication Date
CN103475881A CN103475881A (en) 2013-12-25
CN103475881B true CN103475881B (en) 2016-11-23

Family

ID=49800559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310413594.4A Expired - Fee Related CN103475881B (en) 2013-09-12 2013-09-12 The image JND threshold value computational methods of view-based access control model attention mechanism in DCT domain

Country Status (1)

Country Link
CN (1) CN103475881B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104219525B (en) * 2014-09-01 2017-07-18 国家广播电影电视总局广播科学研究院 Perception method for video coding based on conspicuousness and minimum discernable distortion
CN105491391A (en) * 2014-09-15 2016-04-13 联想(北京)有限公司 Image compression method and electronic equipment
CN104754320B (en) * 2015-03-27 2017-05-31 同济大学 A kind of 3D JND threshold values computational methods
CN109525847B (en) * 2018-11-13 2021-04-30 华侨大学 Just noticeable distortion model threshold calculation method
CN109948699B (en) * 2019-03-19 2020-05-15 北京字节跳动网络技术有限公司 Method and device for generating feature map
CN109902763B (en) * 2019-03-19 2020-05-15 北京字节跳动网络技术有限公司 Method and device for generating feature map
CN109948700B (en) * 2019-03-19 2020-07-24 北京字节跳动网络技术有限公司 Method and device for generating feature map
CN110251076B (en) * 2019-06-21 2021-10-22 安徽大学 Method and device for detecting significance based on contrast and fusing visual attention
CN112437302B (en) * 2020-11-12 2022-09-13 深圳大学 JND prediction method and device for screen content image, computer device and storage medium
CN112435188B (en) * 2020-11-23 2023-09-22 深圳大学 JND prediction method and device based on direction weight, computer equipment and storage medium
CN114666619B (en) * 2022-03-11 2024-05-03 平安国际智慧城市科技股份有限公司 Watermarking method, device and equipment for video file and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101621708A (en) * 2009-07-29 2010-01-06 武汉大学 Method for computing perceptible distortion of color image based on DCT field
CN101710995A (en) * 2009-12-10 2010-05-19 武汉大学 Video coding system based on vision characteristic
CN102750706A (en) * 2012-07-13 2012-10-24 武汉大学 Depth significance-based stereopicture just noticeable difference (JND) model building method
CN102905130A (en) * 2012-09-29 2013-01-30 浙江大学 Multi-resolution JND (Just Noticeable Difference) model building method based on visual perception

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100750138B1 (en) * 2005-11-16 2007-08-21 삼성전자주식회사 Method and apparatus for image encoding and decoding considering the characteristic of human visual system


Also Published As

Publication number Publication date
CN103475881A (en) 2013-12-25

Similar Documents

Publication Publication Date Title
CN103475881B (en) Image JND threshold calculation method based on the visual attention mechanism in the DCT domain
Gao et al. Image quality assessment based on multiscale geometric analysis
Stamm et al. Anti-forensics of digital image compression
Wan et al. A novel just noticeable difference model via orientation regularity in DCT domain
Masry et al. A scalable wavelet-based video distortion metric and applications
DE60314305T2 (en) Header-based processing of multi-scale transformed images
Bhowmik et al. Visual attention-based image watermarking
CN101621708B (en) Method for computing perceptible distortion of color image based on DCT domain
CN108921800A (en) Non-local mean denoising method based on form adaptive search window
CN104378636B (en) Video encoding method and device
Stamm et al. Wavelet-based image compression anti-forensics
US8260067B2 (en) Detection technique for digitally altered images
CN103200421A (en) No-reference image quality evaluation method based on Curvelet transformation and phase coincidence
Zhang et al. Multi-focus image fusion algorithm based on compound PCNN in Surfacelet domain
CN103313047A (en) Video coding method and apparatus
CN104268590A (en) Blind image quality evaluation method based on complementarity combination characteristics and multiphase regression
CN107454413A (en) Feature-preserving video coding method
Li et al. A natural image quality evaluation metric
Baviskar et al. Performance evaluation of high quality image compression techniques
CN108648180A (en) Full-reference image quality assessment method based on deep fusion processing of multiple visual characteristics
CN108550152A (en) Full reference picture assessment method for encoding quality based on depth characteristic perceptual inference
Wu et al. An improved model of pixel adaptive just-noticeable difference estimation
Abd-Elhafiez Image compression algorithm using a fast curvelet transform
Albanesi et al. An HVS-based adaptive coder for perceptually lossy image compression
CN109300086B (en) Image blocking method based on definition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20161123

Termination date: 20190912