CN109525847A - Just noticeable distortion model threshold calculation method - Google Patents

Just noticeable distortion model threshold calculation method

Info

Publication number
CN109525847A
Authority
CN
China
Prior art keywords
block
dct
threshold value
calculation method
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811345416.1A
Other languages
Chinese (zh)
Other versions
CN109525847B (en)
Inventor
曾焕强
曾志鹏
陈婧
朱建清
蔡灿辉
马凯光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaqiao University
Original Assignee
Huaqiao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaqiao University
Priority to CN201811345416.1A
Publication of CN109525847A
Application granted
Publication of CN109525847B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/625: ... using transform coding using discrete cosine transform [DCT]
    • H04N19/137: ... using adaptive coding; motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/14: ... using adaptive coding; coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/176: ... using adaptive coding; the coding unit being an image region that is a block, e.g. a macroblock

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a just noticeable distortion (JND) model threshold calculation method, comprising: performing a DCT transform on the original image and computing the corresponding luminance adaptation module value and spatial contrast sensitivity function module value; using the frequency-energy distribution of each 8 × 8 DCT block to classify the texture blocks of the image more finely, obtaining the contrast masking factor and computing the contrast masking module value; extracting the texture features of the current image block from the spatial frequency distribution of its DCT coefficients, computing the texture difference between pairs of blocks, and obtaining a visual perception adjustment factor for each block; and integrating the above modules to obtain the final JND threshold. Under the premise of guaranteeing visual quality, the proposed JND model can accommodate more noise. The model can be widely used in perceptual image/video coding, watermarking, quality assessment, and similar applications.

Description

Just noticeable distortion model threshold calculation method
Technical field
The present invention relates to the field of image/video coding, and in particular to a perceptual visual redundancy estimation model.
Background technique
With the rapid development of multimedia technology, images and video have become the main carriers through which people obtain information, but people must also face the challenges of transmitting and storing the resulting large volumes of image/video data. How to perform image/video coding efficiently has therefore become a research hotspot in both academia and industry. Traditional image/video coding techniques improve coding efficiency mainly by removing spatial and temporal redundancy, but in doing so they often ignore the visual perception of the human eye. To further improve coding efficiency, researchers have begun to study visual redundancy estimation for images and video; the just noticeable difference (JND) model is a perceptual model that can efficiently characterize visual redundancy.
JND models can be roughly divided into two types: pixel-domain JND models and DCT-domain JND models. This patent focuses on DCT-domain JND models. The contrast masking term of a traditional DCT-domain JND model usually accounts only for the visual masking effects of three block types (smooth, edge, and texture blocks), ignoring the differing influence of texture blocks of different complexity as well as the relationships between blocks of the image. The accuracy with which traditional DCT-domain JND models estimate visual redundancy therefore needs further improvement.
Summary of the invention
The main object of the present invention is to overcome the above drawbacks of the prior art by proposing a just noticeable distortion model threshold calculation method that further considers the potentially different visual masking effects of texture blocks of different complexity, as well as the relationships between image blocks, yielding a more accurate visual redundancy estimation model.
The present invention adopts the following technical scheme:
A just noticeable distortion model threshold calculation method, characterized by the following steps:
1) perform a DCT transform on the input image;
2) operate on the DCT coefficients to compute the luminance adaptation module value F_LA and the spatial contrast sensitivity function module value;
3) using the frequency-energy distribution of each 8 × 8 DCT block, further subdivide the texture block types and compute the contrast masking module value;
4) based on the frequency partition of the 8 × 8 DCT block, extract the texture features of the current block, compute the texture differences between the current block and the other blocks, and obtain the visual perception adjustment factor of each block;
5) integrate the luminance adaptation module value, the spatial contrast sensitivity function module value, the contrast masking module value, and the visual perception adjustment factors of the different blocks to obtain the JND threshold of the image.
The luminance adaptation module value and the spatial contrast sensitivity function module value are obtained by respective formulas in which (i, j) denotes a coefficient position within the 8 × 8 DCT block, with i and j integers in 0~7; F_LA denotes the luminance adaptation module value; F_CSF(i, j) denotes the spatial contrast sensitivity function module value; Ī denotes the mean intensity of the 8 × 8 DCT block; φ_i and φ_j are the normalization factors of the DCT transform; ω_ij denotes the spatial frequency; θ_ij is the orientation angle of the corresponding DCT component; and a, b, c, r, s are constants.
According to the frequency distribution of the 8 × 8 DCT block coefficients, the texture block types are further subdivided, and the contrast masking module value is obtained by a formula in which ψ is the contrast masking factor, C(n_1, n_2, i, j) is the DCT coefficient at the corresponding position, (n_1, n_2) is the location index of the current DCT block in the image, and ε is set to 0.36.
The contrast masking factor is obtained by a formula based on the texture-block energy E_t = E + H, where E and H denote the sums of the absolute values of the DCT coefficients in the MF and HF regions, respectively.
The visual perception adjustment factor is obtained by the following formula:
α_k = 1 − S_k
where S_k is the weight of the k-th DCT block in the image; k is an integer in 1~⌈W/8⌉ × ⌈H/8⌉, where W and H are the width and height of the input image, respectively, and ⌈·⌉ denotes rounding up.
The weight of each DCT block in the image is obtained by a formula in which σ is the Gaussian model parameter, d_kl is the Euclidean distance between the k-th and l-th DCT blocks, and D_kl is the texture difference between the k-th and l-th DCT blocks.
The texture difference D_kl is obtained by the following formula:
D_kl = max(h(T_k, T_l), h(T_l, T_k))
where T denotes the texture feature of a DCT block and h(T_k, T_l) is the Hausdorff distance between the texture features of the k-th and l-th DCT blocks.
The Hausdorff distance between the texture features of two different DCT blocks, and the texture feature of a DCT block, are obtained by the following formulas:
T = {t_LF, t_MF, t_HF}
where t_LF, t_MF, and t_HF are the sums of the coefficients in the LF, MF, and HF regions, respectively, and ||·|| denotes the 2-norm.
The JND threshold value of the DCT domain is obtained by following formula:
JND threshold value=visual perception Dynamic gene × spatial contrast sensitivity function module value × brightness adaptation module value × contrast masking sensitivity module value.
From the above description of the present invention, compared with the prior art, the invention has the following beneficial effects:
1. The method of the present invention classifies texture blocks more finely according to the frequency energy of the DCT block, so that the contrast masking estimate better matches the characteristics of human vision;
2. The method of the present invention considers the relationships between image blocks: it extracts the texture features of each DCT block and obtains the visual perception adjustment factor of each image block by computing the texture difference between the current block and the other blocks, which is conducive to accurate estimation of visual redundancy.
Detailed description of the invention
Fig. 1 is the main flow chart of the method of the present invention.
Fig. 2 shows the frequency partition of an 8 × 8 DCT block.
Specific embodiment
The invention will be further described below by way of a specific embodiment.
To estimate visual redundancy more accurately, the present invention provides a just noticeable distortion model threshold calculation method. As shown in Fig. 1, the specific implementation steps are as follows:
1) Perform a DCT transform on the input image.
2) Operate on the DCT coefficients to compute the luminance adaptation module value and the spatial contrast sensitivity function module value.
3) Using the frequency-energy distribution of each 8 × 8 DCT block, further subdivide the texture block types and compute the contrast masking module value.
4) Based on the frequency partition of the 8 × 8 DCT block, extract the texture features of the current block, compute the texture differences between the current block and the other blocks, and obtain the visual perception adjustment factor of each block.
5) Integrate the luminance adaptation module, the spatial contrast sensitivity function module, the contrast masking module, and the visual perception adjustment factors of the different blocks to obtain the JND threshold of the image.
Step 1): perform a DCT transform on the input image.
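The following Python sketch illustrates step 1 as a blockwise 8 × 8 DCT of a grayscale image. The function name and array layout are illustrative, not from the patent; only NumPy and SciPy are assumed.

```python
import numpy as np
from scipy.fftpack import dct

def blockwise_dct(img, bs=8):
    """Split a grayscale image into bs x bs blocks and return their 2-D DCT
    coefficients as an array of shape (rows, cols, bs, bs)."""
    h, w = img.shape
    h8, w8 = h - h % bs, w - w % bs   # crop to a multiple of the block size
    blocks = img[:h8, :w8].reshape(h8 // bs, bs, w8 // bs, bs).swapaxes(1, 2)
    # Type-II DCT with orthonormal scaling, applied along rows then columns
    return dct(dct(blocks, norm='ortho', axis=-1), norm='ortho', axis=-2)
```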
Step 2): compute the luminance adaptation module value F_LA and the spatial contrast sensitivity function module value F_CSF(i, j), where (i, j) denotes a coefficient position within the 8 × 8 DCT block, with i and j integers in 0~7; Ī denotes the mean intensity of the 8 × 8 DCT block; φ_i and φ_j are the normalization factors of the DCT transform; ω_ij denotes the spatial frequency; θ_ij is the orientation angle of the corresponding DCT component; and a, b, c, r, s are constants set to 1.33, 0.11, 0.18, 0.6, and 0.25, respectively.
Step 3): compute the contrast masking module value.
Specifically, first compute the contrast masking factor ψ from the texture-block energy E_t = E + H, where E and H denote the sums of the absolute values of the DCT coefficients in the MF and HF regions, respectively.
The contrast masking module value F_CM is then obtained by a formula in which ψ is the contrast masking factor, C(n_1, n_2, i, j) is the DCT coefficient at the corresponding position, (n_1, n_2) is the location index of the current DCT block in the image, (i, j) is the coordinate position of the DCT coefficient within the current DCT block, and ε is set to 0.36.
Step 4): compute the perception adjustment factor of each image block.
Specifically, extract the texture feature T of each 8 × 8 DCT block and compute the Hausdorff distance between the texture features of two different DCT blocks:
T = {t_LF, t_MF, t_HF}
where t_LF, t_MF, and t_HF are the sums of the coefficients in the LF, MF, and HF regions, respectively; ||·|| denotes the 2-norm; and k and l are integers in 1~⌈W/8⌉ × ⌈H/8⌉, where W and H are the width and height of the input image, respectively, and ⌈·⌉ denotes rounding up.
The texture difference D_kl is obtained by the following formula:
D_kl = max(h(T_k, T_l), h(T_l, T_k))
where h(T_k, T_l) is the Hausdorff distance between the texture features of the k-th and l-th DCT blocks.
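A minimal sketch of the texture feature T and the Hausdorff-based texture difference D_kl, continuing the previous sketch (LF, MF, HF masks as defined above). Treating T as a set of three scalar sums per the text, the 2-norm between set elements reduces to an absolute difference; the helper names are illustrative.

```python
import numpy as np  # LF, MF, HF masks as defined in the previous sketch

def texture_feature(block):
    """T = {t_LF, t_MF, t_HF}: sums of the coefficients in the three regions."""
    return np.array([block[LF].sum(), block[MF].sum(), block[HF].sum()])

def h_dist(Ta, Tb):
    """Directed Hausdorff distance h(T_a, T_b) = max_a min_b ||a - b||; with
    scalar set elements the 2-norm reduces to an absolute difference."""
    return max(min(abs(a - b) for b in Tb) for a in Ta)

def texture_difference(Tk, Tl):
    """D_kl = max(h(T_k, T_l), h(T_l, T_k))."""
    return max(h_dist(Tk, Tl), h_dist(Tl, Tk))
```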
The weight of each DCT block in the image is then obtained by a formula in which S_k is the weight of the k-th DCT block in the image; σ is the Gaussian model parameter, set to 5; d_kl is the Euclidean distance between the k-th and l-th DCT blocks; and D_kl is the texture difference between the k-th and l-th DCT blocks.
Based on the block weights S_k, the visual perception adjustment factor of the current block is computed as follows:
α_k = 1 − S_k
where S_k is the weight of the k-th DCT block in the image and k is an integer in 1~⌈W/8⌉ × ⌈H/8⌉, where W and H are the width and height of the input image, respectively, and ⌈·⌉ denotes rounding up.
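The S_k formula image is not reproduced. The sketch below assumes one plausible form in which each pairwise texture difference D_kl is weighted by a Gaussian of the spatial distance d_kl (σ = 5), and the accumulated values are normalized so that α_k = 1 − S_k lies in [0, 1]; the normalization and the exact combination of D_kl and d_kl are assumptions. It reuses texture_difference from the previous sketch.

```python
import numpy as np  # texture_difference as in the previous sketch

def perception_adjustment_factors(features, positions, sigma=5.0):
    """Return alpha_k = 1 - S_k for every block.
    features: per-block texture features T_k; positions: (row, col) indices."""
    n = len(features)
    S = np.zeros(n)
    for k in range(n):
        for l in range(n):
            if l == k:
                continue
            d_kl = np.hypot(positions[k][0] - positions[l][0],
                            positions[k][1] - positions[l][1])  # distance d_kl
            D_kl = texture_difference(features[k], features[l])  # difference D_kl
            S[k] += D_kl * np.exp(-d_kl**2 / (2.0 * sigma**2))   # Gaussian falloff
    S /= S.max() + 1e-12  # normalization (assumption) so S_k, alpha_k are in [0, 1]
    return 1.0 - S
```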
Step 5): obtain the JND threshold by the following formula:
JND threshold = visual perception adjustment factor × spatial contrast sensitivity function module value × luminance adaptation module value × contrast masking module value, that is:
JND(n_1, n_2, i, j) = α(n_1, n_2) · F_CSF(n_1, n_2, i, j) · F_LA(n_1, n_2) · F_CM(n_1, n_2, i, j).
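Step 5 is a straightforward elementwise product; a minimal sketch combining the per-block scalars (α, F_LA) with the per-coefficient arrays (F_CSF, F_CM):

```python
def jnd_threshold(alpha, F_la, F_csf, F_cm):
    """JND(n1, n2, i, j) = alpha(n1, n2) * F_CSF(n1, n2, i, j)
                         * F_LA(n1, n2) * F_CM(n1, n2, i, j).
    alpha and F_la have shape (rows, cols); F_csf and F_cm have shape
    (rows, cols, 8, 8). A flat alpha from the previous sketch can first be
    reshaped with alpha.reshape(rows, cols)."""
    return alpha[..., None, None] * F_csf * F_la[..., None, None] * F_cm
```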
The above is only a specific embodiment of the present invention, but the design concept of the present invention is not limited thereto; any insubstantial modification of the present invention made using this concept shall constitute an infringement of the scope of protection of the present invention.

Claims (9)

1. A just noticeable distortion model threshold calculation method, characterized by the following steps:
1) perform a DCT transform on the input image;
2) operate on the DCT coefficients to compute the luminance adaptation module value F_LA and the spatial contrast sensitivity function module value;
3) using the frequency-energy distribution of each 8 × 8 DCT block, further subdivide the texture block types and compute the contrast masking module value;
4) based on the frequency partition of the 8 × 8 DCT block, extract the texture features of the current block, compute the texture differences between the current block and the other blocks, and obtain the visual perception adjustment factor of each block;
5) integrate the luminance adaptation module value, the spatial contrast sensitivity function module value, the contrast masking module value, and the visual perception adjustment factors of the different blocks to obtain the JND threshold of the image.
2. The just noticeable distortion model threshold calculation method according to claim 1, characterized in that the luminance adaptation module value and the spatial contrast sensitivity function module value are obtained by respective formulas in which (i, j) denotes a coefficient position within the 8 × 8 DCT block, with i and j integers in 0~7; F_LA denotes the luminance adaptation module value; F_CSF(i, j) denotes the spatial contrast sensitivity function module value; Ī denotes the mean intensity of the 8 × 8 DCT block; φ_i and φ_j are the normalization factors of the DCT transform; ω_ij denotes the spatial frequency; θ_ij is the orientation angle of the corresponding DCT component; and a, b, c, r, s are constants.
3. The just noticeable distortion model threshold calculation method according to claim 2, characterized in that, according to the frequency distribution of the 8 × 8 DCT block coefficients, the texture block types are further subdivided and the contrast masking module value is obtained by a formula in which ψ is the contrast masking factor, C(n_1, n_2, i, j) is the DCT coefficient at the corresponding position, (n_1, n_2) is the location index of the current DCT block in the image, and ε is set to 0.36.
4. The just noticeable distortion model threshold calculation method according to claim 3, characterized in that the contrast masking factor is obtained by a formula based on the texture-block energy E_t = E + H, where E and H denote the sums of the absolute values of the DCT coefficients in the MF and HF regions, respectively.
5. The just noticeable distortion model threshold calculation method according to claim 1, characterized in that the visual perception adjustment factor is obtained by the following formula:
α_k = 1 − S_k
where S_k is the weight of the k-th DCT block in the image; k is an integer in 1~⌈W/8⌉ × ⌈H/8⌉, where W and H are the width and height of the input image, respectively, and ⌈·⌉ denotes rounding up.
6. The just noticeable distortion model threshold calculation method according to claim 5, characterized in that the weight of each DCT block in the image is obtained by a formula in which σ is the Gaussian model parameter, d_kl is the Euclidean distance between the k-th and l-th DCT blocks, and D_kl is the texture difference between the k-th and l-th DCT blocks.
7. The just noticeable distortion model threshold calculation method according to claim 6, characterized in that the texture difference D_kl is obtained by the following formula:
D_kl = max(h(T_k, T_l), h(T_l, T_k))
where T denotes the texture feature of a DCT block and h(T_k, T_l) is the Hausdorff distance between the texture features of the k-th and l-th DCT blocks.
8. The just noticeable distortion model threshold calculation method according to claim 7, characterized in that the Hausdorff distance between the texture features of two different DCT blocks, and the texture feature of a DCT block, are obtained by the following formulas:
T = {t_LF, t_MF, t_HF}
where t_LF, t_MF, and t_HF are the sums of the coefficients in the LF, MF, and HF regions, respectively, and ||·|| denotes the 2-norm.
9. The just noticeable distortion model threshold calculation method according to claim 1, characterized in that the JND threshold in the DCT domain is obtained by the following formula:
JND threshold = visual perception adjustment factor × spatial contrast sensitivity function module value × luminance adaptation module value × contrast masking module value.
CN201811345416.1A 2018-11-13 2018-11-13 Just noticeable distortion model threshold calculation method Active CN109525847B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811345416.1A CN109525847B (en) 2018-11-13 2018-11-13 Just noticeable distortion model threshold calculation method


Publications (2)

Publication Number Publication Date
CN109525847A 2019-03-26
CN109525847B CN109525847B (en) 2021-04-30

Family

ID=65776416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811345416.1A Active CN109525847B (en) 2018-11-13 2018-11-13 Just noticeable distortion model threshold calculation method

Country Status (1)

Country Link
CN (1) CN109525847B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101621708A (en) * 2009-07-29 2010-01-06 武汉大学 Method for computing perceptible distortion of color image based on DCT field
US20110243228A1 (en) * 2010-03-30 2011-10-06 Hong Kong Applied Science and Technology Research Institute Company Limited Method and apparatus for video coding by abt-based just noticeable difference model
CN102420988A (en) * 2011-12-02 2012-04-18 上海大学 Multi-view video coding system utilizing visual characteristics
US20150117775A1 (en) * 2012-03-30 2015-04-30 Eizo Corporation Method for correcting gradations and device or method for determining threshold of epsilon filter
CN103475881A (en) * 2013-09-12 2013-12-25 同济大学 Image JND threshold value computing method in DCT domain and based on visual attention mechanism
CN104469386A (en) * 2014-12-15 2015-03-25 西安电子科技大学 Stereoscopic video perception and coding method for just-noticeable error model based on DOF

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SUNG-HO BAE: "A DCT-Based Total JND Profile for Spatiotemporal and Foveated Masking Effects", IEEE Transactions on Circuits and Systems for Video Technology *
SUNG-HO BAE: "A Novel DCT-Based JND Model for Luminance Adaptation Effect in DCT Frequency", IEEE Signal Processing Letters *
ZHIPENG ZENG: "A novel direction-based JND model for perceptual HEVC intra coding", 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS) *
郑明魁: "Transform-domain JND model and image coding method based on texture decomposition" (基于纹理分解的变换域JND模型及图像编码方法), Journal on Communications (通信学报) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109982022A (en) * 2019-04-17 2019-07-05 南京大学 Video refresh method based on the minimum color difference detectable by the human eye
CN110062234A (en) * 2019-04-29 2019-07-26 同济大学 Perceptual video coding method based on regional just noticeable distortion
CN112634278A (en) * 2020-10-30 2021-04-09 上海大学 Superpixel-based just noticeable distortion model
CN112634278B (en) * 2020-10-30 2022-06-14 上海大学 Super-pixel-based just noticeable distortion method
CN112437302A (en) * 2020-11-12 2021-03-02 深圳大学 JND prediction method and device for screen content image, computer device and storage medium
CN112584153A (en) * 2020-12-15 2021-03-30 深圳大学 Video compression method and device based on just noticeable distortion model
CN112866820A (en) * 2020-12-31 2021-05-28 宁波大学科学技术学院 Robust HDR video watermark embedding and extracting method and system based on JND model and T-QR and storage medium
CN112866820B (en) * 2020-12-31 2022-03-08 宁波大学科学技术学院 Robust HDR video watermark embedding and extracting method and system based on JND model and T-QR and storage medium
CN112967229A (en) * 2021-02-03 2021-06-15 杭州电子科技大学 Method for calculating just noticeable distortion threshold based on video perception characteristic parameter measurement
CN112967229B (en) * 2021-02-03 2024-04-26 杭州电子科技大学 Method for calculating just-perceived distortion threshold based on video perception characteristic parameter measurement
CN113192083A (en) * 2021-05-07 2021-07-30 宁波大学 Texture masking effect-based image just noticeable distortion threshold estimation method
CN113192083B (en) * 2021-05-07 2023-06-06 宁波大学 Image just noticeable distortion threshold estimation method based on texture masking effect

Also Published As

Publication number Publication date
CN109525847B (en) 2021-04-30

Similar Documents

Publication Publication Date Title
CN109525847A Just noticeable distortion model threshold calculation method
CN110428433B (en) Canny edge detection algorithm based on local threshold
Li et al. Quality assessment of DIBR-synthesized images by measuring local geometric distortions and global sharpness
CN109872285B (en) Retinex low-illumination color image enhancement method based on variational constraint
CN105740945B People counting method based on video analysis
CN103002289B (en) Video constant quality coding device for monitoring application and coding method thereof
CN108921800A (en) Non-local mean denoising method based on form adaptive search window
CN103679173B (en) Method for detecting image salient region
CN108063944B (en) Perception code rate control method based on visual saliency
CN104574404B Stereoscopic image retargeting method
CN104994375A (en) Three-dimensional image quality objective evaluation method based on three-dimensional visual saliency
CN111353552A Image similarity comparison method based on a perceptual hash algorithm
CN105049851A (en) Channel no-reference image quality evaluation method based on color perception
CN112801896B (en) Backlight image enhancement method based on foreground extraction
WO2017181575A1 (en) Two-dimensional image depth-of-field generating method and device
CN104463814A (en) Image enhancement method based on local texture directionality
CN110378893A (en) Image quality evaluating method, device and electronic equipment
Dawod et al. A new method for hand segmentation using free-form skin color model
CN117558068B (en) Intelligent device gesture recognition method based on multi-source data fusion
CN108491883B (en) Saliency detection optimization method based on conditional random field
CN106021610B Video fingerprint extraction method based on salient regions
CN111724325B (en) Trilateral filtering image processing method and trilateral filtering image processing device
CN109509137A Image watermark embedding and blind extraction method with an embedding ratio of 1/16
CN108924542A No-reference 3D video quality evaluation method based on saliency and sparsity
Fan et al. Learning-based satisfied user ratio prediction for symmetrically and asymmetrically compressed stereoscopic images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant