CN108965879B - Space-time domain adaptive just noticeable distortion measurement method

Info

Publication number: CN108965879B
Application number: CN201811016478.8A
Authority: CN (China)
Prior art keywords: JND, time domain, space, information, adaptive
Priority date: 2018-08-31
Legal status: Active (granted)
Inventors: 殷海兵 (Yin Haibing), 夏光晶 (Xia Guangjing), 黄晓峰 (Huang Xiaofeng)
Assignee (current and original): Hangzhou Dianzi University
Other versions: CN108965879A (application publication)
Other languages: Chinese (zh)
Application filed by Hangzhou Dianzi University; published as CN108965879A; granted as CN108965879B.

Classifications

    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals (H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION)
    • H04N19/147 Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/567 Motion estimation based on rate distortion criteria

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a measurement method for space-time domain adaptive just noticeable distortion (JND), characterized by comprising the following steps. Step one: fuse the pattern masking effect PM and the luminance adaptation LA to obtain the spatial-domain JND threshold JND_S. Step two: fuse four temporal perceptual parameters, namely the self-information of the relative motion I(v_r), the information entropy of the background motion U(v_g), the self-information of the temporal duration I(τ), and the information entropy of the residual fluctuation intensity U(ε), to obtain the temporal JND modulation weight coefficient z. Step three: on the basis of the spatial-domain JND threshold JND_S, apply the temporal JND modulation weight coefficient z to obtain the space-time domain JND threshold JND_ST.

Description

Space-time domain adaptive just noticeable distortion measurement method
Technical Field
The invention belongs to the field of image and video processing, and particularly relates to a method for measuring space-time domain adaptive just noticeable distortion.
Background
The human visual system (HVS) has perceptual characteristics such as temporal masking, spatial masking, contrast sensitivity, luminance adaptation, visual attention, foveal characteristics, and the recency effect. These HVS perceptual characteristics shape an observer's subjective perception of a target image or video, and therefore its subjective perceptual quality.
Because of the various masking effects of the human eye, only noise or distortion exceeding a certain threshold can be perceived; this minimum perceivable distortion threshold is the just noticeable distortion (JND). The mainstream JND models currently fall into two classes:
(1) Closest prior art 1: transform-domain JND models
These methods apply a contrast sensitivity function (CSF) to the DCT transform domain to construct a JND model; or additionally incorporate the luminance and contrast masking effects into the CSF curve; or take the temporal masking effect into account and construct a JND model based on a spatio-temporal CSF curve; or divide transform blocks into flat, edge, and texture classes, consider the directional factor of motion, and build a JND model on a spatio-temporal CSF curve. Bae proposed a spatio-temporally adaptive transform-domain JND model for transform blocks of different sizes, based on spatio-temporal masking characteristics and the foveal characteristic.
These methods apply the contrast sensitivity function directly in the model and can describe well the sensitivity of the human eye to different frequency components, but they have a drawback: block-based transforms sever the correlation between neighboring image blocks and can exploit only the correlation between pixels within a block.
(2) Closest prior art 2: pixel-domain JND models
Early image pixel-domain JND models mainly considered luminance masking and contrast masking and their mutual influence. Chen et al. integrated foveal properties into the JND model, with weighting coefficients adjusted according to retinal eccentricity; Zhao et al. studied the JND model under binocular stereoscopic vision; Liu et al. used a variational method to distinguish structural edge pixels from texture pixels and improved the spatial-domain JND model; Wu et al., drawing on free-energy theory and Bayesian prediction theory, proposed a measure based on structural uncertainty and a spatial-domain JND model built on it.
The main problems with this class of methods are:
(1) current pixel-domain estimation methods are not comprehensive; in particular, the relation between temporally adjacent pixels and the influence of inter-frame motion on the JND model are insufficiently considered;
(2) many factors influence human visual perception, and the influence of characteristics such as visual attention, spatio-temporal masking, foveal characteristics, and the recency effect cannot all be fully considered in JND modeling.
In a bottom-up analysis method, the influence of various features on JND perception is evaluated quantitatively, based on the visual characteristics of the human eye, by analyzing the features of the video image; however, accurately fusing image features of different dimensions remains difficult.
Disclosure of Invention
The technical problems to be solved by the invention are as follows:
(1) Video characteristic parameter measurement based on human visual perception:
analyze the relation between visual perception saliency and uncertainty in a video scene according to HVS perceptual characteristics; analyze measurement methods for visual-attention stimulus sources such as relative motion and temporal duration; and analyze measurement methods for visual-perception uncertainty sources such as background motion and residual fluctuation intensity, providing support for constructing a spatio-temporal just-noticeable-distortion model.
(2) Homogenization of heterogeneous perceptual parameters and spatio-temporal JND modeling:
in a bottom-up, stimulus-driven perceptual analysis, perceptual parameters such as relative motion, background motion, temporal duration, and residual fluctuation intensity all affect distortion perception. The invention explores homogenized measures for these heterogeneous perceptual parameters, a multi-parameter homogenized spatio-temporal perceptual distortion measure, and on that basis constructs a spatio-temporal just noticeable distortion (JND) model.
The invention provides a measurement method for space-time domain adaptive just noticeable distortion, comprising the following steps:
step one, fusing the pattern masking effect PM and the luminance adaptation LA to obtain the spatial-domain JND threshold JND_S;
step two, fusing four temporal perceptual parameters, namely the self-information of the relative motion I(v_r), the information entropy of the background motion U(v_g), the self-information of the temporal duration I(τ), and the information entropy of the residual fluctuation intensity U(ε), to obtain the temporal JND modulation weight coefficient z;
step three, on the basis of the spatial-domain JND threshold JND_S, applying the temporal JND modulation weight coefficient z to obtain the space-time domain JND threshold JND_ST.
Furthermore, in step two, the four temporal perceptual parameters are mapped into a parameter space measured by information quantity: visual perception saliency is measured with self-information and visual perception uncertainty with information entropy, so that a homogenized measurement of the different parameters is realized on a uniform information-quantity scale.
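As a toy illustration of this homogenization (not part of the patent text): saliency-type parameters map to the self-information -log p of their prior, and uncertainty-type parameters map to the differential entropy of their equivalent-noise lognormal distribution, so all four land on the same information-quantity (nats) scale. The numbers below are arbitrary examples:

```python
import numpy as np

p = 0.05                 # prior probability of an observed feature value
self_info = -np.log(p)   # saliency measure, in nats

sigma = 0.8              # width of an equivalent-noise lognormal
mu = np.log(2.0)         # log-median of the equivalent noise
# Differential entropy of a lognormal: mu + 0.5*ln(2*pi*e*sigma^2).
entropy = mu + 0.5 * np.log(2 * np.pi * np.e * sigma**2)

print(f"self-information: {self_info:.3f} nats, entropy: {entropy:.3f} nats")
```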
Further, the homogenized measure based on the self-information of the relative motion I(v_r) is as follows.
The prior probability distribution of the relative motion can be expressed as
p(v_r) = β·v_r^(-α) (3)
where the model parameters α, β are constants greater than 0. The visual perception saliency is represented by calculating the self-information of the relative motion:
I(v_r) = -log p(v_r) = α·log v_r - log β (4).
Further, the homogenized measure based on the information entropy of the background motion U(v_g) is as follows.
A lognormal distribution describes the background motion, acting as a stimulus, as equivalent noise:
p(m|v_g) = (1/(√(2π)·σ·m))·exp(-(ln m - ln v_g)²/(2σ²)) (5)
where m is the equivalent noise perceived while observing the video, and the width parameter σ of the Gaussian curve is constant with respect to v_g but inversely proportional to the contrast threshold c:
σ = λ/c^γ (6)
where the model parameters λ, γ are constants greater than 0. The visual perception uncertainty caused by the background motion is represented by calculating its information entropy:
U(v_g) = log v_g + (1/2)·log(2πe·σ²) (7).
Further, the homogenized measure for the temporal duration is as follows.
By fitting a statistical-analysis function, the prior probability distribution of the temporal duration is constructed as
p(τ) = δ·(τ - χ)^(-κ) for τ > χ, and p(τ) = 0 for τ < χ (8)
where the model parameters χ, κ, δ are constants greater than 0. Based on this probability distribution, the self-information of the temporal duration is calculated to represent the visual perception saliency:
I(τ) = -log p(τ) = κ·log(τ - χ) - log δ (9).
Further, the homogenized measure of the residual fluctuation intensity ε is as follows.
For the fluctuation intensity ε of the prediction residual on the motion trajectory, the uncertainty is equivalent to perceiving an equivalent noise m while observing the video, and the residual fluctuation intensity, acting as a stimulus, is described as equivalent noise by a lognormal distribution:
p(m|ε) = (1/(√(2π)·σ·m))·exp(-(ln m - ln ε)²/(2σ²)) (10)
where the width parameter σ is a constant inversely related to the luminance adaptation threshold LA:
σ = ξ/LA^η (11)
where ξ and η are constants greater than 0. The visual perception uncertainty it causes is represented by calculating the information entropy of the residual fluctuation intensity:
U(ε) = log ε + (1/2)·log(2πe·σ²) (12).
Further, in step one, the HVS is insensitive to darker or brighter background regions and highly sensitive to background regions of medium luminance; the luminance adaptation threshold may be calculated as
LA(x_c) = 17·(1 - √(B(x_c)/127)) + 3 if B(x_c) ≤ 127, and LA(x_c) = (3/128)·(B(x_c) - 127) + 3 otherwise (13)
where B(x_c) is the background luminance of pixel x_c.
The pattern masking effect function is derived from the luminance contrast and the structural uncertainty:
PM(x_c) = f(E(x_c), H_U(x_c)) (14)
where PM(x_c) is the visibility threshold caused by pattern masking, H_U(x_c) is the structural uncertainty based on an information-entropy measure, and E(x_c) is a parameter describing the luminance contrast, measured by the detected edge height.
The spatial-domain JND threshold JND_S is determined by the luminance adaptation threshold LA and the pattern-masking threshold PM as
JND_S = LA + PM - C_gr·min{LA, PM} (15)
where C_gr is 0.3.
Further, in step two, the four temporal feature parameters mapped to the same dimension are fused to obtain the temporal JND modulation weight coefficient z:
[equation (16), rendered as an image in the source, fuses I(v_r), U(v_g), I(τ), and U(ε) into z]
where μ, θ are constants greater than 0.
Further, in step three, on the basis of the spatial-domain JND threshold, the temporal JND modulation weight coefficient z is applied to obtain the space-time domain JND model JND_ST:
JND_ST = (1 + z)·JND_S (17).
In a bottom-up, stimulus-driven perceptual analysis, perceptual parameters such as luminance, chrominance, contrast, edges, motion, residual fluctuation intensity, and the visual focus point all affect perceived distortion. The method analyzes the relation between visual perception saliency and uncertainty in the video scene according to spatio-temporal HVS perceptual characteristics. Visual perception saliency is measured from visual-attention stimulus sources such as the relative motion of the video object and the temporal duration; visual perception uncertainty is measured from uncertainty sources such as the background motion of the video object and the fluctuation intensity of the motion-prediction residual. Based on saliency and uncertainty, the heterogeneous perceptual feature parameters are homogeneously mapped using self-information and information-entropy measures, measured on the same dimension, and a spatio-temporal perceptual JND model is constructed on that basis.
Technical effects
When random noise below the model's JND threshold is added to a test sequence, the subjective perceptual quality is the same as that of the original video without added noise;
at the same noise energy, noise added according to the model is less perceptible to the human eye in the test sequence.
Drawings
FIG. 1 is a graph of relative motion, background motion, temporal duration, and residual fluctuation intensity;
FIG. 2a is a probability distribution plot of relative motion;
FIG. 2b is a probability distribution plot for time domain duration;
fig. 3 is a block diagram of the system as a whole.
Detailed Description
The invention will be further explained with reference to the drawings.
Based on the HVS temporal perception characteristics, the relative motion of a video object between adjacent frames attracts visual attention and is a typical visual-attention stimulus source; on the other hand, according to the HVS motion-suppression effect, background motion caused by camera movement consumes visual attention energy, reduces the sensitivity of visual perception, and is a typical source of visual-perception uncertainty. The method first estimates the temporal motion trajectory of the video object and computes the object's relative motion v_r and the video background motion v_g; through statistical analysis it constructs the prior probability model p(v_r) and the equivalent-noise conditional probability model p(m|v_g), with m taken to be the equivalent noise of the uncertainty. Finally, the self-information of the relative motion I(v_r) is calculated to measure the visual perception saliency, and the information entropy of the background motion U(v_g) is calculated to measure the visual perception uncertainty, realizing a homogenized measurement of the two feature parameters on the information-quantity scale.
Likewise based on the HVS temporal perception characteristics, the duration of a video object along its temporal motion trajectory affects temporal perceptual sensitivity and is another typical visual-attention stimulus source; the degree of change of pixel gray values along the motion trajectory can be regarded as a temporal uncertainty source, and visual perception uncertainty is measured by the fluctuation intensity of the prediction residual on the trajectory. The method first estimates the temporal motion trajectory of the video object, calculates the temporal duration τ of the video object on the trajectory and the fluctuation intensity ε of the prediction residual on the trajectory, and constructs, through statistical analysis, the prior probability model p(τ) and the equivalent-noise conditional probability model p(m|ε), with m taken to be the equivalent noise of the uncertainty. Finally, the self-information of the temporal duration I(τ) is calculated to measure the visual perception saliency, and the information entropy of the residual fluctuation intensity U(ε) is calculated to measure the visual perception uncertainty, realizing a homogenized measurement of these two feature parameters on the information-quantity scale.
For spatial-domain visual perception, the method mainly considers the pattern masking effect PM and the luminance adaptation LA, fusing them to obtain the spatial-domain JND threshold JND_S.
The four homogenized parameters I(v_r), U(v_g), I(τ), and U(ε) are fused to obtain the temporal JND modulation weight coefficient z.
On the basis of the spatial-domain JND threshold JND_S, the temporal JND modulation weight coefficient z is applied to obtain the space-time domain JND threshold JND_ST.
Examples
1. Time domain perceptual feature and homogeneity metric
1.1 time-domain perceptual feature parameters
The invention mainly studies three types of video object motion: absolute motion, background motion, and relative motion. The absolute motion v_a is the displacement of pixels between two adjacent frames of the video, computed by an optical flow estimation method; the background motion v_g is determined by the peak of the absolute-motion histogram; the relative motion v_r is the difference between the absolute motion and the background motion:
v_r = v_a - v_g (1).
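A minimal sketch of this decomposition follows (not from the patent; the choice of OpenCV's Farneback optical flow and the histogram bin count are assumptions):

```python
import cv2
import numpy as np

def motion_decomposition(prev_gray, curr_gray, bins=64):
    # Absolute motion v_a: per-pixel displacement between adjacent frames,
    # estimated here with dense Farneback optical flow.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    v_a = np.linalg.norm(flow, axis=2)       # motion magnitude per pixel
    # Background motion v_g: the peak of the absolute-motion histogram.
    hist, edges = np.histogram(v_a, bins=bins)
    k = int(np.argmax(hist))
    v_g = 0.5 * (edges[k] + edges[k + 1])    # centre of the peak bin
    # Relative motion, equation (1): v_r = v_a - v_g.
    v_r = np.abs(v_a - v_g)
    return v_a, v_g, v_r
```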
assuming that the t-th frame is the current frame, (t-1) frame is the reference frame, the pixel at the (i, j) position in the t-th frame is located at (p, q) at the best matching point in the (t-1) frame, and the inter-frame motion prediction residual is e. The information is stored in a coordinate matrix f of a forward matching point and a residual error matrix efAnd deducing a backward matching point coordinate matrix b and a residual error matrix ebAs shown in equation 2.
Figure GDA0002572935120000071
Through the coordinate matrixes f and b, time-domain motion trajectories can be drawn for all pixels. Taking a certain pixel (i, j) of the current frame t as an example, according to the coordinates recorded in f and b, a motion trajectory can be drawn in several adjacent frames (time domain windows) from front to back, as shown by the multi-arrow line in fig. 1. The time duration τ is the number of frames that last on the motion trajectory, and due to occlusion and exposure, the durations of the pixels at different positions in the time window may be different, for example, the motion trajectory of the pixel in fig. 1 disappears in the (t +4) th frame. In this example τ is equal to 5. And on the other hand, according to the coordinates recorded in the matrixes f and b and the prediction residual matrix e, constructing a motion track, and calculating the standard deviation of the prediction residual on the motion track in a time domain window for measuring the residual fluctuation intensity.
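The sketch below (again an illustration, not the patent's code) traces one pixel's trajectory and computes τ and ε, assuming per-frame matching maps nxt[t] that send a pixel position in frame t to its match in frame t+1 (or None at occlusions) and residual maps res[t]; these hypothetical containers stand in for the matrices f, b, e_f, e_b:

```python
import numpy as np

def trajectory_stats(nxt, res, t0, pixel, window=5):
    # Follow the motion trajectory of `pixel` starting at frame t0 and
    # collect the prediction residuals along it.
    point, residuals, tau = pixel, [], 1
    for t in range(t0, t0 + window - 1):
        match = nxt[t].get(point)       # matched position in frame t+1
        if match is None:               # trajectory disappears (occlusion)
            break
        residuals.append(res[t][point])
        point, tau = match, tau + 1
    # tau: temporal duration within the window; eps: residual fluctuation
    # intensity, the std of the residuals along the trajectory.
    eps = float(np.std(residuals)) if residuals else 0.0
    return tau, eps
```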
1.2 homogeneity metric for relative motion
Motion is an important visual stimulus source: a moving target with larger relative motion attracts more visual attention and corresponds to greater visual perception saliency. The prior probability distribution of the relative motion found by statistical analysis can be expressed as
p(v_r) = β·v_r^(-α) (3)
where the model parameters α, β are constants greater than 0; the probability distribution of the relative motion is shown in fig. 2(a). The visual perception saliency is represented by calculating the self-information of the relative motion:
I(v_r) = -log p(v_r) = α·log v_r - log β (4)
Equation (4) shows that the visual perception saliency increases with increasing relative motion.
1.3 homogeneity metric for background motion
Background motion reduces the ability of the human eye to resolve details of the video content. This is known as the motion-suppression effect and can be understood as visual-perception uncertainty caused by the motion stimulus. The uncertainty is equivalent to perceiving an equivalent noise m while observing the video, and the method uses a lognormal distribution to describe the background motion, acting as a stimulus, as equivalent noise:
p(m|v_g) = (1/(√(2π)·σ·m))·exp(-(ln m - ln v_g)²/(2σ²)) (5)
where the width parameter σ of the Gaussian curve is constant with respect to v_g but inversely proportional to the contrast threshold c:
σ = λ/c^γ (6)
where the model parameters λ, γ are constants greater than 0. The visual perception uncertainty it causes is represented by calculating the information entropy of the background motion:
U(v_g) = log v_g + (1/2)·log(2πe·σ²) (7)
Equation (7) shows that the visual perception uncertainty increases with increasing background motion and decreases with an increasing contrast threshold.
1.4 homogeneity metric for time-domain duration
The human visual system exhibits a recency effect: it has a short-term memory effect, and perceptual sensitivity is relatively high for image content that stays longer in short-term memory. The duration of a video object's pixels along the temporal motion trajectory therefore has an important influence on perceived video quality. By fitting a statistical-analysis function, the prior probability distribution of the temporal duration is constructed as
p(τ) = δ·(τ - χ)^(-κ) for τ > χ, and p(τ) = 0 for τ < χ (8)
where the model parameters χ, κ, δ are constants greater than 0; the probability distribution of the temporal duration is shown in fig. 2(b), where the tallest histogram bin lies at the threshold χ. Based on this probability distribution, the self-information of the temporal duration is calculated to represent the visual perception saliency:
I(τ) = -log p(τ) = κ·log(τ - χ) - log δ (9)
Equation (9) shows that, for τ greater than χ, the visual perception saliency increases with increasing temporal duration.
1.5 homogeneity measure of residual fluctuation intensity
The longer the temporal duration of a video object (region), the more easily distortion fluctuations on it are perceived, and such regions may affect human visual perception more. The method focuses on the fluctuation intensity ε of the prediction residual on the motion trajectory, which can be understood as a perceptual uncertainty. The uncertainty is equivalent to perceiving an equivalent noise m while observing the video, and the residual fluctuation intensity, acting as a stimulus, is described as equivalent noise by a lognormal distribution:
p(m|ε) = (1/(√(2π)·σ·m))·exp(-(ln m - ln ε)²/(2σ²)) (10)
where the Gaussian width parameter σ is a constant inversely proportional to the luminance adaptation threshold LA:
σ = ξ/LA^η (11)
where ξ, η are constants greater than 0. The visual perception uncertainty caused by the residual fluctuation intensity is represented by calculating its information entropy:
U(ε) = log ε + (1/2)·log(2πe·σ²) (12)
Equation (12) shows that the visual perception uncertainty increases with increasing residual fluctuation intensity and decreases with an increasing luminance adaptation threshold.
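The four homogenized measures of sections 1.2 to 1.5 can be collected into a short sketch. The entropy expressions use the closed-form differential entropy of the lognormal distribution, consistent with equations (5) to (7) and (10) to (12) as reconstructed above; all constants are illustrative placeholders rather than values from the patent, and I(τ) is clamped to zero at or below χ as a practical choice:

```python
import numpy as np

ALPHA, BETA = 1.0, 1.0             # relative-motion prior, equation (3)
LAMBDA, GAMMA = 1.0, 1.0           # background-motion noise width, eq. (6)
CHI, KAPPA, DELTA = 1.0, 1.0, 1.0  # temporal-duration prior, equation (8)
XI, ETA = 1.0, 1.0                 # residual noise width, equation (11)

def I_relative_motion(v_r):
    # Equation (4): self-information of the relative motion.
    return ALPHA * np.log(v_r) - np.log(BETA)

def U_background_motion(v_g, c):
    # Equations (6)-(7): lognormal entropy of the background-motion noise.
    sigma = LAMBDA / c ** GAMMA
    return np.log(v_g) + 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)

def I_duration(tau):
    # Equation (9): self-information of the temporal duration.
    return KAPPA * np.log(tau - CHI) - np.log(DELTA) if tau > CHI else 0.0

def U_residual(eps, la):
    # Equations (11)-(12): lognormal entropy of the residual-driven noise.
    sigma = XI / la ** ETA
    return np.log(eps) + 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)
```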
2. JND model based on space-time domain
Different background luminances in an image directly affect visual perceptual sensitivity, so the method considers luminance adaptation according to the HVS spatial perception characteristics. Subjective experiments show that the HVS is insensitive to darker or brighter background regions and highly sensitive to regions of medium luminance; the luminance adaptation threshold can be calculated as
LA(x_c) = 17·(1 - √(B(x_c)/127)) + 3 if B(x_c) ≤ 127, and LA(x_c) = (3/128)·(B(x_c) - 127) + 3 otherwise (13)
where B(x_c) is the background luminance of pixel x_c.
According to the Bayesian-brain theory of perception, when the human eye views an image it inherently predicts the image content: some spatial correlations of the image can be predicted from prior knowledge, and the image's structural information conveys its main spatial visual content, enabling an understanding of the image's spatial structure. Bayesian-brain theory indicates that the human visual system tends to minimize the structured prediction residual as far as possible. The structural uncertainty is this prediction error, and its effect on HVS visual perception is the so-called pattern masking effect. Since the HVS is very sensitive both to luminance variations and to structural-prediction uncertainty, the pattern masking effect function is derived from the luminance contrast and the structural uncertainty:
PM(x_c) = f(E(x_c), H_U(x_c)) (14)
where PM(x_c) is the visibility threshold caused by pattern masking, H_U(x_c) is the structural uncertainty based on an information-entropy measure, and E(x_c) is a parameter describing the luminance contrast, measured by the detected edge height.
The spatial-domain JND threshold JND_S is determined by the luminance adaptation threshold LA and the pattern-masking threshold PM as
JND_S = LA + PM - C_gr·min{LA, PM} (15)
where C_gr is 0.3. Here LA and PM may also be measured in other ways.
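A sketch of this spatial fusion follows. The luminance-adaptation curve implements the piecewise form reconstructed for equation (13); the pattern-masking map PM is taken as an input, since equation (14) fixes it only as some function f of the edge height E(x_c) and the entropy-based structural uncertainty H_U(x_c) without giving f explicitly:

```python
import numpy as np

C_GR = 0.3  # overlap-reduction constant of equation (15)

def luminance_adaptation(bg):
    # Equation (13): low thresholds at medium background luminance,
    # higher thresholds in dark and bright regions; bg is in [0, 255].
    bg = np.asarray(bg, dtype=np.float64)
    dark = 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0
    bright = 3.0 / 128.0 * (bg - 127.0) + 3.0
    return np.where(bg <= 127, dark, bright)

def spatial_jnd(bg, pm):
    # Equation (15): sum the two thresholds minus part of their overlap.
    la = luminance_adaptation(bg)
    return la + pm - C_GR * np.minimum(la, pm)
```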
The four temporal feature parameters mapped to the same dimension are fused to obtain the temporal JND modulation weight coefficient z:
[equation (16), rendered as an image in the source, fuses I(v_r), U(v_g), I(τ), and U(ε) into z]
where μ, θ are constants greater than 0.
On the basis of the spatial-domain JND threshold, the temporal JND modulation weight coefficient z is applied to obtain the space-time domain JND model JND_ST; the overall block diagram is shown in fig. 3.
JND_ST = (1 + z)·JND_S (17).
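A minimal end-to-end sketch of the three steps follows. Because the patent renders equation (16) only as an image, the linear fusion below is purely a hypothetical stand-in, with assumed constants MU and THETA; it follows the qualitative reading that uncertainty raises distortion tolerance while saliency lowers it:

```python
import numpy as np

MU, THETA = 0.5, 0.5  # assumed fusion constants standing in for eq. (16)

def temporal_weight(i_vr, u_vg, i_tau, u_eps):
    # Hypothetical fusion of the four homogenized parameters into z:
    # uncertainty terms (U) raise the threshold, saliency terms (I) lower
    # it. Clipped so that (1 + z) in equation (17) stays positive.
    z = THETA * (u_vg + u_eps) - MU * (i_vr + i_tau)
    return np.clip(z, -0.9, None)

def spatiotemporal_jnd(jnd_s, z):
    # Equation (17): modulate the spatial threshold by the temporal weight.
    return (1.0 + z) * jnd_s
```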

Claims (9)

1. A method for measuring space-time domain adaptive just noticeable distortion, characterized by comprising the following steps:
step one, fusing the pattern masking effect PM and the luminance adaptation LA to obtain the spatial-domain JND threshold JND_S;
step two, fusing four temporal perceptual parameters, namely the self-information of the relative motion I(v_r), the information entropy of the background motion U(v_g), the self-information of the temporal duration I(τ), and the information entropy of the residual fluctuation intensity U(ε), to obtain the temporal JND modulation weight coefficient z;
step three, on the basis of the spatial-domain JND threshold JND_S, applying the temporal JND modulation weight coefficient z to obtain the space-time domain JND threshold JND_ST.
2. The space-time domain adaptive just noticeable distortion measurement method of claim 1, characterized in that:
in step two, the four temporal perceptual parameters are mapped into a parameter space measured by information quantity; visual perception saliency is measured with self-information and visual perception uncertainty with information entropy, so that a homogenized measurement of the different parameters is realized on a uniform information-quantity scale.
3. The space-time domain adaptive just noticeable distortion measurement method of claim 2, characterized in that the homogenized measure based on the self-information of the relative motion I(v_r) is as follows:
the prior probability distribution of the relative motion can be expressed as
p(v_r) = β·v_r^(-α) (3)
where the model parameters α, β are constants greater than 0, and the visual perception saliency is represented by calculating the self-information of the relative motion:
I(v_r) = -log p(v_r) = α·log v_r - log β (4).
4. The space-time domain adaptive just noticeable distortion measurement method of claim 2, characterized in that the homogenized measure based on the information entropy of the background motion U(v_g) is as follows:
a lognormal distribution describes the background motion, acting as a stimulus, as equivalent noise:
p(m|v_g) = (1/(√(2π)·σ·m))·exp(-(ln m - ln v_g)²/(2σ²)) (5)
where m is the equivalent noise perceived while observing the video, and the width parameter σ of the Gaussian curve is constant with respect to v_g but inversely proportional to the contrast threshold c:
σ = λ/c^γ (6)
where the model parameters λ, γ are constants greater than 0; the visual perception uncertainty caused by the background motion is represented by calculating its information entropy:
U(v_g) = log v_g + (1/2)·log(2πe·σ²) (7).
5. The space-time domain adaptive just noticeable distortion measurement method of claim 2, characterized in that the homogenized measure for the temporal duration is as follows:
by fitting a statistical-analysis function, the prior probability distribution of the temporal duration is constructed as
p(τ) = δ·(τ - χ)^(-κ) for τ > χ, and p(τ) = 0 for τ < χ (8)
where the model parameters χ, κ, δ are constants greater than 0; based on this probability distribution, the self-information of the temporal duration is calculated to represent the visual perception saliency:
I(τ) = -log p(τ) = κ·log(τ - χ) - log δ (9).
6. The space-time domain adaptive just noticeable distortion measurement method of claim 2, characterized in that the homogenized measure of the residual fluctuation intensity ε is as follows:
for the fluctuation intensity ε of the prediction residual on the motion trajectory, the uncertainty is equivalent to perceiving an equivalent noise m while observing the video, and the residual fluctuation intensity, acting as a stimulus, is described as equivalent noise by a lognormal distribution:
p(m|ε) = (1/(√(2π)·σ·m))·exp(-(ln m - ln ε)²/(2σ²)) (10)
where the width parameter σ is a constant inversely related to the luminance adaptation threshold LA:
σ = ξ/LA^η (11)
where ξ and η are constants greater than 0; the visual perception uncertainty it causes is represented by calculating the information entropy of the residual fluctuation intensity:
U(ε) = log ε + (1/2)·log(2πe·σ²) (12).
7. The space-time domain adaptive just noticeable distortion measurement method of claim 1, characterized in that:
in step one, the HVS is insensitive to darker or brighter background regions and highly sensitive to background regions of medium luminance, and the luminance adaptation threshold may be calculated as
LA(x_c) = 17·(1 - √(B(x_c)/127)) + 3 if B(x_c) ≤ 127, and LA(x_c) = (3/128)·(B(x_c) - 127) + 3 otherwise (13)
where B(x_c) is the background luminance of pixel x_c;
the pattern masking effect function is derived from the luminance contrast and the structural uncertainty:
PM(x_c) = f(E(x_c), H_U(x_c)) (14)
where PM(x_c) is the visibility threshold caused by pattern masking, H_U(x_c) is the structural uncertainty based on an information-entropy measure, and E(x_c) is a parameter describing the luminance contrast, measured by the detected edge height;
the spatial-domain JND threshold JND_S is determined by the luminance adaptation threshold LA and the pattern-masking threshold PM as
JND_S = LA + PM - C_gr·min{LA, PM} (15)
where C_gr is 0.3.
8. The space-time domain adaptive just noticeable distortion measurement method of claim 2, characterized in that:
in step two, the four temporal feature parameters mapped to the same dimension are fused to obtain the temporal JND modulation weight coefficient z:
[equation (16), rendered as an image in the source, fuses I(v_r), U(v_g), I(τ), and U(ε) into z]
where μ, θ are constants greater than 0.
9. The space-time domain adaptive just noticeable distortion measurement method of claim 1, characterized in that:
in step three, on the basis of the spatial-domain JND threshold, the temporal JND modulation weight coefficient z is applied to obtain the space-time domain JND model JND_ST:
JND_ST = (1 + z)·JND_S (17).
CN201811016478.8A 2018-08-31 2018-08-31 Space-time domain self-adaptive just noticeable distortion measurement method Active CN108965879B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811016478.8A CN108965879B (en) 2018-08-31 2018-08-31 Space-time domain self-adaptive just noticeable distortion measurement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811016478.8A CN108965879B (en) 2018-08-31 2018-08-31 Space-time domain self-adaptive just noticeable distortion measurement method

Publications (2)

Publication Number Publication Date
CN108965879A (en) 2018-12-07
CN108965879B (en) 2020-08-25

Family

ID=64475464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811016478.8A Active CN108965879B (en) 2018-08-31 2018-08-31 Space-time domain self-adaptive just noticeable distortion measurement method

Country Status (1)

Country Link
CN (1) CN108965879B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872302A (en) * 2019-01-15 2019-06-11 宁波大学科学技术学院 A kind of natural image JND threshold value estimation method based on rarefaction representation
CN110062236B (en) * 2019-05-10 2021-04-23 上海大学 Code rate allocation method, system and medium based on just-perceivable distortion of space-time domain
CN112967229B (en) * 2021-02-03 2024-04-26 杭州电子科技大学 Method for calculating just-perceived distortion threshold based on video perception characteristic parameter measurement
CN113361599B (en) * 2021-06-04 2024-04-05 杭州电子科技大学 Video time domain saliency measurement method based on perception characteristic parameter measurement
CN114024891B (en) * 2021-09-30 2023-05-26 哈尔滨工程大学 Naive Bayesian-assisted contact graph routing method and storage medium


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010040013A1 (en) * 2008-10-02 2010-04-08 Apple Inc. Quality metrics for coded video using just noticeable difference models
CN101795411A (en) * 2010-03-10 2010-08-04 宁波大学 Analytical method for minimum discernable change of stereopicture of human eyes
CN103458265A (en) * 2013-02-01 2013-12-18 深圳信息职业技术学院 Method and device for evaluating video quality
CN103607589A (en) * 2013-11-14 2014-02-26 同济大学 Level selection visual attention mechanism-based image JND threshold calculating method in pixel domain
CN105635743A (en) * 2015-12-30 2016-06-01 福建师范大学 Minimum noticeable distortion method and system based on saliency detection and total variation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Depth Masking Based Binocular Just-Noticeable-Distortion Model; Kai Zheng et al.; 2018 IEEE International Conference on Multimedia & Expo Workshops (ICMEW); 2018-07-27; pp. 1-5 *
An improved JND model based on joint pixel-domain and transform-domain estimation; Zheng Mingkui et al.; Journal of Fuzhou University (Natural Science Edition); 2014-07-08; vol. 42, no. 2, pp. 225-230 *
A fast JND calculation algorithm based on region division; Zhang Guanjun et al.; 2012 International Conference on Internet Technology and Applications; 2012-01-01; pp. 393-396 *

Also Published As

Publication number Publication date
CN108965879A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
CN108965879B (en) Space-time domain self-adaptive just noticeable distortion measurement method
US6670963B2 (en) Visual attention model
JP2009273127A (en) Method for assessing quality of distorted version of frame sequence
CN108198155B (en) Self-adaptive tone mapping method and system
US10721477B2 (en) Techniques for predicting perceptual video quality based on complementary perceptual quality models
JP2017537494A (en) Dual-end metadata for judder visibility control
JP5107342B2 (en) Image improvement to increase accuracy smoothing characteristics
US11700383B2 (en) Techniques for modeling temporal distortions when predicting perceptual video quality
WO2020098751A1 (en) Video data encoding processing method and computer storage medium
Barri et al. A locally adaptive system for the fusion of objective quality measures
Di Claudio et al. A detail-based method for linear full reference image quality prediction
Kocić et al. Image quality parameters: A short review and applicability analysis
Petrović et al. Evaluation of image fusion performance with visible differences
KR20170106333A (en) Methods and apparatus for motion-based video tonal stabilization
Zelmati et al. Study of subjective and objective quality assessment of infrared compressed images
Bosse et al. A perceptually relevant shearlet-based adaptation of the PSNR
Fry et al. Bridging the gap between imaging performance and image quality measures
CN112967229B (en) Method for calculating just-perceived distortion threshold based on video perception characteristic parameter measurement
EP4107936B1 (en) Determining pixel intensity values in imaging
CN113361599B (en) Video time domain saliency measurement method based on perception characteristic parameter measurement
Zhou et al. Visual comfort prediction for stereoscopic image using stereoscopic visual saliency
Yang et al. A method of image quality assessment based on region of interest
Wang et al. PVC-STIM: Perceptual video coding based on spatio-temporal influence map
KR101829580B1 (en) Method and apparatus for quantifying visual presence
KR101069255B1 (en) Method and Apparatus For Processing Laser Speckle Image With Contrast

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant