CN111614962B - Perceptual image compression method based on region block level JND prediction - Google Patents

Perceptual image compression method based on region block level JND prediction

Info

Publication number
CN111614962B
Authority
CN
China
Prior art keywords
jnd
value
image
block
area
Prior art date
Legal status
Active
Application number
CN202010313187.6A
Other languages
Chinese (zh)
Other versions
CN111614962A (en)
Inventor
王瀚漓
田涛
Current Assignee
Tongji University
Original Assignee
Tongji University
Priority date
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202010313187.6A priority Critical patent/CN111614962B/en
Publication of CN111614962A publication Critical patent/CN111614962A/en
Application granted granted Critical
Publication of CN111614962B publication Critical patent/CN111614962B/en


Classifications

    • H Electricity; H04 Electric communication technique; H04N Pictorial communication, e.g. television
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding, characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/149 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding, characterised by the element, parameter or criterion affecting or controlling the adaptive coding: data rate or code amount at the encoder output, estimated by means of a model, e.g. a mathematical or statistical model
    • H04N19/154 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding, characterised by the element, parameter or criterion affecting or controlling the adaptive coding: measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H04N19/625 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding, using discrete cosine transform [DCT]

Abstract

The invention relates to a perceptual image compression method based on region block level JND prediction, which comprises the following steps: 1) generating region block level JND values by the Otsu threshold method according to the images in the data set and the corresponding JND information; 2) establishing a CNN-based region block level JND prediction model according to the generated region block level JND values; 3) compressing the test image at a plurality of fixed QF values to obtain a plurality of corresponding distorted images, dividing all the distorted images into a plurality of non-overlapping area blocks, predicting a JND label for each area block, and finally acquiring the final JND value of each area block by a label processing method; 4) preprocessing the test image according to the target compression QF value and the final JND value of each area block, selecting the largest perceived QF value among the area blocks as the compression parameter, and compressing the preprocessed test image by JPEG. Compared with the prior art, the method has the advantages of adaptive prediction, good compression quality, high compression efficiency and the like.

Description

Perceptual image compression method based on region block level JND prediction
Technical Field
The invention relates to the field of image compression, in particular to a perceptual image compression method based on region block level JND prediction.
Background
With the development of social networks and multimedia technologies, a huge amount of image data is generated on the Internet; according to recent statistics, Instagram users upload approximately 90 million pictures per day. Storing and transmitting these images efficiently is therefore a challenging task. Existing image and video compression standards such as JPEG, H.264 and HEVC all use PSNR and MSE as distortion measures; however, PSNR treats every pixel as equally important, which is inconsistent with the human visual system. It is therefore important to study image compression algorithms oriented to the human visual system.
Various approaches have been proposed to address this challenge, including JND-based approaches, attention-model-based approaches, and the like. Currently, image/video perceptual compression methods based on JND (Just Noticeable Distortion) are a focus of research. Existing JND models mainly fall into two categories: pixel-domain models and DCT (Discrete Cosine Transform)-domain models. Pixel-domain methods mainly consider the luminance masking effect and the contrast masking effect of the human visual system; DCT-domain JND models add a spatial contrast sensitivity function on top of the pixel-domain model. Although existing perceptual coding models can reduce perceptual redundancy in coding to a certain extent, they consider only limited visual characteristics and do not change with the quantization parameter. Recent perceptual experiments show that the human visual system perceives image quality in a staircase fashion rather than as a continuous change, and each transition point can be regarded as a JND value. However, obtaining the final JND values of a single image requires a large number of subjective experiments, which cannot be applied in practice.
Disclosure of Invention
The present invention is directed to overcoming the above-mentioned drawbacks of the prior art and providing a perceptual image compression method based on region block level JND prediction.
The purpose of the invention can be realized by the following technical scheme:
a perceptual image compression method based on region block level JND prediction comprises the following steps:
1) generating region block level JND values by the Otsu threshold method according to the images in the data set and the corresponding JND information;
2) establishing a CNN-based region block level JND prediction model according to the generated region block level JND value;
3) compressing the test image at a plurality of fixed QF values to obtain a plurality of corresponding distorted images, dividing all the distorted images into a plurality of non-overlapping area blocks, predicting a JND label for each area block, and finally acquiring the final JND value of each area block by a label processing method;
4) preprocessing the test image according to the target compression QF value and the final JND value of each area block, selecting the largest perceived QF value among the area blocks as the compression parameter, and compressing the preprocessed test image with JPEG.
The step 1) specifically comprises the following steps:
11) for a smooth region, setting the region block level JND value in the smooth region to be consistent with the image level JND value, that is:
S_I = {q_I^1, q_I^2, …, q_I^{N_I}}
q_{b_i}^k = q_I^k, b_i ∈ smooth region
wherein S_I is the image level JND value of the test image I and q_{b_i}^k is the compression parameter of the ith area block b_i in the test image I;
12) for the region with complex texture, the SSIM value of each region block is obtained under the image level JND value;
13) taking the quality difference ΔSSIM of each region block under consecutive JND values as the intensity of each region block, and adaptively determining the distorted region under the current image level JND value by the Otsu threshold method, with the region block as the basic unit;
14) repeating steps 12) to 13) until all image level JND values of each image have been processed, generating the final region block level JND values.
In the step 13), the quality difference ΔSSIM of each region block under consecutive JND values is expressed as:
ΔSSIM_{b_i}^k = SSIM_{b_i}^{q_I^k} - SSIM_{b_i}^{q_I^{k+1}}
wherein SSIM_{b_i}^{q_I^k} is the SSIM value of the ith area block b_i in the test image I when the compression parameter is q_I^k.
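For illustration, the following Python sketch computes the per-block SSIM values under consecutive image level compression parameters and their differences ΔSSIM. It is a minimal sketch, not taken from the original implementation: grayscale SSIM, Pillow for the JPEG round-trips, scikit-image for SSIM, and the q_I^k / q_I^{k+1} ordering shown above are all assumptions.

import io

import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

BLOCK = 64  # non-overlapping region block size

def jpeg_round_trip(gray: np.ndarray, qf: int) -> np.ndarray:
    """Compress a grayscale uint8 image with JPEG at quality factor qf and decode it."""
    buf = io.BytesIO()
    Image.fromarray(gray).save(buf, format="JPEG", quality=qf)
    buf.seek(0)
    return np.asarray(Image.open(buf).convert("L"))

def block_ssim_map(ref: np.ndarray, dist: np.ndarray) -> np.ndarray:
    """SSIM of every non-overlapping 64x64 block."""
    rows, cols = ref.shape[0] // BLOCK, ref.shape[1] // BLOCK
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            r = ref[i * BLOCK:(i + 1) * BLOCK, j * BLOCK:(j + 1) * BLOCK]
            d = dist[i * BLOCK:(i + 1) * BLOCK, j * BLOCK:(j + 1) * BLOCK]
            out[i, j] = ssim(r, d, data_range=255)
    return out

def delta_ssim_maps(ref: np.ndarray, image_level_jnd: list) -> list:
    """Quality difference of every block between consecutive image level JND parameters."""
    maps = [block_ssim_map(ref, jpeg_round_trip(ref, q)) for q in image_level_jnd]
    return [maps[k] - maps[k + 1] for k in range(len(maps) - 1)]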
The step 2) specifically comprises the following steps:
21) sorting the generated JND values of the region block level from small to large, and marking training label values after classification to form a data set;
22) 90% of the area blocks in the data set were used for training and 10% for testing;
23) and training a region block level JND prediction model by adopting an AlexNet network.
In the step 23), in the training process of the region block level JND prediction model, the image block size is set to 64 × 64, the initial learning rate is set to 0.001, the maximum number of iterations is set to 250000, and the batch size is set to 64.
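A minimal PyTorch sketch of the stated training configuration follows. The original only specifies a "classical AlexNet network", 64 × 64 blocks, learning rate 0.001, 250000 iterations and batch size 64; the use of torchvision's AlexNet, the 43 output classes, and the SGD optimizer with momentum are assumptions, and train_block_jnd_model is a hypothetical helper name.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import alexnet

def train_block_jnd_model(blocks: torch.Tensor, labels: torch.Tensor,
                          max_iters: int = 250000) -> nn.Module:
    """blocks: (N, 3, 64, 64) float tensor; labels: (N,) long tensor with values in [0, 42]."""
    model = alexnet(num_classes=43)  # trained from scratch on the region blocks
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    loader = DataLoader(TensorDataset(blocks, labels), batch_size=64, shuffle=True)

    iteration = 0
    model.train()
    while iteration < max_iters:
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            iteration += 1
            if iteration >= max_iters:
                break
    return model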
The step 3) specifically comprises the following steps:
31) compressing the test image under a plurality of fixed QF values to obtain a plurality of distorted images;
32) dividing all distorted images into a plurality of non-overlapping area blocks, and predicting a JND label of each area block by adopting an area block level JND prediction model;
33) when the predicted JND labels of the area blocks at the same position in the plurality of distorted images satisfy the judgment formula
L(b, q_i) ≤ L(b, q_j) for all q_i < q_j
step 34) is performed; otherwise step 35) is performed, wherein q_i and q_j are QF values, b is an area block, and L(·) is the predicted JND label;
34) the QF value corresponding to the JND label is the JND value of the current area block;
35) sorting the JND label values from small to large so that they satisfy the judgment formula in step 33), and acquiring the corresponding QF value as the JND value of the current area block.
The fixed QF values are 9 in total, 15, 20, 25, 30, 35, 40, 45, 50 and 55, and the non-overlapping area blocks are 64 × 64 in size.
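The following sketch illustrates one possible reading of the label processing in steps 31)-35): the block is classified in each of the nine distorted images, the labels are forced to be non-decreasing by sorting when the judgment formula is violated, and a QF value is then read off. The monotonicity interpretation of the judgment formula and the rule that maps the processed labels back to a QF value are assumptions; block_jnd_from_labels is a hypothetical helper name.

import numpy as np

FIXED_QF = [15, 20, 25, 30, 35, 40, 45, 50, 55]

def block_jnd_from_labels(labels) -> int:
    """labels[k] is the JND label predicted for this block in the image compressed
    at FIXED_QF[k]; returns the block's final JND value (a QF value)."""
    labels = np.asarray(labels)
    if not np.all(np.diff(labels) >= 0):   # judgment formula violated
        labels = np.sort(labels)           # re-order the labels from small to large
    # assumed mapping rule: take the QF at which the label first reaches its
    # final (largest) value for this block
    k = int(np.argmax(labels == labels.max()))
    return FIXED_QF[k]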
The step 4) specifically comprises the following steps:
41) obtaining the predicted JND values of the ith area block b_i of the test image I:
S_{b_i} = {q_{b_i}^1, q_{b_i}^2, …, q_{b_i}^{N_{b_i}}}
wherein q_{b_i}^k is the kth JND value and N_{b_i} is the total number of predicted JND values;
42) presetting the target compression QF value as q_tar; the finally adopted perceived QF value q_{b_i} is:
q_{b_i} = q_{b_i}^1 if q_tar ≤ q_{b_i}^1; q_{b_i} = q_{b_i}^k if q_{b_i}^{k-1} < q_tar ≤ q_{b_i}^k (1 < k ≤ N_{b_i}); q_{b_i} = q_tar if q_tar > q_{b_i}^{N_{b_i}}
wherein q_{b_i}^1 is the 1st JND value and q_{b_i}^{N_{b_i}} is the N_{b_i}th JND value;
43) selecting the largest area block perceived QF value as the image level compression parameter q_I, expressed as:
q_I = max{ q_{b_i} : 1 ≤ i ≤ N_{B_I} }
wherein N_{B_I} is the number of area blocks in the test image I;
44) if the JND value of the area block is smaller than the compression parameter of the image level, preprocessing the image;
45) after all DCT coefficients are preprocessed, performing an inverse DCT transform to generate the preprocessed test image, and compressing it with standard JPEG using the image level compression parameter q_I obtained in step 43) to obtain the compressed image.
In the step 43), if the predicted JND value of some area blocks is too small, it is processed as:
q_{b_i} = q_I
in the step 44), the preprocessing the image specifically includes:
Figure BDA0002458616380000049
wherein the content of the first and second substances,
Figure BDA00024586163800000410
is the DCT coefficient at the quantized position (m, n).
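The selection of the perceived QF value in steps 42)-43) can be sketched as follows. The piecewise rule (take the first predicted JND value that is not below the target QF, otherwise the target itself) and the treatment of smooth blocks are assumed readings of the formulas above; perceived_qf and image_level_qf are hypothetical helper names.

def perceived_qf(jnd_values, q_tar: int) -> int:
    """jnd_values: predicted JND values of one block, sorted from small to large."""
    for q in jnd_values:
        if q_tar <= q:       # first JND level that is not below the target QF
            return q
    return q_tar             # target exceeds the largest predicted JND value

def image_level_qf(block_jnds, q_tar: int, smooth_flags):
    """block_jnds: one list of predicted JND values per block;
    smooth_flags: True for blocks in smooth regions."""
    per_block = [perceived_qf(j, q_tar) for j in block_jnds]
    q_img = max(per_block)   # image level compression parameter (step 43)
    # quality protection: smooth blocks keep the image level QF so that they
    # are not pre-quantized more coarsely in step 44
    per_block = [q_img if s else q for q, s in zip(per_block, smooth_flags)]
    return q_img, per_block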
Compared with the prior art, the invention has the following advantages:
Firstly, adaptive prediction: the method does not require subjective experiments; it adaptively predicts region block level JND information from the content of the input test image and uses it for perceptual coding.
Secondly, the compression quality is good: the method avoids the subjective quality degradation caused by an individually low predicted JND value in some regions, protects the quality of regions with simple texture, and obtains subjective quality similar to JPEG while improving compression efficiency.
Thirdly, the compression efficiency is high: the method adaptively predicts JND information from the content of the region blocks of the test image and improves image compression efficiency. With 10 images selected from the Kodak data set as test images and three QF values of 75, 50 and 30 (from high to low), the bit rate is reduced by 43.91%, 18.76% and 13.11% respectively compared with the JPEG algorithm under similar subjective perceptual quality, and the compression efficiency exceeds that of other similar models.
Drawings
Fig. 1 is a flow chart of model-based training and perceptual coding in the present invention, where fig. 1a is a flow chart of model-based training and fig. 1b is a flow chart of perceptual coding.
FIG. 2 is a schematic diagram of a selected test image.
FIG. 3 is a flow chart of the method of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
Examples
As shown in fig. 1, the present invention provides a perceptual image compression method based on region block level JND prediction, comprising the steps of:
1) calculating region block level JND values by the Otsu threshold method according to the images in the data set and the corresponding JND information, comprising the following steps:
11) suppose picture I; its corresponding picture level JND values are:
S_I = {q_I^1, q_I^2, …, q_I^{N_I}}
wherein N_I is the number of picture level JND values and q_I^k is the kth compression parameter;
12) for a smooth region, the JND value of each region block b_i in it is considered to be consistent with the image level JND value, that is:
q_{b_i}^k = q_I^k
13) for regions with complex texture, the quality difference of each region block under consecutive JND values is calculated, expressed as:
ΔSSIM_{b_i}^k = SSIM_{b_i}^{q_I^k} - SSIM_{b_i}^{q_I^{k+1}}
wherein SSIM_{b_i}^{q_I^k} is the SSIM value of the ith region block b_i when the compression parameter is q_I^k;
14) taking the quality difference in step 13) as the intensity of each region block, selecting the optimal threshold with the Otsu threshold algorithm, and adaptively dividing the image into two categories, namely distorted regions and non-distorted regions;
15) repeating the operation of step 14) until all image level JND values of each image have been processed, generating the final region block level JND information, namely the region block level JND values;
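A minimal sketch of steps 14)-15), assuming scikit-image's Otsu threshold is applied to the per-block ΔSSIM intensities and that a block receives the first image level JND at which it is marked as distorted (this assignment rule is an assumption):

import numpy as np
from skimage.filters import threshold_otsu

def split_blocks(delta_map: np.ndarray) -> np.ndarray:
    """Boolean mask of blocks judged distorted at the current image level JND,
    obtained by Otsu's threshold on the per-block ΔSSIM intensities."""
    t = threshold_otsu(delta_map)
    return delta_map > t     # larger quality drop -> treated as distorted

def block_level_jnd(delta_maps, image_level_jnd) -> np.ndarray:
    """Assign each block the first image level JND at which it is marked distorted;
    blocks never marked keep the last image level JND (assumed rule)."""
    jnd = np.full(delta_maps[0].shape, image_level_jnd[-1])
    assigned = np.zeros(delta_maps[0].shape, dtype=bool)
    for k, dmap in enumerate(delta_maps):
        mask = split_blocks(dmap) & ~assigned
        jnd[mask] = image_level_jnd[k]
        assigned |= mask
    return jnd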
2) establishing a CNN-based region block level JND prediction model according to the generated region block level JND information;
21) sorting the generated JND values of the region block level from small to large, dividing the JND values into 43 classes in total, and setting the training label value to be 0 to 42;
22) 90% of the area blocks in the data set were used for training and 10% for testing;
23) training the region block level JND prediction model with the classical AlexNet network; during training, the image block size is 64 × 64, the initial learning rate is set to 0.001, the maximum number of iterations is 250000, the batch size is 64, and the test accuracy reaches 89.52%.
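Steps 21) and 22) can be sketched as follows, assuming the block level JND values are plain integers whose distinct sorted values define the 43 classes; build_dataset is a hypothetical helper name.

import numpy as np

def build_dataset(blocks: np.ndarray, block_jnd: np.ndarray, seed: int = 0):
    """blocks: (N, 64, 64, 3) image patches; block_jnd: (N,) JND value of each patch."""
    values = np.unique(block_jnd)                    # distinct JND values in ascending order
    label_of = {v: k for k, v in enumerate(values)}  # training labels 0..42
    labels = np.array([label_of[v] for v in block_jnd])

    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(blocks))
    cut = int(0.9 * len(blocks))                     # 90% training / 10% testing
    train, test = idx[:cut], idx[cut:]
    return (blocks[train], labels[train]), (blocks[test], labels[test])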
3) compressing the test image at 9 fixed QF values (quality factors) to obtain 9 distorted images, dividing the 9 distorted images into 64 × 64 non-overlapping area blocks, predicting a JND label for each area block, and finally obtaining the final JND value of each area block with the designed label processing method:
31) compressing the test image under 9 fixed QF values to obtain 9 distorted images, wherein the QF values are respectively 15, 20, 25, 30, 35, 40, 45, 50 and 55;
32) dividing the 9 distorted images into 64 × 64 non-overlapping area blocks, and predicting a JND label for each area block;
33) if the predicted JND labels of the co-located area blocks of the 9 distorted images satisfy
L(b, q_i) ≤ L(b, q_j) for all q_i < q_j
the QF value corresponding to the JND label is taken as the JND value of the current area block;
34) if the predicted JND labels of the co-located area blocks of the 9 distorted images do not satisfy the formula in step 33), sorting the JND label values from small to large so that they satisfy it, and then obtaining the corresponding QF value;
4) preprocessing the test image according to the target compression QF value and the JND value of each area block, selecting the maximum area block JND value as the compression parameter, and compressing the preprocessed test image with JPEG, which specifically comprises the following steps:
41) assuming the test image is I, the predicted JND values of the ith area block are:
S_{b_i} = {q_{b_i}^1, q_{b_i}^2, …, q_{b_i}^{N_{b_i}}}
wherein q_{b_i}^k represents the kth JND value and N_{b_i} represents the number of JND values;
42) assuming the preset target compression QF value is q_tar, the finally adopted perceived QF value q_{b_i} can be expressed as:
q_{b_i} = q_{b_i}^1 if q_tar ≤ q_{b_i}^1; q_{b_i} = q_{b_i}^k if q_{b_i}^{k-1} < q_tar ≤ q_{b_i}^k (1 < k ≤ N_{b_i}); q_{b_i} = q_tar if q_tar > q_{b_i}^{N_{b_i}}
43) selecting the maximum perceived QF value of the area blocks as the image level compression QF, expressed as:
q_I = max{ q_{b_i} : 1 ≤ i ≤ N_{B_I} }
wherein N_{B_I} is the number of area blocks in the test image; if the predicted JND value of some area blocks is too small, it is processed as:
q_{b_i} = q_I
in an actual compression process, in order to guarantee the quality of the smooth region, a predicted JND value of the smooth region is set as a compressed QF value at an image level.
44) If the JND value of the region block is smaller than the compression parameter at the image level, the image is preprocessed using the following formula,
C'_{b_i}(m, n) = round( C_{b_i}(m, n) / Q_{q_{b_i}}(m, n) ) × Q_{q_{b_i}}(m, n)
wherein C'_{b_i}(m, n) represents the DCT coefficient at position (m, n) after quantization, C_{b_i}(m, n) is the original DCT coefficient, and Q_{q_{b_i}}(m, n) is the quantization step at position (m, n) corresponding to the compression parameter q_{b_i};
45) after all DCT coefficients are preprocessed, inverse DCT transform operation is carried out to generate a preprocessed test image, and then standard JPEG is used for compression, wherein the compression parameter is the image level QF value in 43).
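A sketch of the preprocessing in steps 44)-45) follows, under the assumption that "preprocessing" means quantizing and de-quantizing the 8 × 8 DCT coefficients of a block with the standard JPEG luminance table scaled to that block's perceived QF, before a single JPEG pass at the image level QF. The quantization-table scaling is the usual libjpeg rule; quant_table and preprocess_and_compress are hypothetical helper names.

import numpy as np
from PIL import Image
from scipy.fft import dctn, idctn

# baseline JPEG luminance quantization table (quality 50)
Q50 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=np.float64)

def quant_table(qf: int) -> np.ndarray:
    """Scale the baseline table to a JPEG quality factor (libjpeg rule)."""
    scale = 5000 / qf if qf < 50 else 200 - 2 * qf
    return np.clip(np.floor((Q50 * scale + 50) / 100), 1, 255)

def preprocess_and_compress(gray: np.ndarray, block_qf: np.ndarray,
                            q_img: int, out_path: str) -> None:
    """gray: (H, W) uint8 image; block_qf: perceived QF of every 64x64 block."""
    img = gray.astype(np.float64) - 128.0
    h, w = gray.shape
    for by in range(h // 64):
        for bx in range(w // 64):
            qf = int(block_qf[by, bx])
            if qf >= q_img:
                continue                      # block keeps its original detail
            q = quant_table(qf)
            for y in range(by * 64, (by + 1) * 64, 8):
                for x in range(bx * 64, (bx + 1) * 64, 8):
                    c = dctn(img[y:y + 8, x:x + 8], norm="ortho")
                    c = np.round(c / q) * q   # coarser quantize / de-quantize
                    img[y:y + 8, x:x + 8] = idctn(c, norm="ortho")
    out = np.clip(img + 128.0, 0, 255).astype(np.uint8)
    Image.fromarray(out).save(out_path, format="JPEG", quality=q_img)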
To verify the performance of the method of the present application, the following experiment was designed.
Ten test images are randomly selected from the Kodak data set, as shown in FIG. 2. The area block JND values of the 10 test images are predicted according to the method of the present invention, and the 10 test images are then compressed at given QF values of 75, 50 and 30, covering high, medium and low quality. JPEG is the comparison algorithm of the present invention; the comparison results with JPEG are shown in Table 1, Table 2 and Table 3, where
ΔBPP = (BPP_ori - BPP_Ω) / BPP_ori × 100%
DMOS = MOS_Ω - MOS_ori
wherein Ω represents the different compression methods, BPP represents the number of bits consumed per pixel, MOS represents the subjective score, and ori represents the original JPEG algorithm.
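Small helpers matching the definitions above (the bit-rate saving formula is an assumed reading of the equation, which the original reproduces only as an image):

import os

def bpp(jpeg_path: str, width: int, height: int) -> float:
    """Bits consumed per pixel by a compressed file."""
    return os.path.getsize(jpeg_path) * 8.0 / (width * height)

def bitrate_saving(bpp_ori: float, bpp_method: float) -> float:
    """Percentage of bits saved with respect to the original JPEG result."""
    return (bpp_ori - bpp_method) / bpp_ori * 100.0

def dmos(mos_method: float, mos_ori: float) -> float:
    """DMOS = MOS_Omega - MOS_ori (difference of subjective scores)."""
    return mos_method - mos_ori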
TABLE 1 Performance of the invention at a QF of 75
TABLE 2 Performance of the invention at a QF of 50
TABLE 3 Performance of the invention at a QF of 30

Claims (5)

1. A perceptual image compression method based on region block level JND prediction is characterized by comprising the following steps:
1) generating a JND value of a region block level by using an Otsu threshold method according to the images in the data set and the corresponding JND information, and specifically comprising the following steps of:
11) for a smooth region, setting the region block level JND value in the smooth region to be consistent with the image level JND value, that is:
S_I = {q_I^1, q_I^2, …, q_I^{N_I}}
q_{b_i}^k = q_I^k, b_i ∈ smooth region
wherein S_I is the image level JND value of the test image I and q_{b_i}^k is the compression parameter of the ith area block b_i in the test image I;
12) for the region with complex texture, the SSIM value of each region block is obtained under the image level JND value;
13) taking the quality difference ΔSSIM of each area block under consecutive JND values as the intensity of each area block, and adaptively determining the distorted region under the current image level JND value by the Otsu threshold method, with the area block as the basic unit;
14) repeating step 12) to step 13) until all image level JND values of each image have been processed, generating the final area block level JND values;
2) establishing a CNN-based region block level JND prediction model according to the generated region block level JND value;
3) compressing the test image under a plurality of fixed QF values to obtain a plurality of corresponding distorted images, dividing all the distorted images into a plurality of non-overlapping area blocks, predicting a JND label of each area block, and finally acquiring a final JND value of each area block by adopting a label processing method;
the step 3) specifically comprises the following steps:
31) compressing the test image under a plurality of fixed QF values to obtain a plurality of distorted images;
32) dividing all distorted images into a plurality of non-overlapping area blocks, and predicting a JND label of each area block by adopting an area block level JND prediction model;
33) when the predicted JND labels of the area blocks at the same position in the plurality of distorted images satisfy the judgment formula
L(b, q_i) ≤ L(b, q_j) for all q_i < q_j
step 34) is performed; otherwise step 35) is performed, wherein q_i and q_j are QF values, b is an area block, and L(·) is the predicted JND label;
34) the QF value corresponding to the JND label is the JND value of the current area block;
35) sorting the JND label values from small to large to enable the JND label values to meet a judgment formula in 33), and then acquiring a corresponding QF value as a JND value of the current area block;
4) according to the target compressed QF value and the final JND value of each area block, preprocessing the test image, selecting the largest of the perceived QF values of the area blocks as a compression parameter, and compressing the preprocessed test image by JPEG (joint photographic experts group), wherein the method specifically comprises the following steps:
41) obtaining the predicted JND values of the ith area block b_i of the test image I:
S_{b_i} = {q_{b_i}^1, q_{b_i}^2, …, q_{b_i}^{N_{b_i}}}
wherein q_{b_i}^k is the kth JND value and N_{b_i} is the total number of predicted JND values;
42) presetting the target compression QF value as q_tar; the finally adopted perceived QF value q_{b_i} is:
q_{b_i} = q_{b_i}^1 if q_tar ≤ q_{b_i}^1; q_{b_i} = q_{b_i}^k if q_{b_i}^{k-1} < q_tar ≤ q_{b_i}^k (1 < k ≤ N_{b_i}); q_{b_i} = q_tar if q_tar > q_{b_i}^{N_{b_i}}
wherein q_{b_i}^1 is the 1st JND value and q_{b_i}^{N_{b_i}} is the N_{b_i}th JND value;
43) selecting the largest area block perceived QF value as the image level compression parameter q_I, expressed as:
q_I = max{ q_{b_i} : 1 ≤ i ≤ N_{B_I} }
wherein N_{B_I} is the number of area blocks in the test image I; if the predicted JND value of some area blocks is too small, it is processed as:
q_{b_i} = q_I
44) if the JND value of the area block is smaller than the compression parameter of the image level, preprocessing the image;
45) after all DCT coefficients are preprocessed, performing an inverse DCT transform to generate the preprocessed test image, and compressing it with standard JPEG using the image level compression parameter q_I obtained in step 43) to obtain the compressed image, the preprocessing of the image being specifically:
C'_{b_i}(m, n) = round( C_{b_i}(m, n) / Q_{q_{b_i}}(m, n) ) × Q_{q_{b_i}}(m, n)
wherein C'_{b_i}(m, n) is the DCT coefficient at position (m, n) after quantization, C_{b_i}(m, n) is the original DCT coefficient of the area block, and Q_{q_{b_i}}(m, n) is the quantization step at position (m, n) corresponding to the compression parameter q_{b_i}.
2. The method as claimed in claim 1, wherein in step 13), the quality difference ΔSSIM of each region block under consecutive JND values is expressed as:
ΔSSIM_{b_i}^k = SSIM_{b_i}^{q_I^k} - SSIM_{b_i}^{q_I^{k+1}}
wherein SSIM_{b_i}^{q_I^k} is the SSIM value of the ith area block b_i in the test image I when the compression parameter is q_I^k.
3. The method as claimed in claim 1, wherein the step 2) comprises the following steps:
21) sorting the generated JND values of the region block level from small to large, and marking training label values after classification to form a data set;
22) 90% of the area blocks in the data set were used for training and 10% for testing;
23) and training a region block level JND prediction model by adopting an AlexNet network.
4. The method as claimed in claim 3, wherein in step 23), in the training process of the JND prediction model, the image block size is set to 64 × 64, the initial learning rate is set to 0.001, the maximum iteration number is set to 250000, and the batch size is set to 64.
5. The method as claimed in claim 1, wherein the fixed QF values are 9 in total, 15, 20, 25, 30, 35, 40, 45, 50 and 55, and the non-overlapping region blocks are 64 × 64 in size.
CN202010313187.6A 2020-04-20 2020-04-20 Perceptual image compression method based on region block level JND prediction Active CN111614962B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010313187.6A CN111614962B (en) 2020-04-20 2020-04-20 Perceptual image compression method based on region block level JND prediction


Publications (2)

Publication Number Publication Date
CN111614962A (en) 2020-09-01
CN111614962B (en) 2022-06-24

Family

ID=72197897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010313187.6A Active CN111614962B (en) 2020-04-20 2020-04-20 Perceptual image compression method based on region block level JND prediction

Country Status (1)

Country Link
CN (1) CN111614962B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112637597B (en) * 2020-12-24 2022-10-18 深圳大学 JPEG image compression method, device, computer equipment and storage medium


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009023188A2 (en) * 2007-08-15 2009-02-19 Thomson Licensing Method and apparatus for improved video encoding using region of interest (roi) information
WO2018140158A1 (en) * 2017-01-30 2018-08-02 Euclid Discoveries, Llc Video characterization for smart enconding based on perceptual quality optimization
CN107040787A (en) * 2017-03-30 2017-08-11 宁波大学 The 3D HEVC inter-frame information hidden methods that a kind of view-based access control model is perceived
CN110072104A (en) * 2019-04-12 2019-07-30 同济大学 A kind of perceptual image compression method based on image level JND prediction
CN110062234A (en) * 2019-04-29 2019-07-26 同济大学 A kind of perception method for video coding based on the just discernable distortion in region

Also Published As

Publication number Publication date
CN111614962A (en) 2020-09-01

Similar Documents

Publication Publication Date Title
Li et al. Learning convolutional networks for content-weighted image compression
US9282330B1 (en) Method and apparatus for data compression using content-based features
Gu et al. Hybrid no-reference quality metric for singly and multiply distorted images
Ki et al. Learning-based just-noticeable-quantization-distortion modeling for perceptual video coding
Zhou et al. End-to-end Optimized Image Compression with Attention Mechanism.
Liu et al. Perceptual reduced-reference visual quality assessment for contrast alteration
Tan et al. A perceptually relevant MSE-based image quality metric
Zhang et al. Practical image quality metric applied to image coding
Ma et al. Reduced-reference video quality assessment of compressed video sequences
Rehman et al. Reduced-reference SSIM estimation
CN103051901B (en) Video data coding device and method for coding video data
CN104378636B (en) A kind of video encoding method and device
CN110717868B (en) Video high dynamic range inverse tone mapping model construction and mapping method and device
CN110139112B (en) Video coding method based on JND model
Tian et al. Just noticeable difference level prediction for perceptual image compression
CN112399176B (en) Video coding method and device, computer equipment and storage medium
Zhang et al. Reduced reference image quality assessment based on statistics of edge
CN111614962B (en) Perceptual image compression method based on region block level JND prediction
Zhang et al. Perceptual video coding with block-level staircase just noticeable distortion
CN109754390B (en) No-reference image quality evaluation method based on mixed visual features
CN112767385B (en) No-reference image quality evaluation method based on significance strategy and feature fusion
CN110072104B (en) Perceptual image compression method based on image-level JND prediction
CN111385577A (en) Video transcoding method, device, computer equipment and computer readable storage medium
Gao et al. Quality constrained compression using DWT-based image quality metric
Zhao et al. Fast CU partition decision strategy based on human visual system perceptual quality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant