CN112584153A - Video compression method and device based on just noticeable distortion model - Google Patents


Info

Publication number
CN112584153A
CN112584153A (application CN202011480118.0A)
Authority
CN
China
Prior art keywords
jnd
value
image block
values
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011480118.0A
Other languages
Chinese (zh)
Other versions
CN112584153B (en)
Inventor
王妙辉
王树康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202011480118.0A priority Critical patent/CN112584153B/en
Publication of CN112584153A publication Critical patent/CN112584153A/en
Application granted granted Critical
Publication of CN112584153B publication Critical patent/CN112584153B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/625Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]

Abstract

The invention provides a video compression method and device based on a just noticeable distortion model. The method comprises: after a video frame is input, dividing the image into blocks and performing DCT (discrete cosine transform) to obtain a spectrogram corresponding to each image block, and then calculating the texture complexity T of each image block; according to the texture complexity T of each image block, on the premise of not considering quantization distortion, calculating the JND0 value corresponding to each image block; according to the JND0 value, on the premise of considering quantization distortion, calculating the JND value corresponding to each image block; and according to the JND value corresponding to each image block, performing coding preprocessing on the video frame, reducing the DCT coefficient values at the corresponding positions of the video frame, and removing visual redundancy in the video frame to obtain a coded video frame. The beneficial effects of the invention are: visual redundancy in the video is effectively removed, a larger compression gain is obtained, and the characteristics of the human visual system are better matched.

Description

Video compression method and device based on just noticeable distortion model
Technical Field
The present invention relates to a video compression method and apparatus, and more particularly, to a video compression method and apparatus based on a just noticeable distortion model.
Background
Nowadays, people's lives are inseparable from video devices such as smartphones, digital cameras and UHD televisions, and the demand for high-definition images/videos keeps growing. Massive amounts of image/video data are transmitted over the internet at all times, requiring very large bandwidth and storage space, so it has become very important to develop image/video compression techniques that reduce the size of a data file without seriously degrading its subjective visual quality. In view of this demand, a new video coding standard, called High Efficiency Video Coding (HEVC), has recently been developed and is becoming increasingly popular due to its high coding efficiency. However, further improving coding efficiency with conventional predictive video coding methods is becoming more difficult, because prediction performance saturates at a certain level of computational complexity. Perception-based image/video coding methods, which can effectively reduce perceptual redundancy that the human visual system cannot perceive, have therefore become a research hotspot in the field of image/video compression. Such methods can achieve further compression gains by eliminating perceptually redundant information in the image/video up to the Just Noticeable Distortion (JND) level.
Most existing coding methods that use a DCT-based JND model remove invisible high-frequency information at the input end before coding to improve coding efficiency, but the quantization effect is not considered in the process, and the compression gain obtained is insufficient.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: a video compression method and apparatus based on just noticeable distortion model are provided to improve coding efficiency and obtain sufficient compression gain.
In order to solve the technical problems, the invention adopts the technical scheme that: a video compression method based on just noticeable distortion model comprises the following steps,
s10, after a video frame is input, dividing the image into blocks, performing DCT (discrete cosine transformation) to obtain a spectrogram corresponding to each image block, and then calculating the texture complexity T of each image block;
S20, according to the texture complexity T of each image block, on the premise of not considering quantization distortion, calculating the JND0 value corresponding to each DCT coefficient;
S30, according to the JND0 value, on the premise of considering quantization distortion, calculating the JND value corresponding to each DCT coefficient;
and S40, according to the JND value corresponding to each image block, performing coding preprocessing on the video frame, reducing the DCT coefficient value of the corresponding position of the video frame, and removing visual redundancy in the video frame to obtain a coded video frame.
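Steps S10 through S40 above can be sketched end-to-end in a short script. This is a minimal illustration, not the patented implementation: the block size, the uniform JND map, and the soft-threshold reduction rule are all assumptions, since the patent gives the exact formulas only as images.

```python
import numpy as np
from scipy.fftpack import dct

def blockwise_dct(frame, n=8):
    """S10: split a grayscale frame into n x n blocks and 2-D DCT each block."""
    h, w = frame.shape
    blocks = frame.reshape(h // n, n, w // n, n).swapaxes(1, 2).astype(np.float64)
    return dct(dct(blocks, axis=-1, norm='ortho'), axis=-2, norm='ortho')

def reduce_coefficients(coeffs, jnd):
    """S40: shrink each DCT coefficient toward zero by its JND threshold.
    Soft-thresholding is one plausible reading of 'reducing the DCT
    coefficient value'; the patent does not spell out the exact rule."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - jnd, 0.0)

frame = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.float64)
coeffs = blockwise_dct(frame)        # shape (8, 8, 8, 8): block grid x DCT grid
jnd = np.full_like(coeffs, 2.0)      # placeholder map; S20/S30 would supply it
reduced = reduce_coefficients(coeffs, jnd)
```

The reduced coefficients would then be handed to a standard encoder such as HEVC, which is why the patent describes this as a coding preprocessing step.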
Further, in step S10, the texture complexity T is calculated by:
[Formula for the texture complexity T given as an image in the original; not reproduced here.]
wherein B represents the bit depth of the pixel values of the DCT block, N represents the size of the DCT block, and ω(i, j), C(i, j) represent the frequency and the DCT coefficient corresponding to position (i, j), respectively; χ = 1.4, ε = 0.7, φ = 2.1, λ = 0.1, c1 = 0.02, c2 = 0.1, c3 = 3.9, p0 = 2, p1 = 2, p2 = 2, p3 = 4.
Further, in step S20, the coefficient-reduction-based model used to calculate JND0 is:
[Formula for JND0(x, y) given as an image in the original; not reproduced here.]
wherein JND0(x, y) represents the JND0 value of the block numbered (x, y); a, b, c are parameter values, obtained by training the coefficient-reduction-based model.
Further, the calculation process of the a, b, c parameters is as follows:
The JND0 values of a certain number of image blocks are acquired through subjective testing, and then the a, b, c values are obtained by fitting the coefficient-reduction-based model.
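The a, b, c fitting described above amounts to nonlinear least squares against the subjective-test data. The sketch below uses a purely hypothetical model form a·T^b + c (the actual coefficient-reduction formula is given only as an image in the original) and synthetic observations in place of subjective scores.

```python
import numpy as np
from scipy.optimize import curve_fit

def jnd0_model(T, a, b, c):
    # Hypothetical stand-in for the coefficient-reduction model, whose
    # real form is shown only as an image in the patent.
    return a * np.power(T, b) + c

rng = np.random.default_rng(1)
T_obs = rng.uniform(0.5, 10.0, 50)                            # texture complexities
jnd0_obs = 3.0 * T_obs**0.5 + 1.0 + rng.normal(0, 0.05, 50)   # subjective-test stand-ins

# Fit a, b, c to the (T, JND0) observations by nonlinear least squares.
(a, b, c), _ = curve_fit(jnd0_model, T_obs, jnd0_obs, p0=(1.0, 1.0, 0.0))
```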
Further, step S30 specifically includes,
s31, a feature V is proposed to measure the compression distortion visibility of the image block, and the calculation formula is as follows:
[Formula for V given as an image in the original; not reproduced here.]
wherein Qq and Qq^(-1) represent quantization and inverse quantization with quantization step q;
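Qq and Qq^(-1) are the standard uniform quantizer and dequantizer. A minimal sketch, assuming rounding to the nearest level as in common DCT codecs:

```python
import numpy as np

def quantize(coeffs, q):
    """Q_q: uniform scalar quantization with step q."""
    return np.round(coeffs / q).astype(np.int64)

def dequantize(levels, q):
    """Q_q^(-1): map quantization indices back to reconstructed coefficients."""
    return levels.astype(np.float64) * q

c = np.array([10.2, -7.9, 3.4, 0.4])
rec = dequantize(quantize(c, 2.0), 2.0)  # round trip with step q = 2
err = np.abs(c - rec)                    # round-trip error is bounded by q / 2
```

The feature V is built from this quantization round trip; exactly how it combines the two operators is given only as an image in the original.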
s32, predicting a JND value by using a support vector regression method;
the step S32 specifically includes the steps of,
S321, inputting the JND0(x, y) and q values of the image block to train a just noticeable distortion model of the support vector machine, and outputting an alpha value, wherein q is the quantization step;
S322, traversing JND values from 0 to JND0 and calculating the V value; when the V value is closest to 1, the corresponding alpha value is taken as the label of the image block;
S323, inputting the JND0(x, y) and q values of the image block into the trained prediction model to obtain the corresponding alpha value, and finally obtaining the JND value corresponding to each DCT coefficient, JND = alpha × JND0.
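Steps S321 to S323 describe a standard support vector regression workflow: features (JND0, q), a label alpha obtained from the V-based search, and a final threshold JND = alpha × JND0. A minimal sketch with synthetic training data; the alpha relation and the hyperparameters are made up for illustration only.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
jnd0 = rng.uniform(1.0, 20.0, 200)                 # JND0(x, y) per training block
q = rng.uniform(1.0, 10.0, 200)                    # quantization steps
# In S322 the label would come from the V-based search; a made-up relation stands in.
alpha = np.clip(1.0 - 0.05 * q + 0.01 * jnd0, 0.0, 1.0)

X = np.column_stack([jnd0, q])
model = SVR(kernel='rbf', C=10.0).fit(X, alpha)    # S321: train the regressor

# S323: predict alpha for a new block and scale its JND0 value.
alpha_hat = model.predict(np.array([[10.0, 5.0]]))[0]
jnd = alpha_hat * 10.0                             # JND = alpha x JND0
```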
The invention also provides a video compression device based on just noticeable distortion model, comprising,
the texture complexity calculation module is used for dividing the image into blocks and then performing DCT (discrete cosine transformation) after a video frame is input to obtain a spectrogram corresponding to each image block and then calculating the texture complexity T of each image block;
a JND0 value calculation module, configured to calculate, according to the texture complexity T of each image block and on the premise of not considering quantization distortion, the JND0 value corresponding to each DCT coefficient;
a JND value calculation module, configured to calculate, according to the JND0 value and on the premise of considering quantization distortion, the JND value corresponding to each image block;
and the video frame coding module is used for carrying out coding preprocessing on the video frames according to the JND value corresponding to each DCT coefficient, reducing the DCT coefficient value of the corresponding position of the video frames, and removing visual redundancy in the video frames to obtain the coded video frames.
Further, in the texture complexity calculating module, the texture complexity T is calculated in the following manner:
[Formula for the texture complexity T given as an image in the original; not reproduced here.]
wherein B represents the bit depth of the pixel values of the DCT block, N represents the size of the DCT block, and ω(i, j), C(i, j) represent the frequency and the DCT coefficient corresponding to position (i, j), respectively; χ = 1.4, ε = 0.7, φ = 2.1, λ = 0.1, c1 = 0.02, c2 = 0.1, c3 = 3.9, p0 = 2, p1 = 2, p2 = 2, p3 = 4.
Further, in the JND0 value calculation module, the coefficient-reduction-based model used to calculate JND0 is:
[Formula for JND0(x, y) given as an image in the original; not reproduced here.]
wherein JND0(x, y) represents the JND0 value of the block numbered (x, y); a, b, c are parameter values, obtained by training the coefficient-reduction-based model.
Further, the device also comprises,
a model training module, configured to acquire the JND0 values of a certain number of image blocks through subjective testing, and then obtain the a, b, c values by fitting the coefficient-reduction-based model.
Further, the JND value calculating module specifically includes,
a compression distortion visibility measuring unit for providing a feature V to measure the compression distortion visibility of the image block, wherein the calculation formula is as follows:
[Formula for V given as an image in the original; not reproduced here.]
wherein Qq and Qq^(-1) represent quantization and inverse quantization with quantization step q;
the JND value prediction unit is used for predicting the JND value by using a support vector regression method;
the JND value prediction unit is specifically configured to,
inputting the JND0(x, y) and q values of the image block to train a just noticeable distortion model of the support vector machine, and outputting an alpha value, wherein q is the quantization step;
traversing JND values from 0 to JND0 and calculating the V value; when the V value is closest to 1, the corresponding alpha value is taken as the label of the image block;
inputting the JND0(x, y) and q values of the image block into the trained prediction model to obtain the corresponding alpha value, and finally obtaining the JND value corresponding to each DCT coefficient, JND = alpha × JND0.
The beneficial effects of the invention are: the quantization effect in the encoding process, ignored in traditional JND modeling, is taken into account; JND modeling is performed according to the different amounts of visual redundancy in DCT blocks of different texture complexity, with a small threshold for smooth regions and a large threshold for complex regions; visual redundancy in images/videos is effectively removed, a larger compression gain is obtained, and the characteristics of the human visual system are better matched.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flow chart of a video compression method based on just noticeable distortion model according to the present invention;
fig. 2 is a block diagram of a video compression apparatus based on just noticeable distortion model according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the description of the invention relating to "first", "second", etc. is for descriptive purposes only and is not to be construed as indicating or implying any relative importance or implicit indication of the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present invention.
Referring to fig. 1, a first embodiment of the present invention is: a video compression method based on just noticeable distortion model comprises the following steps,
s10, after a video frame is input, dividing the image into blocks, performing DCT (discrete cosine transformation) to obtain a spectrogram corresponding to each image block, and then calculating the texture complexity T of each image block;
S20, according to the texture complexity T of each image block, on the premise of not considering quantization distortion, calculating the JND0 value corresponding to each DCT coefficient;
S30, according to the JND0 value, on the premise of considering quantization distortion, calculating the JND value corresponding to each image block;
and S40, according to the JND value corresponding to each image block, performing coding preprocessing on the video frame, reducing the DCT coefficient value of the corresponding position of the video frame, and removing visual redundancy in the video frame to obtain a coded video frame.
Further, in step S10, the texture complexity T is calculated by:
[Formula for the texture complexity T given as an image in the original; not reproduced here.]
wherein B represents the bit depth of the pixel values of the DCT block, N represents the size of the DCT block, and ω(i, j), C(i, j) represent the frequency and the DCT coefficient corresponding to position (i, j), respectively; χ = 1.4, ε = 0.7, φ = 2.1, λ = 0.1, c1 = 0.02, c2 = 0.1, c3 = 3.9, p0 = 2, p1 = 2, p2 = 2, p3 = 4.
Further, in step S20, the coefficient-reduction-based model used to calculate JND0 is:
[Formula for JND0(x, y) given as an image in the original; not reproduced here.]
wherein JND0(x, y) represents the JND0 value of the block numbered (x, y); a, b, c are parameter values, obtained by training the coefficient-reduction-based model.
Further, the calculation process of the a, b, c parameters is as follows:
The JND0 values of a certain number of image blocks are acquired through subjective testing, and then the a, b, c values are obtained by fitting the coefficient-reduction-based model.
Further, step S30 specifically includes,
s31, a feature V is proposed to measure the compression distortion visibility of the image block, and the calculation formula is as follows:
[Formula for V given as an image in the original; not reproduced here.]
wherein Qq and Qq^(-1) represent quantization and inverse quantization with quantization step q;
s32, predicting a JND value by using a support vector regression method;
the step S32 specifically includes the steps of,
S321, inputting the JND0(x, y) and q values of the image block to train a just noticeable distortion model of the support vector machine, and outputting an alpha value, wherein q is the quantization step;
S322, traversing JND values from 0 to JND0 and calculating the V value; when the V value is closest to 1, the corresponding alpha value is taken as the label of the image block;
S323, inputting the JND0(x, y) and q values of the image block into the trained prediction model to obtain the corresponding alpha value, and finally obtaining the JND value corresponding to each DCT coefficient, JND = alpha × JND0.
the beneficial effect of this embodiment lies in: the quantization effect in the encoding process ignored in the traditional JND modeling is considered, the JND modeling is carried out according to different visual redundancy quantities of DCT blocks with different texture complexity, the threshold value corresponding to a smooth area is small, the threshold value of a complex area is large, the visual redundancy in pictures/videos is effectively removed, larger compression gain is obtained, and the characteristics of a human eye visual system are better met.
As shown in fig. 2, the second embodiment of the present invention is: a video compression apparatus based on just noticeable distortion model includes,
the texture complexity calculating module 10 is configured to, after a video frame is input, perform DCT transformation on an image after being partitioned to obtain a spectrogram corresponding to each image block, and then calculate a texture complexity T of each image block;
a JND0 value calculation module 20, configured to calculate, according to the texture complexity T of each image block and on the premise of not considering quantization distortion, the JND0 value corresponding to each DCT coefficient;
a JND value calculation module 30, configured to calculate, according to the JND0 value and on the premise of considering quantization distortion, the JND value corresponding to each image block;
and the video frame coding module 40 is configured to perform coding preprocessing on the video frame according to the JND value corresponding to each image block, reduce DCT coefficient values at corresponding positions of the video frame, and remove visual redundancy in the video frame to obtain a coded video frame.
Further, in the texture complexity calculating module 10, the texture complexity T is calculated in the following manner:
[Formula for the texture complexity T given as an image in the original; not reproduced here.]
wherein B represents the bit depth of the pixel values of the DCT block, N represents the size of the DCT block, and ω(i, j), C(i, j) represent the frequency and the DCT coefficient corresponding to position (i, j), respectively; χ = 1.4, ε = 0.7, φ = 2.1, λ = 0.1, c1 = 0.02, c2 = 0.1, c3 = 3.9, p0 = 2, p1 = 2, p2 = 2, p3 = 4.
Further, in the JND0 value calculation module 20, the coefficient-reduction-based model used to calculate JND0 is:
[Formula for JND0(x, y) given as an image in the original; not reproduced here.]
wherein JND0(x, y) represents the JND0 value of the block numbered (x, y); a, b, c are parameter values, obtained by training the coefficient-reduction-based model.
Further, the device also comprises,
a model training module, configured to acquire the JND0 values of a certain number of image blocks through subjective testing, and then obtain the a, b, c values by fitting the coefficient-reduction-based model.
Further, the JND value calculating module 30 specifically includes,
a compression distortion visibility measuring unit for providing a feature V to measure the compression distortion visibility of the image block, wherein the calculation formula is as follows:
[Formula for V given as an image in the original; not reproduced here.]
wherein Qq and Qq^(-1) represent quantization and inverse quantization with quantization step q;
the JND value prediction unit is used for predicting the JND value by using a support vector regression method;
the JND value prediction unit is specifically configured to,
inputting the JND0(x, y) and q values of the image block to train a just noticeable distortion model of the support vector machine, and outputting an alpha value, wherein q is the quantization step;
traversing JND values from 0 to JND0 and calculating the V value; when the V value is closest to 1, the corresponding alpha value is taken as the label of the image block;
inputting the JND0(x, y) and q values of the image block into the trained prediction model to obtain the corresponding alpha value, and finally obtaining the JND value corresponding to each DCT coefficient, JND = alpha × JND0.
It should be noted that, as can be clearly understood by those skilled in the art, the detailed implementation process of the video compression apparatus based on the just noticeable distortion model may refer to the corresponding description in the foregoing method embodiment, and for convenience and brevity of description, no further description is provided herein.
It will be understood by those skilled in the art that all or part of the flow of the method implementing the above embodiments may be implemented by a computer program instructing associated hardware. The computer program includes program instructions, and the computer program may be stored in a storage medium, which is a computer-readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program, wherein the computer program comprises program instructions. The program instructions, when executed by the processor, cause the processor to perform a video compression method based on a just noticeable distortion model as described above.
The storage medium may be a usb disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk, which can store various computer readable storage media.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general terms of their functions. Whether such functions are implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, various elements or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be merged, divided and deleted according to actual needs. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A video compression method based on just noticeable distortion model, characterized in that: the method comprises the following steps,
s10, after a video frame is input, dividing the image into blocks, performing DCT (discrete cosine transformation) to obtain a spectrogram corresponding to each image block, and then calculating the texture complexity T of each image block;
S20, according to the texture complexity T of each image block, on the premise of not considering quantization distortion, calculating the JND0 value corresponding to each image block;
S30, according to the JND0 value, on the premise of considering quantization distortion, calculating the JND value corresponding to each image block;
and S40, according to the JND value corresponding to each image block, performing coding preprocessing on the video frame, reducing the DCT coefficient value of the corresponding position of the video frame, and removing visual redundancy in the video frame to obtain a coded video frame.
2. A method for video compression based on just noticeable distortion models as in claim 1, wherein: in step S10, the texture complexity T is calculated by:
[Formula for the texture complexity T given as an image in the original; not reproduced here.]
wherein B represents the bit depth of the pixel values of the DCT block, N represents the size of the DCT block, and ω(i, j), C(i, j) represent the frequency and the DCT coefficient corresponding to position (i, j), respectively; χ = 1.4, ε = 0.7, φ = 2.1, λ = 0.1, c1 = 0.02, c2 = 0.1, c3 = 3.9, p0 = 2, p1 = 2, p2 = 2, p3 = 4.
3. The video compression method based on just noticeable distortion model as claimed in claim 2, characterized in that: in step S20, the coefficient-reduction-based model used to calculate JND0 is:
[Formula for JND0(x, y) given as an image in the original; not reproduced here.]
wherein JND0(x, y) represents the JND0 value of the block numbered (x, y); a, b, c are parameter values, obtained by training the coefficient-reduction-based model.
4. The video compression method based on just noticeable distortion model as claimed in claim 3, characterized in that: the calculation process of the a, b, c parameters is as follows:
the JND0 values of a certain number of image blocks are acquired through subjective testing, and then the a, b, c values are obtained by fitting the coefficient-reduction-based model.
5. A method for video compression based on just noticeable distortion models as in claim 4, wherein: the step S30 specifically includes the steps of,
s31, a feature V is proposed to measure the compression distortion visibility of the image block, and the calculation formula is as follows:
[Formula for V given as an image in the original; not reproduced here.]
wherein Qq and Qq^(-1) represent quantization and inverse quantization with quantization step q;
s32, predicting a JND value by using a support vector regression method;
the step S32 specifically includes the steps of,
S321, inputting the JND0(x, y) and q values of the image block to train a just noticeable distortion model of the support vector machine, and outputting an alpha value, wherein q is the quantization step;
S322, traversing JND values from 0 to JND0 and calculating the V value; when the V value is closest to 1, the corresponding alpha value is taken as the label of the image block;
S323, inputting the JND0(x, y) and q values of the image block into the trained prediction model to obtain the corresponding alpha value, and finally obtaining the JND value corresponding to each DCT coefficient, JND = alpha × JND0.
6. A video compression apparatus based on a just noticeable distortion model, characterized in that: the apparatus comprises,
the texture complexity calculation module is used for dividing the image into blocks and then performing DCT (discrete cosine transformation) after a video frame is input to obtain a spectrogram corresponding to each image block and then calculating the texture complexity T of each image block;
a JND0 value calculation module, configured to calculate, according to the texture complexity T of each image block and on the premise of not considering quantization distortion, the JND0 value corresponding to each image block;
a JND value calculation module, configured to calculate, according to the JND0 value and on the premise of considering quantization distortion, the JND value corresponding to each image block;
and the video frame coding module is used for carrying out coding preprocessing on the video frame according to the JND value corresponding to each image block, reducing the DCT coefficient value of the corresponding position of the video frame, and removing visual redundancy in the video frame to obtain a coded video frame.
7. The video compression apparatus based on a just noticeable distortion model according to claim 6, wherein in the texture complexity calculation module, the texture complexity T is calculated as follows:
[Formula image FDA0002837245780000031: calculation of the texture complexity T]
wherein b represents the bit depth of the pixel values of the DCT block, N represents the size of the DCT block, and ω(i, j) and C(i, j) respectively represent the frequency and the DCT coefficient corresponding to position (i, j); χ = 1.4, ε = 0.7, # = 2.1, λ = 0.1, c1 = 0.02, c2 = 0.1, c3 = 3.9, p0 = 2, p1 = 2, p2 = 2, p3 = 4.
8. The video compression apparatus based on a just noticeable distortion model according to claim 7, wherein in the JND0 value calculation module, the coefficient-reduction-based model used to calculate JND0 is:
[Formula image FDA0002837245780000032: coefficient-reduction-based JND0 model]
wherein JND0(x, y) represents the JND0 value of the block with index (x, y), and a, b and c are parameter values obtained by training the coefficient-reduction-based model.
9. The video compression apparatus based on a just noticeable distortion model according to claim 8, further comprising:
a model training module, configured to obtain the JND0 values of a certain number of image blocks through subjective testing, and then obtain the values of a, b and c by fitting the coefficient-reduction-based model.
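The model training module's fitting step can be sketched with a least-squares fit. Because the model form itself is an image in this record, a power law JND0 = a·T^b + c is assumed here purely for illustration, and the subjective-test data are synthetic.

```python
# Hypothetical sketch of the model training module: fit parameters a, b, c
# of an assumed JND0 model to (synthetic) subjective-test measurements.
import numpy as np
from scipy.optimize import curve_fit

def jnd0_model(t, a, b, c):
    # Assumed illustrative form; the patented model is image-only here.
    return a * np.power(t, b) + c

# Illustrative subjective-test results: texture complexity T vs. measured JND0.
t_obs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
jnd0_obs = 1.5 * np.power(t_obs, 0.6) + 2.0  # synthetic ground truth

(a, b, c), _ = curve_fit(jnd0_model, t_obs, jnd0_obs, p0=(1.0, 0.5, 1.0))
```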
10. The video compression apparatus based on a just noticeable distortion model according to claim 9, wherein the JND value calculation module specifically comprises:
a compression distortion visibility measurement unit, configured to provide a feature V for measuring the compression distortion visibility of the image block, calculated as follows:
[Formula image FDA0002837245780000041: calculation of the compression distortion visibility feature V]
wherein Qq and Qq⁻¹ represent quantization and inverse quantization with a quantization step q;
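The quantization operators Qq and Qq⁻¹ referenced by the V feature can be sketched as standard uniform quantization; this is a generic illustration, not the patent's specific quantizer.

```python
# Minimal sketch of Q_q and Q_q^-1: uniform quantization and inverse
# quantization of DCT coefficients with step q.
import numpy as np

def quantize(coeffs, q):
    return np.round(coeffs / q)  # Q_q: coefficients -> integer levels

def dequantize(levels, q):
    return levels * q            # Q_q^-1: levels -> reconstructed coefficients

coeffs = np.array([10.2, -23.7, 3.1, 0.4])
rec = dequantize(quantize(coeffs, q=8), q=8)
err = np.abs(coeffs - rec)  # quantization distortion per coefficient
```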
a JND value prediction unit, configured to predict the JND value using a support vector regression method;
the JND value prediction unit being specifically configured to:
input the JND0(x, y) and q values of the image block to train the just noticeable distortion model of the support vector machine and output an alpha value, wherein q is the quantization step;
traverse candidate JND values from 0 to JND0 and calculate the V value, wherein when the V value is closest to 1, the corresponding alpha value is taken as the label of the image block; and
input the JND0(x, y) and q values of the image block into the trained prediction model to obtain the corresponding alpha value, finally obtaining the JND value corresponding to each DCT coefficient as JND = alpha × JND0.
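The label-search step of the prediction unit (traverse JND candidates, keep the one whose V is closest to 1) can be sketched as below. The true V formula is an image in this record, so `visibility_v` is a stand-in invented for the example; only the traversal-and-select logic mirrors the claim.

```python
# Hypothetical sketch of the alpha-label search: traverse candidate JND
# values in [0, JND0], compute V for each, keep the one with V closest to 1.
import numpy as np

def visibility_v(jnd, q):
    # Stand-in visibility measure (not the patented formula): V reaches 1
    # when the injected JND distortion matches half the quantization step.
    return jnd / (q / 2.0)

def find_alpha_label(jnd0, q, steps=100):
    candidates = np.linspace(0.0, jnd0, steps)
    v = np.array([visibility_v(j, q) for j in candidates])
    best = candidates[np.argmin(np.abs(v - 1.0))]
    return best / jnd0 if jnd0 > 0 else 0.0  # alpha label in [0, 1]

alpha = find_alpha_label(jnd0=12.0, q=8.0)
```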
CN202011480118.0A 2020-12-15 2020-12-15 Video compression method and device based on just noticeable distortion model Active CN112584153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011480118.0A CN112584153B (en) 2020-12-15 2020-12-15 Video compression method and device based on just noticeable distortion model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011480118.0A CN112584153B (en) 2020-12-15 2020-12-15 Video compression method and device based on just noticeable distortion model

Publications (2)

Publication Number Publication Date
CN112584153A true CN112584153A (en) 2021-03-30
CN112584153B CN112584153B (en) 2022-07-01

Family

ID=75135387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011480118.0A Active CN112584153B (en) 2020-12-15 2020-12-15 Video compression method and device based on just noticeable distortion model

Country Status (1)

Country Link
CN (1) CN112584153B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113489983A (en) * 2021-06-11 2021-10-08 浙江智慧视频安防创新中心有限公司 Method and device for determining block coding parameters based on correlation comparison
CN114359784A (en) * 2021-12-03 2022-04-15 湖南财政经济学院 Prediction method and system for just noticeable distortion of human eyes for video compression

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104219525A (en) * 2014-09-01 2014-12-17 国家广播电影电视总局广播科学研究院 Perceptual video coding method based on saliency and just noticeable distortion
CN104378636A (en) * 2014-11-10 2015-02-25 中安消技术有限公司 Video image coding method and device
CN109525847A (en) * 2018-11-13 2019-03-26 华侨大学 A kind of just discernable distortion model threshold value calculation method
US10356404B1 (en) * 2017-09-28 2019-07-16 Amazon Technologies, Inc. Image processing using just-noticeable-difference thresholds
CN110062234A (en) * 2019-04-29 2019-07-26 同济大学 A kind of perception method for video coding based on the just discernable distortion in region

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
X. Shen et al.: "Just Noticeable Distortion Based Perceptually Lossless Intra Coding", ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) *
Li Shuzhi et al.: "Image Watermarking Algorithm Combining Texture Complexity and a JND Model", Application Research of Computers *

Also Published As

Publication number Publication date
CN112584153B (en) 2022-07-01

Similar Documents

Publication Publication Date Title
EP1938613B1 (en) Method and apparatus for using random field models to improve picture and video compression and frame rate up conversion
Zhang et al. Low-rank decomposition-based restoration of compressed images via adaptive noise estimation
CN112584153B (en) Video compression method and device based on just noticeable distortion model
WO2013143396A1 (en) Digital video quality control method and device thereof
JP2002531971A (en) Image processing circuit and method for reducing differences between pixel values across image boundaries
Wan et al. Image bit-depth enhancement via maximum a posteriori estimation of AC signal
CN104378636B (en) A kind of video encoding method and device
CN110136057B (en) Image super-resolution reconstruction method and device and electronic equipment
Singh et al. A signal adaptive filter for blocking effect reduction of JPEG compressed images
Shao et al. No-reference view synthesis quality prediction for 3-D videos based on color–depth interactions
WO2017004889A1 (en) Jnd factor-based super-pixel gaussian filter pre-processing method
Ghadiyaram et al. A no-reference video quality predictor for compression and scaling artifacts
CN111524110B (en) Video quality evaluation model construction method, evaluation method and device
US11917163B2 (en) ROI-based video coding method and device
WO2019037471A1 (en) Video processing method, video processing device and terminal
Thakur et al. Texture analysis and synthesis using steerable pyramid decomposition for video coding
CN112437301B (en) Code rate control method and device for visual analysis, storage medium and terminal
CN110740324B (en) Coding control method and related device
CN110740316A (en) Data coding method and device
Farah et al. Full-reference and reduced-reference quality metrics based on SIFT
CN115567712A (en) Screen content video coding perception code rate control method and device based on just noticeable distortion by human eyes
Akramullah et al. Video quality metrics
CN112509107A (en) Point cloud attribute recoloring method, device and encoder
US8204335B2 (en) Method and apparatus for measuring blockiness in video images
KR100843100B1 (en) Method and apparatus for reducing block noises of digital image, and encoder/decoder using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant