CN103903259A - Objective three-dimensional image quality evaluation method based on structure and texture separation - Google Patents
Abstract
The invention discloses an objective three-dimensional image quality evaluation method based on structure and texture separation. First, structure and texture separation is applied to the left viewpoint image and right viewpoint image of an original undistorted stereoscopic image and of a distorted stereoscopic image to be evaluated, yielding a structure image and a texture image for each viewpoint image. Gradient similarity is then used to evaluate the structure images of the left and right viewpoint images, structural similarity is used to evaluate the texture images of the left and right viewpoint images, and the objective image quality evaluation prediction value of the distorted stereoscopic image to be evaluated is obtained through fusion. The method has the advantage that the structure images and texture images obtained by decomposition well characterize the influence of image structure and texture information on image quality, so the evaluation result conforms better to the human visual system, effectively improving the correlation between objective evaluation results and subjective perception.
Description
Technical Field
The invention relates to an image quality evaluation method, and in particular to an objective stereoscopic image quality evaluation method based on structure-texture separation.
Background
With the rapid development of image coding and stereoscopic display technology, stereoscopic imaging has received increasingly wide attention and application and has become a current research hotspot. Stereoscopic image technology exploits the binocular parallax principle of the human eyes: the two eyes independently receive the left viewpoint image and the right viewpoint image of the same scene, and the brain fuses them to form binocular parallax, so that a stereoscopic image with depth perception and realism is appreciated. Because of the influence of the acquisition system and of storage, compression and transmission equipment, a series of distortions are inevitably introduced into stereoscopic images; and compared with a single-channel image, a stereoscopic image must guarantee the image quality of two channels simultaneously, so quality evaluation of stereoscopic images is of great significance. However, there is currently no effective objective method for evaluating stereoscopic image quality. Establishing an effective objective evaluation model of stereoscopic image quality is therefore very important.
Current objective evaluation methods for stereoscopic image quality either apply a planar image quality evaluation method directly to the stereoscopic image, or evaluate the depth perception of the stereoscopic image through the quality of a disparity map. However, the fusion process that produces the stereoscopic effect is not a simple extension of planar image quality evaluation, human eyes do not directly view the disparity map, and evaluating depth perception via disparity-map quality is not very accurate. Therefore, how to effectively simulate the binocular stereoscopic perception process during quality evaluation, and how to analyze the mechanisms by which different distortion types affect perceived stereoscopic quality so that the evaluation result reflects the human visual system more objectively, are problems that remain to be researched and solved in objective stereoscopic image quality evaluation.
Disclosure of Invention
The invention aims to provide an objective stereoscopic image quality evaluation method based on structure-texture separation that can effectively improve the correlation between objective evaluation results and subjective perception.
The technical scheme adopted by the invention to solve the technical problem is as follows: an objective stereoscopic image quality evaluation method based on structure-texture separation, whose processing proceeds as follows:
firstly, apply structure-texture separation to the left viewpoint image and right viewpoint image of the original undistorted stereoscopic image and of the distorted stereoscopic image to be evaluated, obtaining the structure image and texture image of each viewpoint image;
secondly, obtain the image-quality objective evaluation prediction value of the structure image of the left viewpoint image of the distorted stereoscopic image to be evaluated by computing the gradient similarity between each pixel in the structure image of the left viewpoint image of the original undistorted stereoscopic image and the corresponding pixel in the structure image of the left viewpoint image of the distorted stereoscopic image; similarly, obtain the prediction value of the structure image of the right viewpoint image from the right-viewpoint structure images;
thirdly, obtain the image-quality objective evaluation prediction value of the texture image of the left viewpoint image of the distorted stereoscopic image by computing the structural similarity between each 8×8 sub-block in the texture image of the left viewpoint image of the original undistorted stereoscopic image and the corresponding 8×8 sub-block in the texture image of the left viewpoint image of the distorted stereoscopic image; similarly, obtain the prediction value of the texture image of the right viewpoint image;
fourthly, fuse the prediction values of the structure images of the left and right viewpoint images of the distorted stereoscopic image to obtain the prediction value of its structure image; similarly, fuse the prediction values of the texture images of the left and right viewpoint images to obtain the prediction value of its texture image;
finally, fuse the prediction values of the structure image and the texture image of the distorted stereoscopic image to obtain the image-quality objective evaluation prediction value of the distorted stereoscopic image to be evaluated.
The objective stereoscopic image quality evaluation method based on structure-texture separation specifically comprises the following steps:
① Let S_org denote the original undistorted stereoscopic image and S_dis the distorted stereoscopic image to be evaluated. Denote the left viewpoint image of S_org as {L_org(x,y)} and its right viewpoint image as {R_org(x,y)}; denote the left viewpoint image of S_dis as {L_dis(x,y)} and its right viewpoint image as {R_dis(x,y)}, where (x,y) denotes the coordinate position of a pixel in the left and right viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of the viewpoint images, H denotes their height, and L_org(x,y), R_org(x,y), L_dis(x,y), R_dis(x,y) denote the pixel values at coordinate position (x,y) in {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)}, respectively;
② Apply structure-texture separation to {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)} to obtain the structure image and texture image of each. Denote the structure and texture images of {L_org(x,y)} as {I_{L,org}^str(x,y)} and {I_{L,org}^tex(x,y)}, those of {R_org(x,y)} as {I_{R,org}^str(x,y)} and {I_{R,org}^tex(x,y)}, those of {L_dis(x,y)} as {I_{L,dis}^str(x,y)} and {I_{L,dis}^tex(x,y)}, and those of {R_dis(x,y)} as {I_{R,dis}^str(x,y)} and {I_{R,dis}^tex(x,y)}, where each of these symbols evaluated at (x,y) denotes the pixel value at coordinate position (x,y) in the corresponding image;
③ Compute the gradient similarity between each pixel in {I_{L,org}^str(x,y)} and the corresponding pixel in {I_{L,dis}^str(x,y)}. Denote the gradient similarity between the pixel at (x,y) in {I_{L,org}^str(x,y)} and the pixel at (x,y) in {I_{L,dis}^str(x,y)} as Q_L^str(x,y):

$$Q_L^{str}(x,y)=\frac{2\times m_{L,org}^{str}(x,y)\times m_{L,dis}^{str}(x,y)+C_1}{\left(m_{L,org}^{str}(x,y)\right)^2+\left(m_{L,dis}^{str}(x,y)\right)^2+C_1},$$

where m_{L,org}^str(x,y) is the gradient magnitude at (x,y) in {I_{L,org}^str(x,y)}, obtained from the horizontal-direction and vertical-direction gradients at that pixel, m_{L,dis}^str(x,y) is the corresponding gradient magnitude in {I_{L,dis}^str(x,y)}, and C_1 is a control parameter. Then, from the gradient similarity of every pixel in {I_{L,org}^str(x,y)} to its corresponding pixel, compute the image-quality objective evaluation prediction value of {I_{L,dis}^str(x,y)}, denoted Q_L^str.
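As a rough illustration of this step, the per-pixel gradient similarity can be sketched as follows. The gradient operator and the value of C_1 are not fixed by this description; central differences (via `np.gradient`) and a small constant are assumptions here, as is mean pooling of the per-pixel map into a single score.

```python
import numpy as np

def gradient_similarity_map(ref, dis, C1=1e-4):
    """Per-pixel gradient similarity between a reference and a distorted
    structure image: Q(x,y) = (2*m_ref*m_dis + C1) / (m_ref^2 + m_dis^2 + C1)."""
    # Gradient magnitudes from horizontal/vertical central differences.
    m_ref = np.hypot(*np.gradient(ref.astype(float)))
    m_dis = np.hypot(*np.gradient(dis.astype(float)))
    q = (2.0 * m_ref * m_dis + C1) / (m_ref**2 + m_dis**2 + C1)
    return q, float(q.mean())  # per-pixel map and a mean-pooled score
```

For an undistorted input (`dis` equal to `ref`) every pixel scores exactly 1, the maximum of this measure.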
Likewise, compute the gradient similarity between each pixel in {I_{R,org}^str(x,y)} and the corresponding pixel in {I_{R,dis}^str(x,y)}. Denote the gradient similarity between the pixel at (x,y) in {I_{R,org}^str(x,y)} and the pixel at (x,y) in {I_{R,dis}^str(x,y)} as Q_R^str(x,y):

$$Q_R^{str}(x,y)=\frac{2\times m_{R,org}^{str}(x,y)\times m_{R,dis}^{str}(x,y)+C_1}{\left(m_{R,org}^{str}(x,y)\right)^2+\left(m_{R,dis}^{str}(x,y)\right)^2+C_1},$$

where m_{R,org}^str(x,y) is the gradient magnitude at (x,y) in {I_{R,org}^str(x,y)}, obtained from the horizontal-direction and vertical-direction gradients at that pixel, m_{R,dis}^str(x,y) is the corresponding gradient magnitude in {I_{R,dis}^str(x,y)}, and C_1 is a control parameter. Then, from the gradient similarity of every pixel in {I_{R,org}^str(x,y)} to its corresponding pixel, compute the image-quality objective evaluation prediction value of {I_{R,dis}^str(x,y)}, denoted Q_R^str.
④ Obtain the image-quality objective evaluation prediction value of {I_{L,dis}^tex(x,y)}, denoted Q_L^tex, by computing the structural similarity between each 8×8 sub-block of {I_{L,org}^tex(x,y)} and the corresponding 8×8 sub-block of {I_{L,dis}^tex(x,y)}.

Likewise, obtain the image-quality objective evaluation prediction value of {I_{R,dis}^tex(x,y)}, denoted Q_R^tex, by computing the structural similarity between each 8×8 sub-block of {I_{R,org}^tex(x,y)} and the corresponding 8×8 sub-block of {I_{R,dis}^tex(x,y)}.
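A minimal sketch of block-wise structural similarity over non-overlapping 8×8 sub-blocks. The exact SSIM variant and its stabilizing constants are not given in this passage; the conventional form with constants C2 and C3 is assumed below, with the per-block scores mean-pooled into one value.

```python
import numpy as np

def block_ssim(ref, dis, C2=6.5025, C3=58.5225):
    """Mean SSIM over non-overlapping 8x8 sub-blocks of two texture images."""
    H, W = ref.shape
    scores = []
    for y in range(0, H - 7, 8):
        for x in range(0, W - 7, 8):
            a = ref[y:y+8, x:x+8].astype(float)
            b = dis[y:y+8, x:x+8].astype(float)
            mu_a, mu_b = a.mean(), b.mean()
            var_a, var_b = a.var(), b.var()
            cov = ((a - mu_a) * (b - mu_b)).mean()
            # Standard SSIM combination of luminance and structure terms.
            scores.append(((2 * mu_a * mu_b + C2) * (2 * cov + C3))
                          / ((mu_a**2 + mu_b**2 + C2) * (var_a + var_b + C3)))
    return float(np.mean(scores))
```

Identical inputs give a score of exactly 1; any mean shift or structural change lowers it.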
⑤ Fuse Q_L^str and Q_R^str to obtain the image-quality objective evaluation prediction value of the structure image of S_dis, denoted Q_str: Q_str = w_s × Q_L^str + (1 - w_s) × Q_R^str, where w_s denotes the weight of Q_L^str relative to Q_R^str.

Likewise, fuse Q_L^tex and Q_R^tex to obtain the image-quality objective evaluation prediction value of the texture image of S_dis, denoted Q_tex: Q_tex = w_t × Q_L^tex + (1 - w_t) × Q_R^tex, where w_t denotes the weight of Q_L^tex relative to Q_R^tex.

⑥ Fuse Q_str and Q_tex to obtain the image-quality objective evaluation prediction value of S_dis, denoted Q: Q = w × Q_str + (1 - w) × Q_tex, where w denotes the weight of Q_str in Q.
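All three fusion stages are plain weighted averages. A sketch with illustrative numbers (the per-view scores and the weights w_s, w_t, w below are hypothetical; the actual weights are design parameters not fixed in this passage):

```python
def fuse(q_a, q_b, weight):
    """Weighted combination Q = w*Qa + (1-w)*Qb used at each fusion stage."""
    return weight * q_a + (1 - weight) * q_b

# Hypothetical per-view scores, fused into a single prediction value.
Q_str = fuse(0.92, 0.88, 0.5)   # left/right structure-image scores
Q_tex = fuse(0.81, 0.85, 0.5)   # left/right texture-image scores
Q = fuse(Q_str, Q_tex, 0.6)     # final objective prediction value
```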
In step ②, the structure image {I_{L,org}^str(x,y)} and texture image {I_{L,org}^tex(x,y)} of {L_org(x,y)} are acquired as follows:
②-1a. Define the pixel currently to be processed in {L_org(x,y)} as the current pixel;
②-2a. Denote the coordinate position of the current pixel in {L_org(x,y)} as p. Define every pixel other than the current pixel within the 21×21 neighborhood window centered on the current pixel as a neighborhood pixel. Define the block formed by the 9×9 neighborhood window centered on the current pixel as the current sub-block, denoted {I_{L,org}^p(x_2,y_2)}, and define the block formed by the 9×9 neighborhood window centered on each neighborhood pixel as a neighborhood sub-block; denote the neighborhood sub-block centered on the neighborhood pixel whose coordinate position in {L_org(x,y)} is q as {I_{L,org}^q(x_3,y_3)}. Here p ∈ Ω and q ∈ Ω, where Ω denotes the set of coordinate positions of all pixels in {L_org(x,y)}; (x_2,y_2) denotes the coordinate position of a pixel within the current sub-block, 1 ≤ x_2 ≤ 9, 1 ≤ y_2 ≤ 9, and I_{L,org}^p(x_2,y_2) denotes the pixel value at (x_2,y_2) in the current sub-block; (x_3,y_3) denotes the coordinate position of a pixel within the neighborhood sub-block, 1 ≤ x_3 ≤ 9, 1 ≤ y_3 ≤ 9, and I_{L,org}^q(x_3,y_3) denotes the pixel value at (x_3,y_3) in that neighborhood sub-block;
In step ②-2a, any pixel of a neighborhood window or sub-block whose coordinate (x,y) falls outside {L_org(x,y)} is assigned the value of the nearest image pixel: x is clamped to the range [1, W] and y to the range [1, H], and the pixel takes the value of {L_org(x,y)} at the clamped coordinate (so pixels beyond an edge replicate that edge, and pixels beyond a corner replicate the corner pixel);
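This boundary rule is replicate (clamp-to-edge) padding; with NumPy, for example:

```python
import numpy as np

img = np.arange(12).reshape(3, 4)
# mode='edge' repeats the nearest border pixel, which reproduces the
# clamping rule above (pixels beyond a corner receive the corner value).
padded = np.pad(img, pad_width=10, mode='edge')
```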
②-3a. Obtain the feature vector of each pixel in the current sub-block. Denote the feature vector of the pixel at coordinate position (x_2,y_2) in the current sub-block as X_{L,org}^p(x_2,y_2):

$$X_{L,org}^{p}(x_2,y_2)=\left[I_{L,org}^{p}(x_2,y_2),\ \left|\frac{\partial I_{L,org}^{p}(x_2,y_2)}{\partial x}\right|,\ \left|\frac{\partial I_{L,org}^{p}(x_2,y_2)}{\partial y}\right|,\ \left|\frac{\partial^{2} I_{L,org}^{p}(x_2,y_2)}{\partial x^{2}}\right|,\ \left|\frac{\partial^{2} I_{L,org}^{p}(x_2,y_2)}{\partial y^{2}}\right|,\ x_2,\ y_2\right],$$

where X_{L,org}^p(x_2,y_2) has dimension 7, the symbol "[ ]" denotes a vector, the symbol "| |" denotes absolute value, I_{L,org}^p(x_2,y_2) is the intensity value of the pixel at (x_2,y_2) in the current sub-block, the second and third components are its first partial derivatives in the horizontal and vertical directions, and the fourth and fifth components are its second partial derivatives in the horizontal and vertical directions;
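A sketch of this 7-dimensional per-pixel feature vector for a 9×9 sub-block. The derivative operator is not specified in this passage; central differences via `np.gradient` are an assumption.

```python
import numpy as np

def feature_vectors(block):
    """Return an (81, 7) array of per-pixel features for a 9x9 sub-block:
    [intensity, |dI/dx|, |dI/dy|, |d2I/dx2|, |d2I/dy2|, x2, y2]."""
    block = block.astype(float)
    d_dy, d_dx = np.gradient(block)        # first partials (vertical, horizontal)
    d2_dy2, _ = np.gradient(d_dy)          # second partial in y
    _, d2_dx2 = np.gradient(d_dx)          # second partial in x
    y2, x2 = np.mgrid[1:10, 1:10]          # in-block coordinates, 1..9
    feats = np.stack([block, np.abs(d_dx), np.abs(d_dy),
                      np.abs(d2_dx2), np.abs(d2_dy2),
                      x2.astype(float), y2.astype(float)], axis=-1)
    return feats.reshape(-1, 7)
```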
②-4a. From the feature vectors of all pixels in the current sub-block, compute the covariance matrix of the current sub-block, denoted C_{L,org}^p:

$$C_{L,org}^{p}=\frac{1}{7\times 7-1}\sum_{x_2=1}^{9}\sum_{y_2=1}^{9}\left(X_{L,org}^{p}(x_2,y_2)-\mu_{L,org}^{p}\right)\left(X_{L,org}^{p}(x_2,y_2)-\mu_{L,org}^{p}\right)^{T},$$

where C_{L,org}^p has dimension 7×7, μ_{L,org}^p denotes the mean vector of the feature vectors of all pixels in the current sub-block, and (·)^T denotes the transpose;
②-5a. Perform Cholesky decomposition on the covariance matrix C_{L,org}^p of the current sub-block, C_{L,org}^p = L L^T, and obtain the Sigma feature set of the current sub-block, denoted S_{L,org}^p:

$$S_{L,org}^{p}=\left[\sqrt{10}\times L^{(1)},\ \ldots,\ \sqrt{10}\times L^{(i')},\ \ldots,\ \sqrt{10}\times L^{(7)},\ -\sqrt{10}\times L^{(1)},\ \ldots,\ -\sqrt{10}\times L^{(i')},\ \ldots,\ -\sqrt{10}\times L^{(7)},\ \mu_{L,org}^{p}\right],$$

where L^T is the transpose of L, S_{L,org}^p has dimension 7×15, the symbol "[ ]" denotes a vector, 1 ≤ i' ≤ 7, and L^(1), L^(i') and L^(7) denote the 1st, i'-th and 7th column vectors of L;
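Steps ②-4a and ②-5a can be sketched together: covariance of the 81 feature vectors (using the 1/(7×7-1) normalization stated in the formula above), Cholesky factorization, then the 7×15 Sigma feature set. The random matrix stands in for real per-pixel feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((81, 7))   # stand-in for the 81 per-pixel feature vectors
mu = X.mean(axis=0)                # mean feature vector of the sub-block
D = X - mu
C = D.T @ D / (7 * 7 - 1)          # 7x7 covariance, normalization as stated above
Lc = np.linalg.cholesky(C)         # C = Lc @ Lc.T, Lc lower-triangular
alpha = np.sqrt(10)
# Sigma feature set: scaled columns of Lc, their negatives, and the mean vector.
S = np.hstack([alpha * Lc, -alpha * Lc, mu[:, None]])   # shape (7, 15)
```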
②-6a. Using the same operations as in steps ②-3a to ②-5a, obtain the Sigma feature set of the neighborhood sub-block formed by the 9×9 neighborhood window centered on each neighborhood pixel; denote the Sigma feature set of the neighborhood sub-block {I_{L,org}^q(x_3,y_3)} as S_{L,org}^q, which likewise has dimension 7×15;
②-7a. From the Sigma feature set S_{L,org}^p of the current sub-block and the Sigma feature sets of the neighborhood sub-blocks centered on each neighborhood pixel, obtain the structure information of the current pixel, denoted I_{L,org}^str(p):

$$I_{L,org}^{str}(p)=\frac{\displaystyle\sum_{q\in N'(p)}\exp\left(-\frac{\left\|S_{L,org}^{p}-S_{L,org}^{q}\right\|^{2}}{2\sigma^{2}}\right)\times L_{org}(q)}{\displaystyle\sum_{q\in N'(p)}\exp\left(-\frac{\left\|S_{L,org}^{p}-S_{L,org}^{q}\right\|^{2}}{2\sigma^{2}}\right)},$$

where N'(p) denotes the set of coordinate positions in {L_org(x,y)} of all neighborhood pixels in the 21×21 neighborhood window centered on the current pixel, exp() denotes the exponential function with base e (e = 2.71828183), σ denotes the standard deviation of the Gaussian function, the symbol "|| ||" denotes the Euclidean distance, and L_org(q) denotes the pixel value at coordinate position q in {L_org(x,y)};
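The weighted average in this step is a kernel-regression-style filter over the neighborhood, with weights given by Gaussian similarity of Sigma feature sets. A sketch for one pixel (σ is a free parameter here):

```python
import numpy as np

def structure_value(S_p, S_neighbors, pixel_values, sigma=0.5):
    """Weighted average of neighborhood pixel values; weights decay with the
    squared Euclidean distance between Sigma feature sets."""
    d2 = np.array([np.sum((S_p - S_q) ** 2) for S_q in S_neighbors])
    w = np.exp(-d2 / (2 * sigma ** 2))
    return float(np.sum(w * pixel_values) / np.sum(w))
```

When every neighborhood Sigma set equals the center's, all weights are equal and the result is simply the mean of the neighborhood pixel values.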
②-8a. From the structure information of the current pixel, obtain its texture information, denoted I_{L,org}^tex(p): I_{L,org}^tex(p) = L_org(p) - I_{L,org}^str(p), where L_org(p) denotes the pixel value of the current pixel;
②-9a. Take the next pixel to be processed in {L_org(x,y)} as the current pixel and return to step ②-2a, continuing until all pixels in {L_org(x,y)} have been processed, thereby obtaining the structure information and texture information of every pixel in {L_org(x,y)}. The structure information of all pixels in {L_org(x,y)} constitutes its structure image, denoted {I_{L,org}^str(x,y)}, and the texture information of all pixels constitutes its texture image, denoted {I_{L,org}^tex(x,y)}.
Using the same operations as steps ②-1a to ②-9a, which acquire the structure image {I_{L,org}^str(x,y)} and texture image {I_{L,org}^tex(x,y)} of {L_org(x,y)}, obtain the structure image {I_{R,org}^str(x,y)} and texture image {I_{R,org}^tex(x,y)} of {R_org(x,y)}, the structure image {I_{L,dis}^str(x,y)} and texture image {I_{L,dis}^tex(x,y)} of {L_dis(x,y)}, and the structure image {I_{R,dis}^str(x,y)} and texture image {I_{R,dis}^tex(x,y)} of {R_dis(x,y)}.
In step ④, the image-quality objective evaluation prediction value Q_L^tex of {I_{L,dis}^tex(x,y)} is acquired as follows:
④-1a. Partition {I_{L,org}^tex(x,y)} and {I_{L,dis}^tex(x,y)} into mutually non-overlapping sub-blocks of size 8×8. Define the k-th sub-block currently to be processed in {I_{L,org}^tex(x,y)} as the current first sub-block, and the k-th sub-block currently to be processed in {I_{L,dis}^tex(x,y)} as the current second sub-block, where k ranges over the sub-blocks and has an initial value of 1;
④-2a. Denote the current first sub-block as {f_{L_org,k}(x_4,y_4)} and the current second sub-block as {f_{L_dis,k}(x_4,y_4)}, where (x_4,y_4) denotes the coordinate position of a pixel within the two sub-blocks, 1 ≤ x_4 ≤ 8, 1 ≤ y_4 ≤ 8, f_{L_org,k}(x_4,y_4) denotes the pixel value at (x_4,y_4) in the current first sub-block, and f_{L_dis,k}(x_4,y_4) denotes the pixel value at (x_4,y_4) in the current second sub-block;
④-3a. Calculate the mean and the standard deviation of the current first sub-block f_{L_org,k}, correspondingly denoted μ_{L_org,k} and σ_{L_org,k}:

$$\mu_{L_{org},k}=\frac{\displaystyle\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{L_{org},k}(x_4,y_4)}{64},\qquad
\sigma_{L_{org},k}=\sqrt{\frac{\displaystyle\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{L_{org},k}(x_4,y_4)-\mu_{L_{org},k}\right)^{2}}{64}};$$
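As a concrete illustration, the block statistics above can be sketched in NumPy. This is only a sketch of the formula, not part of the claimed method; note the 1/64 normalization of the standard deviation matches the formula (and NumPy's default ddof = 0):

```python
import numpy as np

def block_stats(block):
    """Mean and standard deviation of an 8x8 sub-block, both normalized
    by 64 exactly as in the formula above."""
    block = np.asarray(block, dtype=np.float64)
    assert block.shape == (8, 8)
    mu = block.sum() / 64.0                          # mean over the 64 pixels
    sigma = np.sqrt(((block - mu) ** 2).sum() / 64.0)  # biased std (1/64)
    return mu, sigma
```

For example, `block_stats(np.ones((8, 8)))` returns `(1.0, 0.0)`.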
Likewise, calculate the mean and the standard deviation of the current second sub-block f_{L_dis,k}, correspondingly denoted μ_{L_dis,k} and σ_{L_dis,k}:

$$\mu_{L_{dis},k}=\frac{\displaystyle\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{L_{dis},k}(x_4,y_4)}{64},\qquad
\sigma_{L_{dis},k}=\sqrt{\frac{\displaystyle\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{L_{dis},k}(x_4,y_4)-\mu_{L_{dis},k}\right)^{2}}{64}};$$
④-4a. Calculate the structural similarity between the current first sub-block f_{L_org,k} and the current second sub-block f_{L_dis,k}, denoted Q_{L,k}^{tex}:

$$Q_{L,k}^{tex}=\frac{4\times\left(\sigma_{L_{org},k}\times\sigma_{L_{dis},k}\right)\times\left(\mu_{L_{org},k}\times\mu_{L_{dis},k}\right)+C_2}{\left(\left(\sigma_{L_{org},k}\right)^{2}+\left(\sigma_{L_{dis},k}\right)^{2}\right)\times\left(\left(\mu_{L_{org},k}\right)^{2}+\left(\mu_{L_{dis},k}\right)^{2}\right)+C_2},$$

wherein C_2 is a control parameter;
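A sketch of the per-block similarity, assuming the product form of the denominator (the SSIM-style form, which yields 1 for identical blocks) and a hypothetical value for the control parameter C2, which the text names but does not fix:

```python
import numpy as np

C2 = 0.03  # hypothetical control-parameter value; the text only names C2

def block_similarity(f_org, f_dis):
    """SSIM-style similarity between two 8x8 sub-blocks, following the
    form of the formula above."""
    mu_o, mu_d = f_org.mean(), f_dis.mean()
    sd_o, sd_d = f_org.std(), f_dis.std()   # ddof=0, i.e. the 1/64 variant
    num = 4.0 * (sd_o * sd_d) * (mu_o * mu_d) + C2
    den = ((sd_o ** 2 + sd_d ** 2) * (mu_o ** 2 + mu_d ** 2)) + C2
    return num / den
```

With identical inputs the numerator and denominator coincide, so the similarity is 1.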
④-5a. Let k = k + 1, take the next sub-block to be processed in the texture image of the left viewpoint image of the original undistorted stereo image as the current first sub-block and the next sub-block to be processed in the texture image of the left viewpoint image of the distorted stereo image to be evaluated as the current second sub-block, then return to step ④-2a and continue until all sub-blocks in both texture images have been processed, obtaining the structural similarity between each sub-block of the former and the corresponding sub-block of the latter, wherein "=" in k = k + 1 is the assignment symbol;
④-6a. According to the structural similarity between each sub-block of the texture image of the left viewpoint image of the original undistorted stereo image and the corresponding sub-block of the texture image of the left viewpoint image of the distorted stereo image to be evaluated, calculate the image quality objective evaluation prediction value of the latter texture image.
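Step ④-6a leaves the pooling of the per-block similarities unspecified at this point; a minimal sketch, assuming simple mean pooling over all non-overlapping 8 × 8 sub-blocks (the similarity function is passed in, so any per-block measure can be used):

```python
import numpy as np

def texture_quality(tex_org, tex_dis, block_similarity):
    """Tile both texture images into non-overlapping 8x8 sub-blocks and
    pool the per-block similarities; mean pooling is an assumption here."""
    H, W = tex_org.shape
    sims = []
    for y in range(0, H - H % 8, 8):
        for x in range(0, W - W % 8, 8):
            sims.append(block_similarity(tex_org[y:y+8, x:x+8],
                                         tex_dis[y:y+8, x:x+8]))
    return float(np.mean(sims))
```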
In step ④, the image quality objective evaluation prediction value for the texture images of the right viewpoint images is acquired as follows:
④-1b. Divide the texture image of the right viewpoint image of the original undistorted stereo image and the texture image of the right viewpoint image of the distorted stereo image to be evaluated into non-overlapping sub-blocks of size 8 × 8; define the current k-th sub-block to be processed in the former as the current first sub-block and the current k-th sub-block to be processed in the latter as the current second sub-block, wherein the initial value of k is 1;
④-2b. Denote the current first sub-block as f_{R_org,k} and the current second sub-block as f_{R_dis,k}, wherein (x_4, y_4) denotes the coordinate position of a pixel point in f_{R_org,k} and f_{R_dis,k}, 1 ≤ x_4 ≤ 8, 1 ≤ y_4 ≤ 8, f_{R_org,k}(x_4, y_4) denotes the pixel value of the pixel point with coordinate position (x_4, y_4) in f_{R_org,k}, and f_{R_dis,k}(x_4, y_4) denotes the pixel value of the pixel point with coordinate position (x_4, y_4) in f_{R_dis,k};
④-3b. Calculate the mean and the standard deviation of the current first sub-block f_{R_org,k}, correspondingly denoted μ_{R_org,k} and σ_{R_org,k}:

$$\mu_{R_{org},k}=\frac{\displaystyle\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{R_{org},k}(x_4,y_4)}{64},\qquad
\sigma_{R_{org},k}=\sqrt{\frac{\displaystyle\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{R_{org},k}(x_4,y_4)-\mu_{R_{org},k}\right)^{2}}{64}};$$
Likewise, calculate the mean and the standard deviation of the current second sub-block f_{R_dis,k}, correspondingly denoted μ_{R_dis,k} and σ_{R_dis,k}:

$$\mu_{R_{dis},k}=\frac{\displaystyle\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{R_{dis},k}(x_4,y_4)}{64},\qquad
\sigma_{R_{dis},k}=\sqrt{\frac{\displaystyle\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{R_{dis},k}(x_4,y_4)-\mu_{R_{dis},k}\right)^{2}}{64}};$$
④-4b. Calculate the structural similarity between the current first sub-block f_{R_org,k} and the current second sub-block f_{R_dis,k}, denoted Q_{R,k}^{tex}:

$$Q_{R,k}^{tex}=\frac{4\times\left(\sigma_{R_{org},k}\times\sigma_{R_{dis},k}\right)\times\left(\mu_{R_{org},k}\times\mu_{R_{dis},k}\right)+C_2}{\left(\left(\sigma_{R_{org},k}\right)^{2}+\left(\sigma_{R_{dis},k}\right)^{2}\right)\times\left(\left(\mu_{R_{org},k}\right)^{2}+\left(\mu_{R_{dis},k}\right)^{2}\right)+C_2},$$

wherein C_2 is a control parameter;
④-5b. Let k = k + 1, take the next sub-block to be processed in the texture image of the right viewpoint image of the original undistorted stereo image as the current first sub-block and the next sub-block to be processed in the texture image of the right viewpoint image of the distorted stereo image to be evaluated as the current second sub-block, then return to step ④-2b and continue until all sub-blocks in both texture images have been processed, obtaining the structural similarity between each sub-block of the former and the corresponding sub-block of the latter, wherein "=" in k = k + 1 is the assignment symbol;
④-6b. According to the structural similarity between each sub-block of the texture image of the right viewpoint image of the original undistorted stereo image and the corresponding sub-block of the texture image of the right viewpoint image of the distorted stereo image to be evaluated, calculate the image quality objective evaluation prediction value of the latter texture image.
Compared with the prior art, the invention has the advantages that:
1) The method of the invention recognizes that distortion causes a loss of image structure or texture information. It therefore separates the distorted stereo image into a structural image and a texture image, and uses different parameters to fuse the image quality objective evaluation prediction values of the structural images and of the texture images of the left and right viewpoint images respectively. This better reflects how the quality of the stereo image changes, so the evaluation results agree more closely with the human visual system.
2) The method evaluates the structural image with gradient similarity and the texture image with structural similarity, so the influence of the loss of structure and texture information on image quality is well characterized, and the correlation between the objective evaluation results and subjective perception is effectively improved.
Drawings
FIG. 1 is a block diagram of an overall implementation of the method of the present invention;
FIG. 2 is a scatter diagram of the difference between the objective evaluation predicted value of image quality and the mean subjective score of each distorted stereo image in Ningbo university stereo image library obtained by the method of the present invention;
fig. 3 is a scatter diagram of the difference between the objective evaluation prediction value of image quality and the average subjective score of each distorted stereo image in the LIVE stereo image library obtained by the method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The invention provides an objective three-dimensional image quality evaluation method based on structure and texture separation, the overall implementation block diagram of which is shown in figure 1. The processing flow of the method is as follows:
firstly, respectively implementing structure texture separation on a left viewpoint image and a right viewpoint image of an original undistorted stereo image and a left viewpoint image and a right viewpoint image of a distorted stereo image to be evaluated to obtain respective structure images and texture images;
secondly, obtaining an objective evaluation prediction value of the image quality of the structural image of the left viewpoint image of the distorted stereo image to be evaluated by calculating the gradient similarity between each pixel point in the structural image of the left viewpoint image of the original undistorted stereo image and the corresponding pixel point in the structural image of the left viewpoint image of the distorted stereo image to be evaluated; similarly, obtaining an objective evaluation prediction value of the image quality of the structural image of the right viewpoint image of the distorted stereo image to be evaluated by calculating the gradient similarity between each pixel point in the structural image of the right viewpoint image of the original undistorted stereo image and the corresponding pixel point in the structural image of the right viewpoint image of the distorted stereo image to be evaluated;
thirdly, obtaining an objective image quality evaluation prediction value of the texture image of the left viewpoint image of the distorted stereo image to be evaluated by calculating the structural similarity between each sub-block of size 8 × 8 in the texture image of the left viewpoint image of the original undistorted stereo image and the corresponding sub-block of size 8 × 8 in the texture image of the left viewpoint image of the distorted stereo image to be evaluated; similarly, obtaining an objective image quality evaluation prediction value of the texture image of the right viewpoint image of the distorted stereo image to be evaluated by calculating the structural similarity between each sub-block of size 8 × 8 in the texture image of the right viewpoint image of the original undistorted stereo image and the corresponding sub-block of size 8 × 8 in the texture image of the right viewpoint image of the distorted stereo image to be evaluated;
fourthly, fusing the image quality objective evaluation prediction values of the structural images of the left viewpoint image and the right viewpoint image of the distorted stereo image to be evaluated to obtain the image quality objective evaluation prediction value of the structural image of the distorted stereo image to be evaluated; similarly, fusing the image quality objective evaluation prediction values of the texture images of the left viewpoint image and the right viewpoint image of the distorted stereo image to be evaluated to obtain the image quality objective evaluation prediction value of the texture image of the distorted stereo image to be evaluated;
and finally, fusing the image quality objective evaluation predicted value of the structural image and the texture image of the distorted three-dimensional image to be evaluated to obtain the image quality objective evaluation predicted value of the distorted three-dimensional image to be evaluated.
The invention relates to a three-dimensional image quality objective evaluation method based on structure texture separation, which specifically comprises the following steps:
① Let S_org denote the original undistorted stereo image and S_dis denote the distorted stereo image to be evaluated. Denote the left viewpoint image of S_org as {L_org(x, y)}, the right viewpoint image of S_org as {R_org(x, y)}, the left viewpoint image of S_dis as {L_dis(x, y)}, and the right viewpoint image of S_dis as {R_dis(x, y)}, wherein (x, y) denotes the coordinate position of a pixel point in the left and right viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of the left and right viewpoint images, H denotes the height of the left and right viewpoint images, L_org(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {L_org(x, y)}, R_org(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {R_org(x, y)}, L_dis(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {L_dis(x, y)}, and R_dis(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {R_dis(x, y)}.
Here, the Ningbo University stereo image library and the LIVE stereo image library are used to analyze the correlation between the image quality objective evaluation prediction values of the distorted stereo images obtained in this embodiment and the mean subjective score differences. The Ningbo University stereo image library consists of 12 undistorted stereo images and, at different distortion degrees, 60 distorted stereo images under JPEG compression, 60 under JPEG2000 compression, 60 under Gaussian blur, 60 under Gaussian white noise, and 72 under H.264 coding distortion. The LIVE stereo image library consists of 20 undistorted stereo images and, at different distortion degrees, 80 distorted stereo images under JPEG compression, 80 under JPEG2000 compression, 45 under Gaussian blur, 80 under Gaussian white noise, and 80 under Fast Fading distortion.
② Perform structure-texture separation on {L_org(x, y)}, {R_org(x, y)}, {L_dis(x, y)} and {R_dis(x, y)} respectively to obtain the structural image and the texture image of each of the four images; in every one of the eight resulting images, the value at coordinate position (x, y) is the pixel value of the pixel point with coordinate position (x, y) in that image.
In this embodiment, the acquisition process in step ② of the structural image and the texture image of {L_org(x, y)} is as follows:
②-1a. Define the current pixel point to be processed in {L_org(x, y)} as the current pixel point.
②-2a. Denote the coordinate position of the current pixel point in {L_org(x, y)} as p. Define each pixel point other than the current pixel point in the 21 × 21 neighborhood window centered on the current pixel point as a neighborhood pixel point. Define the block formed by the 9 × 9 neighborhood window centered on the current pixel point as the current sub-block, and define the block formed by the 9 × 9 neighborhood window centered on each neighborhood pixel point in the 21 × 21 neighborhood window as a neighborhood sub-block; in particular, this defines the neighborhood sub-block formed by the 9 × 9 neighborhood window centered on the neighborhood pixel point with coordinate position q in {L_org(x, y)}. Here p ∈ Ω and q ∈ Ω, where Ω denotes the set of coordinate positions of all pixel points in {L_org(x, y)}; (x_2, y_2) denotes the coordinate position of a pixel point within the current sub-block, 1 ≤ x_2 ≤ 9, 1 ≤ y_2 ≤ 9, with the corresponding pixel value of the pixel point with coordinate position (x_2, y_2) in the current sub-block; and (x_3, y_3) denotes the coordinate position of a pixel point within a neighborhood sub-block, 1 ≤ x_3 ≤ 9, 1 ≤ y_3 ≤ 9, with the corresponding pixel value of the pixel point with coordinate position (x_3, y_3) in that neighborhood sub-block.
In step ②-2a, for any pixel point in the current sub-block, suppose its coordinate position in {L_org(x, y)} is (x, y). If x < 1 and 1 ≤ y ≤ H, assign to it the pixel value of the pixel point with coordinate position (1, y) in {L_org(x, y)}; if x > W and 1 ≤ y ≤ H, assign the pixel value at (W, y); if 1 ≤ x ≤ W and y < 1, assign the pixel value at (x, 1); if 1 ≤ x ≤ W and y > H, assign the pixel value at (x, H); if x < 1 and y < 1, assign the pixel value at (1, 1); if x > W and y < 1, assign the pixel value at (W, 1); if x < 1 and y > H, assign the pixel value at (1, H); if x > W and y > H, assign the pixel value at (W, H). Similarly, perform the same operation for any neighborhood pixel point as for any pixel point in the current sub-block, so that the pixel value of a neighborhood pixel point beyond the image boundary is replaced by the pixel value of the nearest boundary pixel point.
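The boundary rule above amounts to replicate ("edge") padding: every out-of-range coordinate is clamped to the nearest boundary pixel. A minimal sketch:

```python
import numpy as np

def clamp_read(img, x, y):
    """Read img at 1-based (x, y) = (column, row), clamping out-of-range
    coordinates to the nearest boundary pixel as described above."""
    H, W = img.shape
    xc = min(max(x, 1), W)
    yc = min(max(y, 1), H)
    return img[yc - 1, xc - 1]

# Equivalently, the whole image can be pre-padded once:
# padded = np.pad(img, pad_width=10, mode="edge")  # 10 covers the 21x21 window
```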
That is, in step ②-2a above: if the coordinate position of a pixel point in the block formed by the 9 × 9 neighborhood window centered on the current pixel point exceeds the boundary of {L_org(x, y)}, the pixel value of that pixel point is replaced by the pixel value of the nearest boundary pixel point; if the coordinate position of a neighborhood pixel point in the 21 × 21 neighborhood window centered on the current pixel point exceeds the boundary of {L_org(x, y)}, the pixel value of that neighborhood pixel point is replaced by the pixel value of the nearest boundary pixel point; and if the coordinate position of a pixel point in the block formed by the 9 × 9 neighborhood window centered on any neighborhood pixel point in the 21 × 21 neighborhood window exceeds the boundary of {L_org(x, y)}, the pixel value of that pixel point is replaced by the pixel value of the nearest boundary pixel point.
②-3a. Obtain the feature vector of each pixel point in the current sub-block; the feature vector of the pixel point with coordinate position (x_2, y_2) in the current sub-block is denoted X_{L,org}^{p}(x_2, y_2):

$$X_{L,org}^{p}(x_2,y_2)=\left[\,I_{L,org}^{p}(x_2,y_2),\;\left|\frac{\partial I_{L,org}^{p}(x_2,y_2)}{\partial x}\right|,\;\left|\frac{\partial I_{L,org}^{p}(x_2,y_2)}{\partial y}\right|,\;\left|\frac{\partial^{2} I_{L,org}^{p}(x_2,y_2)}{\partial x^{2}}\right|,\;\left|\frac{\partial^{2} I_{L,org}^{p}(x_2,y_2)}{\partial y^{2}}\right|,\;x_2,\;y_2\,\right],$$

wherein X_{L,org}^{p}(x_2, y_2) has dimension 7, the symbol "[ ]" is the vector representation symbol, the symbol "| |" is the absolute value symbol, I_{L,org}^{p}(x_2, y_2) denotes the density value of the pixel point with coordinate position (x_2, y_2) in the current sub-block, ∂I_{L,org}^{p}(x_2, y_2)/∂x is the first-order partial derivative of I_{L,org}^{p}(x_2, y_2) in the horizontal direction, ∂I_{L,org}^{p}(x_2, y_2)/∂y is the first-order partial derivative in the vertical direction, ∂²I_{L,org}^{p}(x_2, y_2)/∂x² is the second-order partial derivative in the horizontal direction, and ∂²I_{L,org}^{p}(x_2, y_2)/∂y² is the second-order partial derivative in the vertical direction.
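The 7-dimensional feature vector can be sketched as follows; the derivative operator is not fixed by the text, so `np.gradient` is used here as one plausible finite-difference choice:

```python
import numpy as np

def feature_vectors(sub):
    """Build the 7-dim feature vector [I, |Ix|, |Iy|, |Ixx|, |Iyy|, x2, y2]
    for every pixel of a 9x9 sub-block."""
    sub = np.asarray(sub, dtype=np.float64)
    Iy, Ix = np.gradient(sub)            # first derivatives along rows/columns
    Iyy = np.gradient(Iy, axis=0)        # second derivative, vertical
    Ixx = np.gradient(Ix, axis=1)        # second derivative, horizontal
    ys, xs = np.mgrid[1:10, 1:10]        # 1-based (x2, y2) coordinates
    feats = np.stack([sub, np.abs(Ix), np.abs(Iy),
                      np.abs(Ixx), np.abs(Iyy),
                      xs.astype(np.float64), ys.astype(np.float64)], axis=-1)
    return feats                          # shape (9, 9, 7)
```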
②-4a. According to the feature vector of each pixel point in the current sub-block, calculate the covariance matrix of the current sub-block, denoted C_{L,org}^{p}:

$$C_{L,org}^{p}=\frac{1}{7\times 7-1}\sum_{x_2=1}^{9}\sum_{y_2=1}^{9}\left(X_{L,org}^{p}(x_2,y_2)-\mu_{L,org}^{p}\right)\left(X_{L,org}^{p}(x_2,y_2)-\mu_{L,org}^{p}\right)^{T},$$

wherein C_{L,org}^{p} has dimension 7 × 7, μ_{L,org}^{p} denotes the mean vector of the feature vectors of all pixel points in the current sub-block, and (X_{L,org}^{p}(x_2, y_2) − μ_{L,org}^{p})^T is the transposed vector of (X_{L,org}^{p}(x_2, y_2) − μ_{L,org}^{p}).
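The covariance computation above is an outer-product sum over the 81 feature vectors of the 9 × 9 sub-block; a sketch keeping the 1/(7 × 7 − 1) normalization exactly as written:

```python
import numpy as np

def sub_block_covariance(feats):
    """Covariance matrix of the feature vectors of a 9x9 sub-block,
    normalized by 1/(7*7 - 1) as in the formula above."""
    X = feats.reshape(-1, 7)              # 81 x 7 feature matrix
    mu = X.mean(axis=0)                   # mean feature vector
    D = X - mu
    return (D.T @ D) / (7 * 7 - 1), mu    # 7x7 covariance, mean vector
```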
②-5a. Perform Cholesky decomposition on the covariance matrix C_{L,org}^{p} of the current sub-block, C_{L,org}^{p} = L L^T, and obtain the Sigma feature set of the current sub-block, denoted S_{L,org}^{p}:

$$S_{L,org}^{p}=\left[\sqrt{10}\times L^{(1)},\ldots,\sqrt{10}\times L^{(i')},\ldots,\sqrt{10}\times L^{(7)},\;-\sqrt{10}\times L^{(1)},\ldots,-\sqrt{10}\times L^{(i')},\ldots,-\sqrt{10}\times L^{(7)},\;\mu_{L,org}^{p}\right],$$

wherein L^T is the transposed matrix of L, S_{L,org}^{p} has dimension 7 × 15, the symbol "[ ]" is the vector representation symbol, 1 ≤ i' ≤ 7, L^{(1)} denotes the 1st column vector of L, L^{(i')} denotes the i'-th column vector of L, and L^{(7)} denotes the 7th column vector of L.
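A sketch of the Sigma feature set construction. Note that `np.linalg.cholesky` requires the covariance matrix to be positive definite, so practical implementations often add a small εI first; that is an implementation detail the text does not discuss:

```python
import numpy as np

def sigma_feature_set(C, mu):
    """Cholesky-decompose the 7x7 covariance C = L L^T and form the
    7x15 Sigma feature set [sqrt(10)*L cols, -sqrt(10)*L cols, mu]."""
    L = np.linalg.cholesky(C)                  # lower-triangular, C = L @ L.T
    cols = [np.sqrt(10.0) * L[:, i] for i in range(7)]
    cols += [-np.sqrt(10.0) * L[:, i] for i in range(7)]
    cols.append(mu)                            # mean vector as the 15th column
    return np.stack(cols, axis=1)              # shape (7, 15)
```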
②-6a. Using the same operations as in steps ②-2a to ②-5a, obtain the Sigma feature set of the neighborhood sub-block formed by the 9 × 9 neighborhood window centered on each neighborhood pixel point; like the Sigma feature set S_{L,org}^{p} of the current sub-block, each of these Sigma feature sets has dimension 7 × 15.
② 7a, according to the Sigma feature set S^p_{L,org} of the current sub-block and the Sigma feature sets of the neighborhood sub-blocks formed by the 9×9 neighborhood windows centered on each neighborhood pixel point, acquire the structural information of the current pixel point, recorded as I^str_{L,org}(p):

I^str_{L,org}(p) = [ Σ_{q∈N′(p)} exp(−‖S^p_{L,org} − S^q_{L,org}‖² / (2σ²)) × L_org(q) ] / [ Σ_{q∈N′(p)} exp(−‖S^p_{L,org} − S^q_{L,org}‖² / (2σ²)) ],

wherein N′(p) denotes the set of coordinate positions in {L_org(x,y)} of all neighborhood pixel points within the 21×21 neighborhood window centered on the current pixel point, exp() denotes the exponential function with base e, e = 2.71828183, σ denotes the standard deviation of the Gaussian function (σ = 0.06 in this embodiment), the symbol "‖ ‖" denotes the Euclidean distance, and L_org(q) denotes the pixel value of the pixel point with coordinate position q in {L_org(x,y)}.
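The Gaussian-weighted aggregation of step ② 7a can be sketched in a few lines of numpy. This is a minimal illustration, not the patent's implementation: the function name `structure_value` and the array layout (one stacked array of neighborhood Sigma feature sets) are assumptions.

```python
import numpy as np

def structure_value(center_sigma, neigh_sigmas, neigh_pixvalues, sigma=0.06):
    """Weighted average of neighborhood pixel values, with weights
    exp(-||S_p - S_q||^2 / (2 sigma^2)) from the Sigma feature sets.

    center_sigma:   (7, 15) Sigma feature set of the current sub-block.
    neigh_sigmas:   (N, 7, 15) Sigma feature sets of the N neighborhood sub-blocks.
    neigh_pixvalues:(N,) pixel values at the neighborhood coordinate positions q.
    """
    # Squared Euclidean (Frobenius) distance between Sigma feature sets
    d2 = np.sum((neigh_sigmas - center_sigma) ** 2, axis=(1, 2))
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.sum(w * neigh_pixvalues) / np.sum(w)
```

When every neighborhood Sigma feature set equals the center's, all weights are 1 and the result reduces to the plain mean of the neighborhood pixel values.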
② 8a, according to the structural information I^str_{L,org}(p) of the current pixel point, obtain the texture information of the current pixel point, recorded as I^tex_{L,org}(p): I^tex_{L,org}(p) = L_org(p) − I^str_{L,org}(p), wherein L_org(p) represents the pixel value of the current pixel point.
② 9a, take the next pixel point to be processed in {L_org(x,y)} as the current pixel point, then return to step ② 2a and continue until all pixel points in {L_org(x,y)} have been processed, obtaining the structural information and texture information of each pixel point in {L_org(x,y)}; the structural information of all pixel points in {L_org(x,y)} constitutes the structural image of {L_org(x,y)}, denoted I^str_{L,org}, and the texture information of all pixel points in {L_org(x,y)} constitutes the texture image of {L_org(x,y)}, denoted I^tex_{L,org}.
Steps ② 1a to ② 9a above acquire the structural image I^str_{L,org} and texture image I^tex_{L,org} of {L_org(x,y)}. The same operations yield the structural image I^str_{R,org} and texture image I^tex_{R,org} of {R_org(x,y)}, the structural image I^str_{L,dis} and texture image I^tex_{L,dis} of {L_dis(x,y)}, and the structural image I^str_{R,dis} and texture image I^tex_{R,dis} of {R_dis(x,y)}. Namely, step two, the acquisition process of the structural image I^str_{R,org} and texture image I^tex_{R,org} of {R_org(x,y)} comprises the following steps:
② 1b, define the pixel point to be processed currently in {R_org(x,y)} as the current pixel point.
② 2b, record the coordinate position of the current pixel point in {R_org(x,y)} as p; define each pixel point other than the current pixel point within the 21×21 neighborhood window centered on the current pixel point as a neighborhood pixel point; define the block formed by the 9×9 neighborhood window centered on the current pixel point as the current sub-block; and define the blocks formed by the 9×9 neighborhood windows centered on each neighborhood pixel point within the 21×21 neighborhood window as neighborhood sub-blocks, including the neighborhood sub-block formed by the 9×9 neighborhood window centered on the neighborhood pixel point with coordinate position q in {R_org(x,y)}. Here p ∈ Ω and q ∈ Ω, wherein Ω denotes the set of coordinate positions of all pixel points in {R_org(x,y)}; (x2,y2) denotes the coordinate position of a pixel point within the current sub-block, 1 ≤ x2 ≤ 9, 1 ≤ y2 ≤ 9, with the corresponding pixel value being that of the pixel point at (x2,y2) in the current sub-block; (x3,y3) denotes the coordinate position of a pixel point within a neighborhood sub-block, 1 ≤ x3 ≤ 9, 1 ≤ y3 ≤ 9, with the corresponding pixel value being that of the pixel point at (x3,y3) in that neighborhood sub-block.
In step ② 2b, if the coordinate position of a pixel point in the block formed by the 9×9 neighborhood window centered on the current pixel point exceeds the boundary of {R_org(x,y)}, its pixel value is replaced by the pixel value of the nearest boundary pixel point; if the coordinate position of a neighborhood pixel point in the 21×21 neighborhood window centered on the current pixel point exceeds the boundary of {R_org(x,y)}, its pixel value is replaced by the pixel value of the nearest boundary pixel point; and if the coordinate position of a pixel point in a block formed by the 9×9 neighborhood window centered on any neighborhood pixel point within the 21×21 neighborhood window exceeds the boundary of {R_org(x,y)}, its pixel value is replaced by the pixel value of the nearest boundary pixel point.
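The nearest-boundary replacement described above is ordinary edge replication, which numpy provides directly. A brief sketch (the toy image and the pad width are illustrative; 14 = 10 + 4 covers the 21×21 window half-width of a boundary pixel plus the 9×9 sub-block half-width of its outermost neighborhood pixels):

```python
import numpy as np

img = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 image
pad = 14                                        # 10 (21x21 half-width) + 4 (9x9 half-width)
padded = np.pad(img, pad, mode='edge')          # replicate the nearest boundary pixel
```

With this padding in place, every 21×21 window and every 9×9 sub-block used by the method stays inside the padded array.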
② 3b, obtain the feature vector of each pixel point in the current sub-block, and record the feature vector of the pixel point with coordinate position (x2,y2) in the current sub-block as

X^p_{R,org}(x2,y2) = [ I^p_{R,org}(x2,y2), |∂I^p_{R,org}(x2,y2)/∂x|, |∂I^p_{R,org}(x2,y2)/∂y|, |∂²I^p_{R,org}(x2,y2)/∂x²|, |∂²I^p_{R,org}(x2,y2)/∂y²|, x2, y2 ],

wherein X^p_{R,org}(x2,y2) has dimension 7, the symbol "[ ]" denotes a vector, the symbol "| |" denotes the absolute value, I^p_{R,org}(x2,y2) represents the density value of the pixel point with coordinate position (x2,y2) in the current sub-block, ∂I^p_{R,org}(x2,y2)/∂x and ∂I^p_{R,org}(x2,y2)/∂y are its first partial derivatives in the horizontal and vertical directions, and ∂²I^p_{R,org}(x2,y2)/∂x² and ∂²I^p_{R,org}(x2,y2)/∂y² are its second partial derivatives in the horizontal and vertical directions.
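The 7-dimensional feature vector of step ② 3b can be sketched with numpy's finite-difference gradients as a stand-in for the partial derivatives; the helper name `feature_vectors` and the 1-based coordinate convention are assumptions of this sketch, not from the patent.

```python
import numpy as np

def feature_vectors(block):
    """Per-pixel features [value, |dI/dx|, |dI/dy|, |d2I/dx2|, |d2I/dy2|, x2, y2]."""
    gy, gx = np.gradient(block)        # first partials: rows vary with y, columns with x
    gyy = np.gradient(gy, axis=0)      # second partial in the vertical direction
    gxx = np.gradient(gx, axis=1)      # second partial in the horizontal direction
    ys, xs = np.mgrid[1:block.shape[0] + 1, 1:block.shape[1] + 1]
    feats = np.stack([block, np.abs(gx), np.abs(gy),
                      np.abs(gxx), np.abs(gyy), xs, ys], axis=-1)
    return feats.reshape(-1, 7)        # 81 x 7 for a 9x9 sub-block
```

For a constant sub-block all derivative features vanish, so only the value and the coordinates distinguish the pixels.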
② 4b, according to the feature vectors of all pixel points in the current sub-block, calculate the covariance matrix of the current sub-block, recorded as

C^p_{R,org} = (1 / (7×7−1)) Σ_{x2=1}^{9} Σ_{y2=1}^{9} ( X^p_{R,org}(x2,y2) − μ^p_{R,org} ) ( X^p_{R,org}(x2,y2) − μ^p_{R,org} )^T,

wherein C^p_{R,org} has dimension 7×7, μ^p_{R,org} represents the mean vector of the feature vectors of all pixel points in the current sub-block, and (X^p_{R,org}(x2,y2) − μ^p_{R,org})^T is the transposed vector of (X^p_{R,org}(x2,y2) − μ^p_{R,org}).
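Written directly from the formula in step ② 4b, the covariance computation is a one-liner over the stacked feature vectors. Note that the text normalizes by 7×7 − 1 = 48 rather than by the usual sample count 81 − 1; this sketch follows the text (the function name is illustrative).

```python
import numpy as np

def subblock_covariance(feats):
    """Covariance matrix of the 81 7-D feature vectors of one 9x9 sub-block.

    feats: array of shape (81, 7). Normalization by 7*7 - 1 follows the text.
    """
    mu = feats.mean(axis=0)        # mean feature vector of the sub-block
    d = feats - mu
    return d.T @ d / (7 * 7 - 1)   # 7 x 7 symmetric matrix
```

The result is symmetric and positive semi-definite by construction, which is what the Cholesky step that follows relies on.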
② 5b, perform Cholesky decomposition on the covariance matrix C^p_{R,org} of the current sub-block, C^p_{R,org} = L·L^T, and obtain the Sigma feature set of the current sub-block, recorded as

S^p_{R,org} = [ √10×L^(1), …, √10×L^(i′), …, √10×L^(7), −√10×L^(1), …, −√10×L^(i′), …, −√10×L^(7), μ^p_{R,org} ],

wherein L^T is the transposed matrix of L, S^p_{R,org} has dimension 7×15, the symbol "[ ]" denotes a vector, 1 ≤ i′ ≤ 7, and L^(1), L^(i′) and L^(7) denote the 1st, i′-th and 7th column vectors of L, respectively.
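Steps ② 5a/② 5b amount to Cholesky-factoring the 7×7 covariance and concatenating the scaled factor columns with the mean vector. A minimal sketch; the small diagonal jitter `eps` is an added safeguard for rank-deficient sub-blocks (e.g. flat regions) and is not part of the patent.

```python
import numpy as np

def sigma_feature_set(C, mu, eps=1e-8):
    """7x15 Sigma feature set [sqrt(10)*L, -sqrt(10)*L, mu] with C = L @ L.T."""
    L = np.linalg.cholesky(C + eps * np.eye(7))   # lower-triangular Cholesky factor
    s = np.sqrt(10.0)
    return np.hstack([s * L, -s * L, mu.reshape(7, 1)])
```

The first seven columns are the positively scaled factor columns, the next seven their negatives, and the last column is the mean feature vector, giving the stated 7×15 dimension.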
② 6b, using the same operations as steps ② 3b to ② 5b, obtain the Sigma feature set of the neighborhood sub-block formed by the 9×9 neighborhood window centered on each neighborhood pixel point; the Sigma feature set of the neighborhood sub-block centered on the neighborhood pixel point with coordinate position q is recorded as S^q_{R,org}, which also has dimension 7×15.
② 7b, according to the Sigma feature set S^p_{R,org} of the current sub-block and the Sigma feature sets of the neighborhood sub-blocks formed by the 9×9 neighborhood windows centered on each neighborhood pixel point, acquire the structural information of the current pixel point, recorded as I^str_{R,org}(p):

I^str_{R,org}(p) = [ Σ_{q∈N′(p)} exp(−‖S^p_{R,org} − S^q_{R,org}‖² / (2σ²)) × R_org(q) ] / [ Σ_{q∈N′(p)} exp(−‖S^p_{R,org} − S^q_{R,org}‖² / (2σ²)) ],

wherein N′(p) denotes the set of coordinate positions in {R_org(x,y)} of all neighborhood pixel points within the 21×21 neighborhood window centered on the current pixel point, exp() denotes the exponential function with base e, e = 2.71828183, σ denotes the standard deviation of the Gaussian function (σ = 0.06 in this embodiment), the symbol "‖ ‖" denotes the Euclidean distance, and R_org(q) denotes the pixel value of the pixel point with coordinate position q in {R_org(x,y)}.
② 8b, according to the structural information I^str_{R,org}(p) of the current pixel point, obtain the texture information of the current pixel point, recorded as I^tex_{R,org}(p): I^tex_{R,org}(p) = R_org(p) − I^str_{R,org}(p), wherein R_org(p) represents the pixel value of the current pixel point.
② 9b, take the next pixel point to be processed in {R_org(x,y)} as the current pixel point, then return to step ② 2b and continue until all pixel points in {R_org(x,y)} have been processed, obtaining the structural information and texture information of each pixel point in {R_org(x,y)}; the structural information of all pixel points in {R_org(x,y)} constitutes the structural image of {R_org(x,y)}, denoted I^str_{R,org}, and the texture information of all pixel points in {R_org(x,y)} constitutes the texture image of {R_org(x,y)}, denoted I^tex_{R,org}.
Step two, the acquisition process of the structural image I^str_{L,dis} and texture image I^tex_{L,dis} of {L_dis(x,y)} comprises the following steps:
② 1c, define the pixel point to be processed currently in {L_dis(x,y)} as the current pixel point.
② 2c, record the coordinate position of the current pixel point in {L_dis(x,y)} as p; define each pixel point other than the current pixel point within the 21×21 neighborhood window centered on the current pixel point as a neighborhood pixel point; define the block formed by the 9×9 neighborhood window centered on the current pixel point as the current sub-block; and define the blocks formed by the 9×9 neighborhood windows centered on each neighborhood pixel point within the 21×21 neighborhood window as neighborhood sub-blocks, including the neighborhood sub-block formed by the 9×9 neighborhood window centered on the neighborhood pixel point with coordinate position q in {L_dis(x,y)}. Here p ∈ Ω and q ∈ Ω, wherein Ω denotes the set of coordinate positions of all pixel points in {L_dis(x,y)}; (x2,y2) denotes the coordinate position of a pixel point within the current sub-block, 1 ≤ x2 ≤ 9, 1 ≤ y2 ≤ 9, with the corresponding pixel value being that of the pixel point at (x2,y2) in the current sub-block; (x3,y3) denotes the coordinate position of a pixel point within a neighborhood sub-block, 1 ≤ x3 ≤ 9, 1 ≤ y3 ≤ 9, with the corresponding pixel value being that of the pixel point at (x3,y3) in that neighborhood sub-block.
In step ② 2c, if the coordinate position of a pixel point in the block formed by the 9×9 neighborhood window centered on the current pixel point exceeds the boundary of {L_dis(x,y)}, its pixel value is replaced by the pixel value of the nearest boundary pixel point; if the coordinate position of a neighborhood pixel point in the 21×21 neighborhood window centered on the current pixel point exceeds the boundary of {L_dis(x,y)}, its pixel value is replaced by the pixel value of the nearest boundary pixel point; and if the coordinate position of a pixel point in a block formed by the 9×9 neighborhood window centered on any neighborhood pixel point within the 21×21 neighborhood window exceeds the boundary of {L_dis(x,y)}, its pixel value is replaced by the pixel value of the nearest boundary pixel point.
② 3c, obtain the feature vector of each pixel point in the current sub-block, and record the feature vector of the pixel point with coordinate position (x2,y2) in the current sub-block as

X^p_{L,dis}(x2,y2) = [ I^p_{L,dis}(x2,y2), |∂I^p_{L,dis}(x2,y2)/∂x|, |∂I^p_{L,dis}(x2,y2)/∂y|, |∂²I^p_{L,dis}(x2,y2)/∂x²|, |∂²I^p_{L,dis}(x2,y2)/∂y²|, x2, y2 ],

wherein X^p_{L,dis}(x2,y2) has dimension 7, the symbol "[ ]" denotes a vector, the symbol "| |" denotes the absolute value, I^p_{L,dis}(x2,y2) represents the density value of the pixel point with coordinate position (x2,y2) in the current sub-block, ∂I^p_{L,dis}(x2,y2)/∂x and ∂I^p_{L,dis}(x2,y2)/∂y are its first partial derivatives in the horizontal and vertical directions, and ∂²I^p_{L,dis}(x2,y2)/∂x² and ∂²I^p_{L,dis}(x2,y2)/∂y² are its second partial derivatives in the horizontal and vertical directions.
② 4c, according to the feature vectors of all pixel points in the current sub-block, calculate the covariance matrix of the current sub-block, recorded as

C^p_{L,dis} = (1 / (7×7−1)) Σ_{x2=1}^{9} Σ_{y2=1}^{9} ( X^p_{L,dis}(x2,y2) − μ^p_{L,dis} ) ( X^p_{L,dis}(x2,y2) − μ^p_{L,dis} )^T,

wherein C^p_{L,dis} has dimension 7×7, μ^p_{L,dis} represents the mean vector of the feature vectors of all pixel points in the current sub-block, and (X^p_{L,dis}(x2,y2) − μ^p_{L,dis})^T is the transposed vector of (X^p_{L,dis}(x2,y2) − μ^p_{L,dis}).
② 5c, perform Cholesky decomposition on the covariance matrix C^p_{L,dis} of the current sub-block, C^p_{L,dis} = L·L^T, and obtain the Sigma feature set of the current sub-block, recorded as

S^p_{L,dis} = [ √10×L^(1), …, √10×L^(i′), …, √10×L^(7), −√10×L^(1), …, −√10×L^(i′), …, −√10×L^(7), μ^p_{L,dis} ],

wherein L^T is the transposed matrix of L, S^p_{L,dis} has dimension 7×15, the symbol "[ ]" denotes a vector, 1 ≤ i′ ≤ 7, and L^(1), L^(i′) and L^(7) denote the 1st, i′-th and 7th column vectors of L, respectively.
② 6c, using the same operations as steps ② 3c to ② 5c, obtain the Sigma feature set of the neighborhood sub-block formed by the 9×9 neighborhood window centered on each neighborhood pixel point; the Sigma feature set of the neighborhood sub-block centered on the neighborhood pixel point with coordinate position q is recorded as S^q_{L,dis}, which also has dimension 7×15.
② 7c, according to the Sigma feature set S^p_{L,dis} of the current sub-block and the Sigma feature sets of the neighborhood sub-blocks formed by the 9×9 neighborhood windows centered on each neighborhood pixel point, acquire the structural information of the current pixel point, recorded as I^str_{L,dis}(p):

I^str_{L,dis}(p) = [ Σ_{q∈N′(p)} exp(−‖S^p_{L,dis} − S^q_{L,dis}‖² / (2σ²)) × L_dis(q) ] / [ Σ_{q∈N′(p)} exp(−‖S^p_{L,dis} − S^q_{L,dis}‖² / (2σ²)) ],

wherein N′(p) denotes the set of coordinate positions in {L_dis(x,y)} of all neighborhood pixel points within the 21×21 neighborhood window centered on the current pixel point, exp() denotes the exponential function with base e, e = 2.71828183, σ denotes the standard deviation of the Gaussian function (σ = 0.06 in this embodiment), the symbol "‖ ‖" denotes the Euclidean distance, and L_dis(q) denotes the pixel value of the pixel point with coordinate position q in {L_dis(x,y)}.
② 8c, according to the structural information I^str_{L,dis}(p) of the current pixel point, obtain the texture information of the current pixel point, recorded as I^tex_{L,dis}(p): I^tex_{L,dis}(p) = L_dis(p) − I^str_{L,dis}(p), wherein L_dis(p) represents the pixel value of the current pixel point.
② 9c, take the next pixel point to be processed in {L_dis(x,y)} as the current pixel point, then return to step ② 2c and continue until all pixel points in {L_dis(x,y)} have been processed, obtaining the structural information and texture information of each pixel point in {L_dis(x,y)}; the structural information of all pixel points in {L_dis(x,y)} constitutes the structural image of {L_dis(x,y)}, denoted I^str_{L,dis}, and the texture information of all pixel points in {L_dis(x,y)} constitutes the texture image of {L_dis(x,y)}, denoted I^tex_{L,dis}.
Step two, the acquisition process of the structural image I^str_{R,dis} and texture image I^tex_{R,dis} of {R_dis(x,y)} comprises the following steps:
② 1d, define the pixel point to be processed currently in {R_dis(x,y)} as the current pixel point.
② 2d, record the coordinate position of the current pixel point in {R_dis(x,y)} as p; define each pixel point other than the current pixel point within the 21×21 neighborhood window centered on the current pixel point as a neighborhood pixel point; define the block formed by the 9×9 neighborhood window centered on the current pixel point as the current sub-block; and define the blocks formed by the 9×9 neighborhood windows centered on each neighborhood pixel point within the 21×21 neighborhood window as neighborhood sub-blocks, including the neighborhood sub-block formed by the 9×9 neighborhood window centered on the neighborhood pixel point with coordinate position q in {R_dis(x,y)}. Here p ∈ Ω and q ∈ Ω, wherein Ω denotes the set of coordinate positions of all pixel points in {R_dis(x,y)}; (x2,y2) denotes the coordinate position of a pixel point within the current sub-block, 1 ≤ x2 ≤ 9, 1 ≤ y2 ≤ 9, with the corresponding pixel value being that of the pixel point at (x2,y2) in the current sub-block; (x3,y3) denotes the coordinate position of a pixel point within a neighborhood sub-block, 1 ≤ x3 ≤ 9, 1 ≤ y3 ≤ 9, with the corresponding pixel value being that of the pixel point at (x3,y3) in that neighborhood sub-block.
In step ② 2d, if the coordinate position of a pixel point in the block formed by the 9×9 neighborhood window centered on the current pixel point exceeds the boundary of {R_dis(x,y)}, its pixel value is replaced by the pixel value of the nearest boundary pixel point; if the coordinate position of a neighborhood pixel point in the 21×21 neighborhood window centered on the current pixel point exceeds the boundary of {R_dis(x,y)}, its pixel value is replaced by the pixel value of the nearest boundary pixel point; and if the coordinate position of a pixel point in a block formed by the 9×9 neighborhood window centered on any neighborhood pixel point within the 21×21 neighborhood window exceeds the boundary of {R_dis(x,y)}, its pixel value is replaced by the pixel value of the nearest boundary pixel point.
②3d, obtain the feature vector of each pixel point in the current sub-block; denote the feature vector of the pixel point with coordinate position $(x_2, y_2)$ in the current sub-block as $X_{R,dis}^{p}(x_2,y_2)$:

$$X_{R,dis}^{p}(x_2,y_2)=\left[I_{R,dis}^{p}(x_2,y_2),\ \left|\frac{\partial I_{R,dis}^{p}(x_2,y_2)}{\partial x}\right|,\ \left|\frac{\partial I_{R,dis}^{p}(x_2,y_2)}{\partial y}\right|,\ \left|\frac{\partial^{2} I_{R,dis}^{p}(x_2,y_2)}{\partial x^{2}}\right|,\ \left|\frac{\partial^{2} I_{R,dis}^{p}(x_2,y_2)}{\partial y^{2}}\right|,\ x_2,\ y_2\right],$$

where $X_{R,dis}^{p}(x_2,y_2)$ has a dimension of 7, the symbol "[ ]" is the vector-representation symbol, the symbol "| |" is the absolute-value symbol, $I_{R,dis}^{p}(x_2,y_2)$ represents the pixel value of the pixel point with coordinate position $(x_2,y_2)$ in the current sub-block, $\partial I_{R,dis}^{p}(x_2,y_2)/\partial x$ is its first partial derivative in the horizontal direction, $\partial I_{R,dis}^{p}(x_2,y_2)/\partial y$ its first partial derivative in the vertical direction, $\partial^{2} I_{R,dis}^{p}(x_2,y_2)/\partial x^{2}$ its second partial derivative in the horizontal direction, and $\partial^{2} I_{R,dis}^{p}(x_2,y_2)/\partial y^{2}$ its second partial derivative in the vertical direction.
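The 7-D feature vector above can be sketched concretely in NumPy. The patent does not specify the derivative operator, so central differences via `np.gradient` stand in for it here:

```python
import numpy as np

def feature_vectors(block):
    """7-D feature vector per pixel of a sub-block: intensity, absolute
    first and second partial derivatives in x and y, and the pixel's
    coordinates, in the order given by the formula above."""
    h, w = block.shape
    dy, dx = np.gradient(block)       # first partials (axis 0 = vertical)
    dyy = np.gradient(dy, axis=0)     # second partial in y
    dxx = np.gradient(dx, axis=1)     # second partial in x
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    feats = np.stack([block, np.abs(dx), np.abs(dy),
                      np.abs(dxx), np.abs(dyy), xs, ys], axis=-1)
    return feats.reshape(-1, 7)       # one row per pixel

X = feature_vectors(np.random.rand(9, 9))
```

For a 9×9 sub-block this yields 81 feature rows, one per pixel, which feed the covariance computation of step ②4d.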
②4d, according to the feature vector of each pixel point in the current sub-block, calculate the covariance matrix of the current sub-block, denoted $C_{R,dis}^{p}$:

$$C_{R,dis}^{p}=\frac{1}{7\times 7-1}\sum_{x_2=1}^{9}\sum_{y_2=1}^{9}\left(X_{R,dis}^{p}(x_2,y_2)-\mu_{R,dis}^{p}\right)\left(X_{R,dis}^{p}(x_2,y_2)-\mu_{R,dis}^{p}\right)^{T},$$

where $C_{R,dis}^{p}$ has a dimension of 7×7, $\mu_{R,dis}^{p}$ represents the mean vector of the feature vectors of all pixel points in the current sub-block, and $\left(X_{R,dis}^{p}(x_2,y_2)-\mu_{R,dis}^{p}\right)^{T}$ is the transposed vector of $X_{R,dis}^{p}(x_2,y_2)-\mu_{R,dis}^{p}$.
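A sketch of the covariance computation follows. Note the normalisation: the patent's formula writes the divisor as 7×7−1, whereas a conventional sample covariance over the 9×9 = 81 feature vectors would divide by 9×9−1; the divisor is therefore left as a parameter rather than fixed:

```python
import numpy as np

def sigma_covariance(feats, divisor=None):
    """Covariance matrix of the per-pixel feature vectors (rows of
    `feats`) of one sub-block, as a sum of outer products of the
    mean-removed features. Defaults to the conventional N-1 divisor."""
    mu = feats.mean(axis=0)
    d = feats - mu
    n = divisor if divisor is not None else len(feats) - 1
    return d.T @ d / n

rng = np.random.default_rng(0)
C = sigma_covariance(rng.standard_normal((81, 7)))
```

The result is a symmetric positive semi-definite 7×7 matrix, which is what the Cholesky decomposition in step ②5d requires.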
②5d, perform Cholesky decomposition on the covariance matrix $C_{R,dis}^{p}$ of the current sub-block, $C_{R,dis}^{p}=LL^{T}$, and obtain the Sigma feature set of the current sub-block, denoted $S_{R,dis}^{p}$:

$$S_{R,dis}^{p}=\left[\sqrt{10}\times L^{(1)},\ldots,\sqrt{10}\times L^{(i')},\ldots,\sqrt{10}\times L^{(7)},\ -\sqrt{10}\times L^{(1)},\ldots,-\sqrt{10}\times L^{(i')},\ldots,-\sqrt{10}\times L^{(7)},\ \mu_{R,dis}^{p}\right],$$

where $L^{T}$ is the transposed matrix of $L$, $S_{R,dis}^{p}$ has a dimension of 7×15, the symbol "[ ]" is the vector-representation symbol, $1\le i'\le 7$, $L^{(1)}$ represents the 1st column vector of $L$, $L^{(i')}$ the $i'$-th column vector of $L$, and $L^{(7)}$ the 7th column vector of $L$.
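The Sigma feature set construction can be sketched directly: `np.linalg.cholesky` returns the lower-triangular factor L with C = L Lᵀ, and the set stacks the scaled columns of L, their negatives, and the mean vector into a 7×15 matrix:

```python
import numpy as np

def sigma_feature_set(C, mu, alpha=np.sqrt(10)):
    """Sigma feature set of a sub-block: the 7 Cholesky-factor columns
    of the covariance scaled by +sqrt(10), the same columns scaled by
    -sqrt(10), and the mean feature vector, as 15 columns of 7."""
    L = np.linalg.cholesky(C)                    # C = L @ L.T
    cols = [alpha * L[:, i] for i in range(7)]
    cols += [-alpha * L[:, i] for i in range(7)]
    cols.append(mu)
    return np.stack(cols, axis=1)                # shape (7, 15)

C = np.eye(7) * 2.0                              # toy SPD covariance
S = sigma_feature_set(C, np.zeros(7))
```

For the diagonal toy covariance, L is √2·I, so the first seven columns are √20 on the diagonal and the next seven are their negatives.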
②6d, using the same operations as steps ②3d to ②5d, obtain the Sigma feature set of the neighbourhood sub-block formed by the 9×9 neighbourhood window centred on each neighbourhood pixel point; for the neighbourhood pixel point whose coordinate position is q, denote the Sigma feature set of its neighbourhood sub-block as $S_{R,dis}^{q}$, whose dimension is 7×15.
②7d, according to the Sigma feature set $S_{R,dis}^{p}$ of the current sub-block and the Sigma feature sets of the neighbourhood sub-blocks formed by the 9×9 neighbourhood windows centred on each neighbourhood pixel point, acquire the structure information of the current pixel point, denoted $I_{R,dis}^{str}(p)$:

$$I_{R,dis}^{str}(p)=\frac{\displaystyle\sum_{q\in N'(p)}\exp\!\left(-\frac{\left\|S_{R,dis}^{p}-S_{R,dis}^{q}\right\|^{2}}{2\sigma^{2}}\right)\times R_{dis}(q)}{\displaystyle\sum_{q\in N'(p)}\exp\!\left(-\frac{\left\|S_{R,dis}^{p}-S_{R,dis}^{q}\right\|^{2}}{2\sigma^{2}}\right)},$$

where $N'(p)$ represents the set of coordinate positions in {R_dis(x,y)} of all neighbourhood pixel points in the 21×21 neighbourhood window centred on the current pixel point, exp() represents the exponential function with base e, e = 2.71828183, σ represents the standard deviation of the Gaussian function (in this embodiment σ = 0.06), the symbol "‖ ‖" is the Euclidean-distance computation symbol, and $R_{dis}(q)$ represents the pixel value of the pixel point with coordinate position q in {R_dis(x,y)}.
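The formula above is a weighted average of the neighbourhood pixel values, with weights decaying in the distance between Sigma feature sets. A minimal per-pixel sketch (function and argument names are illustrative, not from the patent):

```python
import numpy as np

def structure_value(S_p, S_neighbors, pixel_values, sigma=0.06):
    """Structure information of one pixel: Gaussian-weighted average of
    the neighbourhood pixel values, with weights from the Euclidean
    (Frobenius) distance between 7x15 Sigma feature sets."""
    d2 = np.array([np.sum((S_p - S_q) ** 2) for S_q in S_neighbors])
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(np.sum(w * pixel_values) / np.sum(w))

S_p = np.zeros((7, 15))
neighbors = [np.zeros((7, 15)), np.zeros((7, 15))]
vals = np.array([10.0, 20.0])
v = structure_value(S_p, neighbors, vals)   # identical features -> plain mean
```

When all feature sets coincide, every weight is 1 and the structure value reduces to the ordinary mean of the neighbourhood pixel values.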
②8d, according to the structure information $I_{R,dis}^{str}(p)$ of the current pixel point, obtain the texture information of the current pixel point, denoted $I_{R,dis}^{tex}(p)$: $I_{R,dis}^{tex}(p)=R_{dis}(p)-I_{R,dis}^{str}(p)$, where $R_{dis}(p)$ represents the pixel value of the current pixel point.
②9d, take the next pixel point to be processed in {R_dis(x,y)} as the current pixel point, and then return to step ②2d to continue until all pixel points in {R_dis(x,y)} have been processed, obtaining the structure information and the texture information of each pixel point in {R_dis(x,y)}; the structure information of all pixel points in {R_dis(x,y)} constitutes the structure image of {R_dis(x,y)}, and the texture information of all pixel points in {R_dis(x,y)} constitutes the texture image of {R_dis(x,y)}.
③ Compared with the original image, the structure image separates out detail information such as texture, so the structure information is more stable. The method of the invention therefore calculates the gradient similarity between each pixel point in the structure image of the left viewpoint image of the original undistorted stereo image and the corresponding pixel point in the structure image of the left viewpoint image of the distorted stereo image; the gradient similarity between the pixel points with coordinate position (x, y) in the two structure images is denoted $Q_{L}^{str}(x,y)$:

$$Q_{L}^{str}(x,y)=\frac{2\times m_{L,org}^{str}(x,y)\times m_{L,dis}^{str}(x,y)+C_{1}}{\left(m_{L,org}^{str}(x,y)\right)^{2}+\left(m_{L,dis}^{str}(x,y)\right)^{2}+C_{1}},$$

where $m_{L,org}^{str}(x,y)$ represents the gradient magnitude, formed from the horizontal-direction gradient and the vertical-direction gradient, of the pixel point with coordinate position (x, y) in the structure image of the left viewpoint image of the original undistorted stereo image, $m_{L,dis}^{str}(x,y)$ represents the corresponding gradient magnitude in the structure image of the left viewpoint image of the distorted stereo image, and $C_1$ is a control parameter; in this embodiment $C_1$ = 0.0026. Then, according to the gradient similarity between each pair of corresponding pixel points, calculate the image-quality objective evaluation prediction value of the structure image of the left viewpoint image of the distorted stereo image, denoted $Q_{L}^{str}$:

$$Q_{L}^{str}=\frac{\sum_{x=1}^{W}\sum_{y=1}^{H}Q_{L}^{str}(x,y)}{W\times H},$$

where W and H represent the width and the height of the images.
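The gradient-similarity map and its average pooling can be sketched as follows, again assuming central differences (`np.gradient`) for the unspecified gradient operator and gradient magnitude for m:

```python
import numpy as np

def gradient_similarity(org, dis, C1=0.0026):
    """Gradient-similarity score between two structure images: the
    per-pixel similarity of gradient magnitudes, averaged over all
    pixels, following the two formulas above."""
    gy_o, gx_o = np.gradient(org)
    gy_d, gx_d = np.gradient(dis)
    m_o = np.hypot(gx_o, gy_o)          # gradient magnitude, original
    m_d = np.hypot(gx_d, gy_d)          # gradient magnitude, distorted
    q = (2 * m_o * m_d + C1) / (m_o ** 2 + m_d ** 2 + C1)
    return float(q.mean())

img = np.random.rand(32, 32)
q_same = gradient_similarity(img, img)   # identical images score 1
```

The constant $C_1$ keeps the ratio stable where both gradient magnitudes are near zero, exactly as in SSIM-style similarity terms.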
Similarly, calculate the gradient similarity between each pixel point in the structure image of the right viewpoint image of the original undistorted stereo image and the corresponding pixel point in the structure image of the right viewpoint image of the distorted stereo image; the gradient similarity between the pixel points with coordinate position (x, y) in the two structure images is denoted $Q_{R}^{str}(x,y)$:

$$Q_{R}^{str}(x,y)=\frac{2\times m_{R,org}^{str}(x,y)\times m_{R,dis}^{str}(x,y)+C_{1}}{\left(m_{R,org}^{str}(x,y)\right)^{2}+\left(m_{R,dis}^{str}(x,y)\right)^{2}+C_{1}},$$

where $m_{R,org}^{str}(x,y)$ represents the gradient magnitude, formed from the horizontal-direction gradient and the vertical-direction gradient, of the pixel point with coordinate position (x, y) in the structure image of the right viewpoint image of the original undistorted stereo image, $m_{R,dis}^{str}(x,y)$ represents the corresponding gradient magnitude in the structure image of the right viewpoint image of the distorted stereo image, and $C_1$ is a control parameter; in this embodiment $C_1$ = 0.0026. Then, according to the gradient similarity between each pair of corresponding pixel points, calculate the image-quality objective evaluation prediction value of the structure image of the right viewpoint image of the distorted stereo image, denoted $Q_{R}^{str}$:

$$Q_{R}^{str}=\frac{\sum_{x=1}^{W}\sum_{y=1}^{H}Q_{R}^{str}(x,y)}{W\times H}.$$
④ Since mean and standard-deviation information can well evaluate changes in the detail information of an image, the method of the invention obtains the structural similarity between each 8×8 sub-block of the texture image of the left viewpoint image of the original undistorted stereo image and the corresponding 8×8 sub-block of the texture image of the left viewpoint image of the distorted stereo image, and from these similarities calculates the image-quality objective evaluation prediction value of the texture image of the left viewpoint image of the distorted stereo image, denoted $Q_{L}^{tex}$.
In this embodiment, the acquisition process of the image-quality objective evaluation prediction value $Q_{L}^{tex}$ in step ④ is as follows:
④-1a, divide the texture image of the left viewpoint image of the original undistorted stereo image and the texture image of the left viewpoint image of the distorted stereo image respectively into non-overlapping sub-blocks of size 8×8; define the current k-th sub-block to be processed in the former as the current first sub-block and the current k-th sub-block to be processed in the latter as the current second sub-block, where the initial value of k is 1.
④-2a, denote the current first sub-block as $f_{L_{org},k}(x_4,y_4)$ and the current second sub-block as $f_{L_{dis},k}(x_4,y_4)$, where $(x_4,y_4)$ represents the coordinate position of a pixel point in the current first sub-block and the current second sub-block, $1\le x_4\le 8$, $1\le y_4\le 8$, $f_{L_{org},k}(x_4,y_4)$ represents the pixel value of the pixel point with coordinate position $(x_4,y_4)$ in the current first sub-block, and $f_{L_{dis},k}(x_4,y_4)$ represents the pixel value of the pixel point with coordinate position $(x_4,y_4)$ in the current second sub-block.
④-3a, calculate the mean and the standard deviation of the current first sub-block, denoted $\mu_{L_{org},k}$ and $\sigma_{L_{org},k}$ respectively:

$$\mu_{L_{org},k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{L_{org},k}(x_4,y_4)}{64},\qquad \sigma_{L_{org},k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{L_{org},k}(x_4,y_4)-\mu_{L_{org},k}\right)^{2}}{64}}.$$
Likewise, calculate the mean and the standard deviation of the current second sub-block, denoted $\mu_{L_{dis},k}$ and $\sigma_{L_{dis},k}$ respectively:

$$\mu_{L_{dis},k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{L_{dis},k}(x_4,y_4)}{64},\qquad \sigma_{L_{dis},k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{L_{dis},k}(x_4,y_4)-\mu_{L_{dis},k}\right)^{2}}{64}}.$$
④-4a, calculate the structural similarity between the current first sub-block and the current second sub-block, denoted $Q_{L,k}^{tex}$:

$$Q_{L,k}^{tex}=\frac{4\times\left(\sigma_{L_{org},k}\times\sigma_{L_{dis},k}\right)\times\left(\mu_{L_{org},k}\times\mu_{L_{dis},k}\right)+C_{2}}{\left(\left(\sigma_{L_{org},k}\right)^{2}+\left(\sigma_{L_{dis},k}\right)^{2}\right)+\left(\left(\mu_{L_{org},k}\right)^{2}+\left(\mu_{L_{dis},k}\right)^{2}\right)+C_{2}},$$

where $C_2$ is a control parameter; in this embodiment $C_2$ = 0.85.
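The block-similarity formula above can be sketched as follows. Note two details it pins down: the standard deviation divides by 64 (population form), and the denominator adds the variance and mean terms rather than multiplying them as SSIM does:

```python
import numpy as np

def block_similarity(a, b, C2=0.85):
    """Mean/standard-deviation structural similarity between two 8x8
    texture blocks, following the formula above (denominator is a sum
    of the variance and mean terms)."""
    mu_a, mu_b = a.mean(), b.mean()
    sd_a, sd_b = a.std(), b.std()        # population std: divide by 64
    num = 4 * (sd_a * sd_b) * (mu_a * mu_b) + C2
    den = (sd_a ** 2 + sd_b ** 2) + (mu_a ** 2 + mu_b ** 2) + C2
    return float(num / den)

blk = np.random.rand(8, 8)
q = block_similarity(blk, blk)
```

Unlike SSIM, the score of a block against itself is not exactly 1; it approaches 1 only as the mean and variance terms balance, which is consistent with the formula as written.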
④-5a, let k = k + 1, take the next sub-block to be processed in the texture image of the left viewpoint image of the original undistorted stereo image as the current first sub-block and the next sub-block to be processed in the texture image of the left viewpoint image of the distorted stereo image as the current second sub-block, and then return to step ④-2a to continue until all sub-blocks in both texture images have been processed, obtaining the structural similarity between each sub-block of the former and the corresponding sub-block of the latter; here "=" in k = k + 1 is the assignment symbol.
④-6a, according to the structural similarity between each sub-block of the texture image of the left viewpoint image of the original undistorted stereo image and the corresponding sub-block of the texture image of the left viewpoint image of the distorted stereo image, calculate the image-quality objective evaluation prediction value of the texture image of the left viewpoint image of the distorted stereo image, denoted $Q_{L}^{tex}$.
Similarly, by obtaining the structural similarity between each 8×8 sub-block of the texture image of the right viewpoint image of the original undistorted stereo image and the corresponding 8×8 sub-block of the texture image of the right viewpoint image of the distorted stereo image, calculate the image-quality objective evaluation prediction value of the texture image of the right viewpoint image of the distorted stereo image, denoted $Q_{R}^{tex}$.
In this embodiment, the acquisition process of the image-quality objective evaluation prediction value $Q_{R}^{tex}$ in step ④ is as follows:
④-1b, divide the texture image of the right viewpoint image of the original undistorted stereo image and the texture image of the right viewpoint image of the distorted stereo image respectively into non-overlapping sub-blocks of size 8×8; define the current k-th sub-block to be processed in the former as the current first sub-block and the current k-th sub-block to be processed in the latter as the current second sub-block, where the initial value of k is 1.
④-2b, denote the current first sub-block as $f_{R_{org},k}(x_4,y_4)$ and the current second sub-block as $f_{R_{dis},k}(x_4,y_4)$, where $(x_4,y_4)$ represents the coordinate position of a pixel point in the current first sub-block and the current second sub-block, $1\le x_4\le 8$, $1\le y_4\le 8$, $f_{R_{org},k}(x_4,y_4)$ represents the pixel value of the pixel point with coordinate position $(x_4,y_4)$ in the current first sub-block, and $f_{R_{dis},k}(x_4,y_4)$ represents the pixel value of the pixel point with coordinate position $(x_4,y_4)$ in the current second sub-block.
④-3b, calculate the mean and the standard deviation of the current first sub-block, denoted $\mu_{R_{org},k}$ and $\sigma_{R_{org},k}$ respectively:

$$\mu_{R_{org},k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{R_{org},k}(x_4,y_4)}{64},\qquad \sigma_{R_{org},k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{R_{org},k}(x_4,y_4)-\mu_{R_{org},k}\right)^{2}}{64}};$$
likewise, calculate the mean and the standard deviation of the current second sub-block, denoted $\mu_{R_{dis},k}$ and $\sigma_{R_{dis},k}$ respectively:

$$\mu_{R_{dis},k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{R_{dis},k}(x_4,y_4)}{64},\qquad \sigma_{R_{dis},k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{R_{dis},k}(x_4,y_4)-\mu_{R_{dis},k}\right)^{2}}{64}}.$$
④-4b, calculate the structural similarity between the current first sub-block and the current second sub-block, denoted $Q_{R,k}^{tex}$:

$$Q_{R,k}^{tex}=\frac{4\times\left(\sigma_{R_{org},k}\times\sigma_{R_{dis},k}\right)\times\left(\mu_{R_{org},k}\times\mu_{R_{dis},k}\right)+C_{2}}{\left(\left(\sigma_{R_{org},k}\right)^{2}+\left(\sigma_{R_{dis},k}\right)^{2}\right)+\left(\left(\mu_{R_{org},k}\right)^{2}+\left(\mu_{R_{dis},k}\right)^{2}\right)+C_{2}},$$

where $C_2$ is a control parameter; in this embodiment $C_2$ = 0.85.
④-5b, let k = k + 1, take the next sub-block to be processed in the texture image of the right viewpoint image of the original undistorted stereo image as the current first sub-block and the next sub-block to be processed in the texture image of the right viewpoint image of the distorted stereo image as the current second sub-block, and then return to step ④-2b to continue until all sub-blocks in both texture images have been processed, obtaining the structural similarity between each sub-block of the former and the corresponding sub-block of the latter; here "=" in k = k + 1 is the assignment symbol.
④-6b, according to the structural similarity between each sub-block of the texture image of the right viewpoint image of the original undistorted stereo image and the corresponding sub-block of the texture image of the right viewpoint image of the distorted stereo image, calculate the image-quality objective evaluation prediction value of the texture image of the right viewpoint image of the distorted stereo image, denoted $Q_{R}^{tex}$.
⑤ Fuse $Q_{L}^{str}$ and $Q_{R}^{str}$ to obtain the image-quality objective evaluation prediction value of the structure images of $S_{dis}$, denoted $Q^{str}$: $Q^{str}=w_s\times Q_{L}^{str}+(1-w_s)\times Q_{R}^{str}$, where $w_s$ represents the weighting proportion of $Q_{L}^{str}$; in this embodiment, $w_s$ = 0.980 for the Ningbo University stereo image library and $w_s$ = 0.629 for the LIVE stereo image library.
Similarly, fuse $Q_{L}^{tex}$ and $Q_{R}^{tex}$ to obtain the image-quality objective evaluation prediction value of the texture images of $S_{dis}$, denoted $Q^{tex}$: $Q^{tex}=w_t\times Q_{L}^{tex}+(1-w_t)\times Q_{R}^{tex}$, where $w_t$ represents the weighting proportion of $Q_{L}^{tex}$; in this embodiment, $w_t$ = 0.888 for the Ningbo University stereo image library and $w_t$ = 0.503 for the LIVE stereo image library.
⑥ Fuse $Q^{str}$ and $Q^{tex}$ to obtain the image-quality objective evaluation prediction value of $S_{dis}$, denoted $Q$: $Q=w\times Q^{str}+(1-w)\times Q^{tex}$, where $w$ represents the weighting proportion of $Q^{str}$; in this embodiment, $w$ = 0.882 for the Ningbo University stereo image library and $w$ = 0.838 for the LIVE stereo image library.
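The two fusion stages above are plain linear combinations and can be sketched in a few lines; the default weights are the Ningbo University values quoted above (for LIVE the text gives w_s = 0.629, w_t = 0.503, w = 0.838):

```python
def fuse(q_str_L, q_str_R, q_tex_L, q_tex_R,
         w_s=0.980, w_t=0.888, w=0.882):
    """Linear fusion of the left/right structure-image and texture-image
    scores into the final objective score Q, per steps 5 and 6 above."""
    q_str = w_s * q_str_L + (1 - w_s) * q_str_R   # structure fusion
    q_tex = w_t * q_tex_L + (1 - w_t) * q_tex_R   # texture fusion
    return w * q_str + (1 - w) * q_tex

Q = fuse(1.0, 1.0, 1.0, 1.0)    # perfect component scores fuse to 1
```

Because each stage is a convex combination, Q stays within the range spanned by the four component scores.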
Here, four common objective parameters for assessing image-quality evaluation methods are used as evaluation indexes: the Pearson linear correlation coefficient (PLCC), the Spearman rank-order correlation coefficient (SROCC), the Kendall rank correlation coefficient (KROCC), and the root-mean-square error (RMSE). Under nonlinear-regression conditions, PLCC and RMSE reflect the accuracy of the objective evaluation results for distorted stereo images, while SROCC and KROCC reflect their monotonicity.
The method of the invention is used to calculate the image-quality objective evaluation prediction value of each distorted stereo image in the Ningbo University stereo image library and in the LIVE stereo image library, and the mean subjective score difference of each distorted stereo image in the two libraries is obtained by an existing subjective evaluation method. The image-quality objective evaluation prediction values calculated by the method of the invention are subjected to five-parameter Logistic-function nonlinear fitting; the higher the PLCC, SROCC and KROCC values and the lower the RMSE value, the better the correlation between the objective evaluation method and the mean subjective score difference. Tables 1, 2, 3 and 4 list the Pearson, Spearman and Kendall correlation coefficients and the root-mean-square error between the image-quality objective evaluation prediction values and the mean subjective score differences of the distorted stereo images obtained by the method of the invention. As can be seen from Tables 1 to 4, the correlation between the final prediction values and the mean subjective score differences is very high, indicating that the objective evaluation results agree well with human subjective perception, which sufficiently demonstrates the effectiveness of the method of the invention.
Fig. 2 shows a scatter plot of the image-quality objective evaluation prediction value versus the mean subjective score difference for each distorted stereo image in the Ningbo University stereo image library obtained by the method of the present invention, and Fig. 3 shows the corresponding scatter plot for the LIVE stereo image library; the more concentrated the scatter points, the better the consistency between the objective evaluation results and subjective perception. As can be seen from Figs. 2 and 3, the scatter plots obtained by the method of the present invention are quite concentrated and match the subjective evaluation data closely.
TABLE 1 Pearson correlation coefficient comparison between the image quality objective evaluation prediction value and the average subjective score difference of the distorted stereoscopic images obtained by the method of the present invention
TABLE 2 comparison of Spearman correlation coefficients between objective evaluation prediction values of image quality and mean subjective score differences for distorted stereo images obtained by the method of the invention
TABLE 3 Kendall correlation coefficient comparison between the image quality objective evaluation prediction value and the average subjective score difference of the distorted stereo image obtained by the method of the present invention
TABLE 4 mean square error comparison between the predicted value of objective evaluation of image quality and the difference of mean subjective score of distorted stereoscopic images obtained by the method of the present invention
Claims (4)
1. An objective stereoscopic image quality evaluation method based on structure-texture separation, characterized in that the processing procedure is as follows:
firstly, respectively implementing structure texture separation on a left viewpoint image and a right viewpoint image of an original undistorted stereo image and a left viewpoint image and a right viewpoint image of a distorted stereo image to be evaluated to obtain respective structure images and texture images;
secondly, obtaining an objective evaluation prediction value of the image quality of the structural image of the left viewpoint image of the distorted stereo image to be evaluated by calculating the gradient similarity between each pixel point in the structural image of the left viewpoint image of the original undistorted stereo image and the corresponding pixel point in the structural image of the left viewpoint image of the distorted stereo image to be evaluated; similarly, obtaining an objective evaluation prediction value of the image quality of the structural image of the right viewpoint image of the distorted stereo image to be evaluated by calculating the gradient similarity between each pixel point in the structural image of the right viewpoint image of the original undistorted stereo image and the corresponding pixel point in the structural image of the right viewpoint image of the distorted stereo image to be evaluated;
thirdly, obtaining an image quality objective evaluation predicted value of the texture image of the left viewpoint image of the distorted stereoscopic image to be evaluated by calculating the structural similarity between each sub-block of size 8×8 in the texture image of the left viewpoint image of the original undistorted stereoscopic image and the corresponding sub-block of size 8×8 in the texture image of the left viewpoint image of the distorted stereoscopic image to be evaluated; similarly, obtaining an image quality objective evaluation predicted value of the texture image of the right viewpoint image of the distorted stereoscopic image to be evaluated by calculating the structural similarity between each sub-block of size 8×8 in the texture image of the right viewpoint image of the original undistorted stereoscopic image and the corresponding sub-block of size 8×8 in the texture image of the right viewpoint image of the distorted stereoscopic image to be evaluated;
fourthly, fusing the image quality objective evaluation predicted values of the structure images of the left viewpoint image and the right viewpoint image of the distorted stereoscopic image to be evaluated to obtain the image quality objective evaluation predicted value of the structure image of the distorted stereoscopic image to be evaluated; similarly, fusing the image quality objective evaluation predicted values of the texture images of the left viewpoint image and the right viewpoint image of the distorted stereoscopic image to be evaluated to obtain the image quality objective evaluation predicted value of the texture image of the distorted stereoscopic image to be evaluated;
and finally, fusing the image quality objective evaluation predicted values of the structure image and the texture image of the distorted stereoscopic image to be evaluated to obtain the image quality objective evaluation predicted value of the distorted stereoscopic image to be evaluated.
2. The objective evaluation method for stereo image quality based on structure texture separation according to claim 1, characterized in that it comprises the following steps:
① let S_org denote the original undistorted stereoscopic image and S_dis denote the distorted stereoscopic image to be evaluated; denote the left viewpoint image of S_org as {L_org(x,y)}, the right viewpoint image of S_org as {R_org(x,y)}, the left viewpoint image of S_dis as {L_dis(x,y)}, and the right viewpoint image of S_dis as {R_dis(x,y)}, wherein (x,y) denotes the coordinate position of a pixel point in the left and right viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of the left and right viewpoint images, H denotes their height, L_org(x,y) denotes the pixel value of the pixel point with coordinate position (x,y) in {L_org(x,y)}, R_org(x,y) denotes the pixel value of the pixel point with coordinate position (x,y) in {R_org(x,y)}, L_dis(x,y) denotes the pixel value of the pixel point with coordinate position (x,y) in {L_dis(x,y)}, and R_dis(x,y) denotes the pixel value of the pixel point with coordinate position (x,y) in {R_dis(x,y)};
② perform structure-texture separation on {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)} respectively to obtain their respective structure images and texture images; denote the structure image and texture image of {L_org(x,y)} correspondingly as {I^str_{L,org}(x,y)} and {I^tex_{L,org}(x,y)}, the structure image and texture image of {R_org(x,y)} correspondingly as {I^str_{R,org}(x,y)} and {I^tex_{R,org}(x,y)}, the structure image and texture image of {L_dis(x,y)} correspondingly as {I^str_{L,dis}(x,y)} and {I^tex_{L,dis}(x,y)}, and the structure image and texture image of {R_dis(x,y)} correspondingly as {I^str_{R,dis}(x,y)} and {I^tex_{R,dis}(x,y)}, wherein I^str_{L,org}(x,y), I^tex_{L,org}(x,y), I^str_{R,org}(x,y), I^tex_{R,org}(x,y), I^str_{L,dis}(x,y), I^tex_{L,dis}(x,y), I^str_{R,dis}(x,y) and I^tex_{R,dis}(x,y) denote the pixel values of the pixel point with coordinate position (x,y) in the corresponding images;
③ calculate the gradient similarity between each pixel point in {I^str_{L,org}(x,y)} and the corresponding pixel point in {I^str_{L,dis}(x,y)}; the gradient similarity between the pixel point with coordinate position (x,y) in {I^str_{L,org}(x,y)} and the pixel point with coordinate position (x,y) in {I^str_{L,dis}(x,y)} is recorded as Q^str_L(x,y),

$$Q^{str}_{L}(x,y)=\frac{2\times m^{str}_{L,org}(x,y)\times m^{str}_{L,dis}(x,y)+C_1}{\left(m^{str}_{L,org}(x,y)\right)^2+\left(m^{str}_{L,dis}(x,y)\right)^2+C_1},$$

wherein m^str_{L,org}(x,y) denotes the gradient magnitude of the pixel point with coordinate position (x,y) in {I^str_{L,org}(x,y)}, obtained from its horizontal direction gradient and vertical direction gradient, m^str_{L,dis}(x,y) denotes the gradient magnitude of the pixel point with coordinate position (x,y) in {I^str_{L,dis}(x,y)}, obtained from its horizontal direction gradient and vertical direction gradient, and C_1 is a control parameter; then, according to the gradient similarity between each pixel point in {I^str_{L,org}(x,y)} and the corresponding pixel point in {I^str_{L,dis}(x,y)}, calculate the image quality objective evaluation predicted value of {I^str_{L,dis}(x,y)}, recorded as Q^str_L;
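The per-pixel gradient similarity above can be sketched as follows. This is an illustrative sketch, not the patent's exact implementation: `np.gradient` stands in for the (unspecified) horizontal/vertical gradient operators, and taking the gradient magnitude as m = sqrt(gx² + gy²) is our assumption.

```python
import numpy as np

def gradient_similarity(ref, dis, C1=1e-4):
    """Per-pixel gradient-magnitude similarity map between two structure images.

    ref, dis: 2-D arrays (structure images of the reference / distorted view).
    C1 is the control parameter stabilizing the ratio.
    """
    gy_r, gx_r = np.gradient(np.asarray(ref, float))
    gy_d, gx_d = np.gradient(np.asarray(dis, float))
    m_r = np.sqrt(gx_r ** 2 + gy_r ** 2)   # gradient magnitude, reference
    m_d = np.sqrt(gx_d ** 2 + gy_d ** 2)   # gradient magnitude, distorted
    return (2 * m_r * m_d + C1) / (m_r ** 2 + m_d ** 2 + C1)

def pooled_score(q):
    # simple average pooling of the per-pixel similarity map
    return float(q.mean())
```

For an undistorted pair (ref equal to dis) the map is identically 1, so the pooled score is 1.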
similarly, calculate the gradient similarity between each pixel point in {I^str_{R,org}(x,y)} and the corresponding pixel point in {I^str_{R,dis}(x,y)}; the gradient similarity between the pixel point with coordinate position (x,y) in {I^str_{R,org}(x,y)} and the pixel point with coordinate position (x,y) in {I^str_{R,dis}(x,y)} is recorded as Q^str_R(x,y),

$$Q^{str}_{R}(x,y)=\frac{2\times m^{str}_{R,org}(x,y)\times m^{str}_{R,dis}(x,y)+C_1}{\left(m^{str}_{R,org}(x,y)\right)^2+\left(m^{str}_{R,dis}(x,y)\right)^2+C_1},$$

wherein m^str_{R,org}(x,y) denotes the gradient magnitude of the pixel point with coordinate position (x,y) in {I^str_{R,org}(x,y)}, obtained from its horizontal direction gradient and vertical direction gradient, and m^str_{R,dis}(x,y) denotes the gradient magnitude of the pixel point with coordinate position (x,y) in {I^str_{R,dis}(x,y)}; then, according to the gradient similarity between each pixel point in {I^str_{R,org}(x,y)} and the corresponding pixel point in {I^str_{R,dis}(x,y)}, calculate the image quality objective evaluation predicted value of {I^str_{R,dis}(x,y)}, recorded as Q^str_R;
④ obtain each sub-block of size 8×8 in {I^tex_{L,org}(x,y)} and the corresponding sub-block of size 8×8 in {I^tex_{L,dis}(x,y)}, and calculate the structural similarity between them; then calculate the image quality objective evaluation predicted value of {I^tex_{L,dis}(x,y)}, recorded as Q^tex_L;
similarly, obtain each sub-block of size 8×8 in {I^tex_{R,org}(x,y)} and the corresponding sub-block of size 8×8 in {I^tex_{R,dis}(x,y)}, and calculate the structural similarity between them; then calculate the image quality objective evaluation predicted value of {I^tex_{R,dis}(x,y)}, recorded as Q^tex_R;
⑤ fuse Q^str_L and Q^str_R to obtain the image quality objective evaluation predicted value of the structure image of S_dis, recorded as Q_str, Q_str = w_s × Q^str_L + (1 − w_s) × Q^str_R, wherein w_s denotes the weight proportion of Q^str_L in Q_str;
similarly, fuse Q^tex_L and Q^tex_R to obtain the image quality objective evaluation predicted value of the texture image of S_dis, recorded as Q_tex, Q_tex = w_t × Q^tex_L + (1 − w_t) × Q^tex_R, wherein w_t denotes the weight proportion of Q^tex_L in Q_tex;
⑥ fuse Q_str and Q_tex to obtain the image quality objective evaluation predicted value of S_dis, recorded as Q, Q = w × Q_str + (1 − w) × Q_tex, wherein w denotes the weight proportion of Q_str in Q.
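The two-level weighted fusion of steps ⑤ and ⑥ can be sketched as follows. This is an illustrative sketch; the numeric scores and weight values are hypothetical, since the claim does not fix w_s, w_t or w here.

```python
def fuse(q_a, q_b, weight):
    """Weighted fusion of two quality scores: weight*q_a + (1-weight)*q_b."""
    return weight * q_a + (1.0 - weight) * q_b

# illustrative left/right scores and weights (not values from the patent)
q_str = fuse(0.92, 0.88, 0.5)    # structure-image score of S_dis (w_s = 0.5)
q_tex = fuse(0.80, 0.84, 0.5)    # texture-image score of S_dis  (w_t = 0.5)
q     = fuse(q_str, q_tex, 0.7)  # final predicted value Q        (w   = 0.7)
```

The same helper serves both fusion levels because each is a convex combination of two scores.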
3. The objective stereoscopic image quality evaluation method based on structure-texture separation according to claim 1, characterized in that in step ②, the acquisition process of the structure image {I^str_{L,org}(x,y)} and the texture image {I^tex_{L,org}(x,y)} of {L_org(x,y)} is as follows:
②-1a, define the pixel point currently to be processed in {L_org(x,y)} as the current pixel point;
②-2a, denote the coordinate position of the current pixel point in {L_org(x,y)} as p; define each pixel point other than the current pixel point in the 21×21 neighborhood window centered on the current pixel point as a neighborhood pixel point; define the block formed by the 9×9 neighborhood window centered on the current pixel point as the current sub-block, denoted {I^p_{L,org}(x_2,y_2)}; define the block formed by the 9×9 neighborhood window centered on each neighborhood pixel point as a neighborhood sub-block, and denote the neighborhood sub-block formed by the 9×9 neighborhood window centered on the neighborhood pixel point with coordinate position q in {L_org(x,y)} as {I^q_{L,org}(x_3,y_3)}; wherein p ∈ Ω, q ∈ Ω, Ω denotes the set of coordinate positions of all pixel points in {L_org(x,y)}, (x_2,y_2) denotes the coordinate position of a pixel point in the current sub-block, 1 ≤ x_2 ≤ 9, 1 ≤ y_2 ≤ 9, I^p_{L,org}(x_2,y_2) denotes the pixel value of the pixel point with coordinate position (x_2,y_2) in the current sub-block, (x_3,y_3) denotes the coordinate position of a pixel point in the neighborhood sub-block, 1 ≤ x_3 ≤ 9, 1 ≤ y_3 ≤ 9, and I^q_{L,org}(x_3,y_3) denotes the pixel value of the pixel point with coordinate position (x_3,y_3) in the neighborhood sub-block;
in the above step ②-2a, for any neighborhood pixel point or any pixel point in the current sub-block whose coordinate position (x,y) in {L_org(x,y)} falls outside the image, the pixel value of the nearest boundary pixel point is assigned to it: if x < 1 and 1 ≤ y ≤ H, the pixel value of the pixel point with coordinate position (1,y) in {L_org(x,y)} is assigned to it; if x > W and 1 ≤ y ≤ H, the pixel value at (W,y) is assigned; if 1 ≤ x ≤ W and y < 1, the pixel value at (x,1) is assigned; if 1 ≤ x ≤ W and y > H, the pixel value at (x,H) is assigned; if x < 1 and y < 1, the pixel value at (1,1) is assigned; if x > W and y < 1, the pixel value at (W,1) is assigned; if x < 1 and y > H, the pixel value at (1,H) is assigned; if x > W and y > H, the pixel value at (W,H) is assigned;
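The eight boundary cases enumerated above amount to replicate (nearest-edge) padding, which can be sketched in one call; `np.pad` with `mode='edge'` is a standard stand-in for this case analysis, shown here on a tiny illustrative image.

```python
import numpy as np

# replicate-boundary padding: out-of-range coordinates take the value of the
# nearest image pixel, matching the eight cases enumerated in the claim
img = np.array([[1, 2],
                [3, 4]], dtype=float)

# pad by 10 pixels on each side (the half-width of the 21x21 window)
padded = np.pad(img, pad_width=10, mode='edge')
```

Corner regions of the padded array hold the corresponding corner pixel values, exactly as the claim's corner cases prescribe.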
②-3a, obtain the feature vector of each pixel point in the current sub-block; the feature vector of the pixel point with coordinate position (x_2,y_2) in the current sub-block is recorded as X^p_{L,org}(x_2,y_2),

$$X^{p}_{L,org}(x_2,y_2)=\left[I^{p}_{L,org}(x_2,y_2),\ \left|\frac{\partial I^{p}_{L,org}(x_2,y_2)}{\partial x}\right|,\ \left|\frac{\partial I^{p}_{L,org}(x_2,y_2)}{\partial y}\right|,\ \left|\frac{\partial^{2} I^{p}_{L,org}(x_2,y_2)}{\partial x^{2}}\right|,\ \left|\frac{\partial^{2} I^{p}_{L,org}(x_2,y_2)}{\partial y^{2}}\right|,\ x_2,\ y_2\right],$$

wherein X^p_{L,org}(x_2,y_2) has a dimension of 7, the symbol "[ ]" is a vector representation symbol, the symbol "| |" is an absolute value symbol, I^p_{L,org}(x_2,y_2) denotes the pixel value of the pixel point with coordinate position (x_2,y_2) in the current sub-block, ∂I^p_{L,org}(x_2,y_2)/∂x is the first partial derivative of I^p_{L,org}(x_2,y_2) in the horizontal direction, ∂I^p_{L,org}(x_2,y_2)/∂y is the first partial derivative in the vertical direction, ∂²I^p_{L,org}(x_2,y_2)/∂x² is the second partial derivative in the horizontal direction, and ∂²I^p_{L,org}(x_2,y_2)/∂y² is the second partial derivative in the vertical direction;
②-4a, according to the feature vector of each pixel point in the current sub-block, calculate the covariance matrix of the current sub-block, recorded as C^p_{L,org},

$$C^{p}_{L,org}=\frac{1}{7\times 7-1}\sum_{x_2=1}^{9}\sum_{y_2=1}^{9}\left(X^{p}_{L,org}(x_2,y_2)-\mu^{p}_{L,org}\right)\left(X^{p}_{L,org}(x_2,y_2)-\mu^{p}_{L,org}\right)^{T},$$

wherein C^p_{L,org} has a dimension of 7×7, μ^p_{L,org} denotes the mean vector of the feature vectors of all pixel points in the current sub-block, and (X^p_{L,org}(x_2,y_2) − μ^p_{L,org})^T is the transposed vector of (X^p_{L,org}(x_2,y_2) − μ^p_{L,org});
②-5a, perform Cholesky decomposition on the covariance matrix C^p_{L,org} of the current sub-block, C^p_{L,org} = L L^T, and obtain the Sigma feature set of the current sub-block, recorded as S^p_{L,org},

$$S^{p}_{L,org}=\left[\sqrt{10}\times L^{(1)},\ldots,\sqrt{10}\times L^{(i')},\ldots,\sqrt{10}\times L^{(7)},\ -\sqrt{10}\times L^{(1)},\ldots,-\sqrt{10}\times L^{(i')},\ldots,-\sqrt{10}\times L^{(7)},\ \mu^{p}_{L,org}\right],$$

wherein L^T is the transposed matrix of L, S^p_{L,org} has a dimension of 7×15, the symbol "[ ]" is a vector representation symbol, 1 ≤ i' ≤ 7, L^(1) denotes the 1st column vector of L, L^(i') denotes the i'-th column vector of L, and L^(7) denotes the 7th column vector of L;
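Steps ②-3a to ②-5a can be sketched as follows for one 9×9 patch. This is an illustrative sketch, not the patent's exact implementation: `np.gradient` stands in for the derivative operators, the 1/(7×7−1) normalization follows the claim as written, and the small diagonal jitter before the Cholesky factorization is our own numerical safeguard.

```python
import numpy as np

def sigma_feature_set(patch):
    """Sigma feature set of a 9x9 patch (steps 2-3a to 2-5a, sketched).

    Per-pixel features: [value, |dI/dx|, |dI/dy|, |d2I/dx2|, |d2I/dy2|, x, y].
    Returns the 7x15 set [sqrt(10)*L cols, -sqrt(10)*L cols, mean vector].
    """
    patch = np.asarray(patch, float)
    gy, gx = np.gradient(patch)          # first partial derivatives
    gyy, _ = np.gradient(gy)             # second derivative, vertical
    _, gxx = np.gradient(gx)             # second derivative, horizontal
    ys, xs = np.mgrid[1:10, 1:10]        # 1-based pixel coordinates
    feats = np.stack([patch, np.abs(gx), np.abs(gy),
                      np.abs(gxx), np.abs(gyy),
                      xs.astype(float), ys.astype(float)], axis=-1)
    X = feats.reshape(-1, 7)             # 81 feature vectors of dimension 7
    mu = X.mean(axis=0)
    D = X - mu
    C = (D.T @ D) / (7 * 7 - 1)          # normalization as written in the claim
    L = np.linalg.cholesky(C + 1e-8 * np.eye(7))   # jitter: our assumption
    S = np.hstack([np.sqrt(10) * L, -np.sqrt(10) * L, mu.reshape(7, 1)])
    return S                              # dimension 7x15
```

The returned 7×15 array matches the dimension stated for S^p_{L,org} in the claim.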
②-6a, adopting the same operations as in steps ②-3a to ②-5a, obtain the Sigma feature set of the neighborhood sub-block formed by the 9×9 neighborhood window centered on each neighborhood pixel point; denote the Sigma feature set of the neighborhood sub-block {I^q_{L,org}(x_3,y_3)} as S^q_{L,org}, which likewise has a dimension of 7×15;
②-7a, according to the Sigma feature set S^p_{L,org} of the current sub-block and the Sigma feature set of the neighborhood sub-block formed by the 9×9 neighborhood window centered on each neighborhood pixel point, acquire the structure information of the current pixel point, recorded as I^str_{L,org}(p),

$$I^{str}_{L,org}(p)=\frac{\displaystyle\sum_{q\in N'(p)}\exp\left(-\frac{\left\|S^{p}_{L,org}-S^{q}_{L,org}\right\|^{2}}{2\sigma^{2}}\right)\times L_{org}(q)}{\displaystyle\sum_{q\in N'(p)}\exp\left(-\frac{\left\|S^{p}_{L,org}-S^{q}_{L,org}\right\|^{2}}{2\sigma^{2}}\right)},$$

wherein N'(p) denotes the set of coordinate positions in {L_org(x,y)} of all neighborhood pixel points in the 21×21 neighborhood window centered on the current pixel point, exp() denotes the exponential function with base e, e = 2.71828183, σ denotes the standard deviation of the Gaussian function, the symbol "‖ ‖" is the Euclidean distance calculation symbol, and L_org(q) denotes the pixel value of the pixel point with coordinate position q in {L_org(x,y)};
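The weighted average above can be sketched for a single pixel as follows. This is an illustrative sketch assuming the Sigma feature sets have already been computed; the function name and the σ value are our own, and the squared Euclidean distance between the 7×15 sets is taken as the sum of squared entry differences.

```python
import numpy as np

def structure_value(S_p, neighbor_sets, neighbor_values, sigma=0.5):
    """Structure information of one pixel point (step 2-7a, sketched).

    S_p             : Sigma feature set of the current sub-block (7x15 array).
    neighbor_sets   : Sigma feature sets S_q of the neighborhood sub-blocks.
    neighbor_values : pixel values L_org(q) of the neighborhood pixel points.
    sigma           : Gaussian standard deviation (illustrative value).
    """
    w = np.array([np.exp(-np.sum((S_p - S_q) ** 2) / (2 * sigma ** 2))
                  for S_q in neighbor_sets])
    return float(np.sum(w * np.asarray(neighbor_values, float)) / np.sum(w))
```

Per step ②-8a, the texture information of the pixel then follows by subtracting this structure value from the pixel value itself.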
②-8a, according to the structure information I^str_{L,org}(p) of the current pixel point, obtain the texture information of the current pixel point, recorded as I^tex_{L,org}(p), I^tex_{L,org}(p) = L_org(p) − I^str_{L,org}(p), wherein L_org(p) denotes the pixel value of the current pixel point;
②-9a, take the next pixel point to be processed in {L_org(x,y)} as the current pixel point, and return to step ②-2a to continue until all pixel points in {L_org(x,y)} have been processed, thereby obtaining the structure information and texture information of each pixel point in {L_org(x,y)}; the structure information of all pixel points in {L_org(x,y)} constitutes the structure image {I^str_{L,org}(x,y)} of {L_org(x,y)}, and the texture information of all pixel points constitutes the texture image {I^tex_{L,org}(x,y)} of {L_org(x,y)}.
4. The objective stereoscopic image quality evaluation method based on structure-texture separation according to claim 2 or 3, characterized in that in step ④, the acquisition process of the image quality objective evaluation predicted value Q^tex_L of {I^tex_{L,dis}(x,y)} is as follows:
④-1a, divide {I^tex_{L,org}(x,y)} and {I^tex_{L,dis}(x,y)} respectively into non-overlapping sub-blocks of size 8×8; define the k-th sub-block currently to be processed in {I^tex_{L,org}(x,y)} as the current first sub-block, and define the k-th sub-block currently to be processed in {I^tex_{L,dis}(x,y)} as the current second sub-block, wherein the initial value of k is 1;
④-2a, denote the current first sub-block as {f_{L_org,k}(x_4,y_4)} and the current second sub-block as {f_{L_dis,k}(x_4,y_4)}, wherein (x_4,y_4) denotes the coordinate position of a pixel point in the current first sub-block and the current second sub-block, 1 ≤ x_4 ≤ 8, 1 ≤ y_4 ≤ 8, f_{L_org,k}(x_4,y_4) denotes the pixel value of the pixel point with coordinate position (x_4,y_4) in the current first sub-block, and f_{L_dis,k}(x_4,y_4) denotes the pixel value of the pixel point with coordinate position (x_4,y_4) in the current second sub-block;
④-3a, calculate the mean and standard deviation of the current first sub-block, correspondingly recorded as μ_{L_org,k} and σ_{L_org,k},

$$\mu_{L_{org},k}=\frac{\displaystyle\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{L_{org},k}(x_4,y_4)}{64},\qquad \sigma_{L_{org},k}=\sqrt{\frac{\displaystyle\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{L_{org},k}(x_4,y_4)-\mu_{L_{org},k}\right)^{2}}{64}};$$
likewise, calculate the mean and standard deviation of the current second sub-block, correspondingly recorded as μ_{L_dis,k} and σ_{L_dis,k},

$$\mu_{L_{dis},k}=\frac{\displaystyle\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{L_{dis},k}(x_4,y_4)}{64},\qquad \sigma_{L_{dis},k}=\sqrt{\frac{\displaystyle\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{L_{dis},k}(x_4,y_4)-\mu_{L_{dis},k}\right)^{2}}{64}};$$
④-4a, calculate the structural similarity between the current first sub-block and the current second sub-block, recorded as Q^tex_{L,k},

$$Q^{tex}_{L,k}=\frac{4\times\left(\sigma_{L_{org},k}\times\sigma_{L_{dis},k}\right)\times\left(\mu_{L_{org},k}\times\mu_{L_{dis},k}\right)+C_2}{\left(\left(\sigma_{L_{org},k}\right)^{2}+\left(\sigma_{L_{dis},k}\right)^{2}\right)+\left(\left(\mu_{L_{org},k}\right)^{2}+\left(\mu_{L_{dis},k}\right)^{2}\right)+C_2},$$

wherein C_2 is a control parameter;
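Steps ④-3a and ④-4a for one pair of 8×8 sub-blocks can be sketched as follows. This is an illustrative sketch following the formula as written in the claim (note its denominator is a sum of the variance and mean terms, unlike the product form of standard SSIM); the function name and the C2 value are our own.

```python
import numpy as np

def block_similarity(b_org, b_dis, C2=1e-4):
    """Structural similarity of one pair of 8x8 texture sub-blocks."""
    b_org = np.asarray(b_org, float)
    b_dis = np.asarray(b_dis, float)
    mu_o, mu_d = b_org.mean(), b_dis.mean()
    # population standard deviation (divide by 64), as in step 4-3a
    sd_o = np.sqrt(((b_org - mu_o) ** 2).mean())
    sd_d = np.sqrt(((b_dis - mu_d) ** 2).mean())
    num = 4 * (sd_o * sd_d) * (mu_o * mu_d) + C2
    den = (sd_o ** 2 + sd_d ** 2) + (mu_o ** 2 + mu_d ** 2) + C2
    return float(num / den)
```

The measure is symmetric in its two arguments, and for two all-zero blocks both numerator and denominator reduce to C2, giving 1.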
④-5a, let k = k + 1, take the next sub-block to be processed in {I^tex_{L,org}(x,y)} as the current first sub-block and the next sub-block to be processed in {I^tex_{L,dis}(x,y)} as the current second sub-block, and return to step ④-2a to continue until all sub-blocks in {I^tex_{L,org}(x,y)} and {I^tex_{L,dis}(x,y)} have been processed, thereby obtaining the structural similarity between each sub-block in {I^tex_{L,org}(x,y)} and the corresponding sub-block in {I^tex_{L,dis}(x,y)}, wherein "=" in k = k + 1 is an assignment symbol;
④-6a, according to the structural similarity between each sub-block in {I^tex_{L,org}(x,y)} and the corresponding sub-block in {I^tex_{L,dis}(x,y)}, calculate the image quality objective evaluation predicted value of {I^tex_{L,dis}(x,y)}, recorded as Q^tex_L.
In step ④, the acquisition process of the image quality objective evaluation predicted value Q^tex_R of {I^tex_{R,dis}(x,y)} is as follows:
④-1b, divide {I^tex_{R,org}(x,y)} and {I^tex_{R,dis}(x,y)} respectively into non-overlapping sub-blocks of size 8×8; define the k-th sub-block currently to be processed in {I^tex_{R,org}(x,y)} as the current first sub-block, and define the k-th sub-block currently to be processed in {I^tex_{R,dis}(x,y)} as the current second sub-block, wherein the initial value of k is 1;
④-2b. Record the current first sub-block as $f_{R_{org},k}$ and the current second sub-block as $f_{R_{dis},k}$, wherein $(x_4,y_4)$ denotes the coordinate position of a pixel point in the two sub-blocks, $1\le x_4\le 8$, $1\le y_4\le 8$, $f_{R_{org},k}(x_4,y_4)$ denotes the pixel value of the pixel point whose coordinate position is $(x_4,y_4)$ in $f_{R_{org},k}$, and $f_{R_{dis},k}(x_4,y_4)$ denotes the pixel value of the pixel point whose coordinate position is $(x_4,y_4)$ in $f_{R_{dis},k}$;
④-3b. Calculate the mean and standard deviation of the current first sub-block $f_{R_{org},k}$, correspondingly recorded as $\mu_{R_{org},k}$ and $\sigma_{R_{org},k}$: $\mu_{R_{org},k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{R_{org},k}(x_4,y_4)}{64}$, $\sigma_{R_{org},k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{R_{org},k}(x_4,y_4)-\mu_{R_{org},k}\right)^2}{64}}$;
Likewise, calculate the mean and standard deviation of the current second sub-block $f_{R_{dis},k}$, correspondingly recorded as $\mu_{R_{dis},k}$ and $\sigma_{R_{dis},k}$: $\mu_{R_{dis},k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{R_{dis},k}(x_4,y_4)}{64}$, $\sigma_{R_{dis},k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{R_{dis},k}(x_4,y_4)-\mu_{R_{dis},k}\right)^2}{64}}$;
④-4b. Calculate the structural similarity between the current first sub-block $f_{R_{org},k}$ and the current second sub-block $f_{R_{dis},k}$, recorded as $Q_{R,k}^{tex}$: $Q_{R,k}^{tex}=\frac{4\times(\sigma_{R_{org},k}\times\sigma_{R_{dis},k})\times(\mu_{R_{org},k}\times\mu_{R_{dis},k})+C_2}{\left((\sigma_{R_{org},k})^2+(\sigma_{R_{dis},k})^2\right)+\left((\mu_{R_{org},k})^2+(\mu_{R_{dis},k})^2\right)+C_2}$, wherein $C_2$ is a control parameter;
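The structural-similarity quotient above is an SSIM-style comparison of the two sub-blocks' statistics. A minimal Python sketch, assuming the means and standard deviations are already computed; the function name `q_tex` and the value used for the control parameter $C_2$ are illustrative (the patent does not fix a value here):

```python
def q_tex(mu_org, sigma_org, mu_dis, sigma_dis, c2=0.03):
    # Numerator: 4 * (sigma_org * sigma_dis) * (mu_org * mu_dis) + C2;
    # denominator: the contrast and luminance terms summed, plus C2,
    # following the quotient as written above.
    num = 4.0 * (sigma_org * sigma_dis) * (mu_org * mu_dis) + c2
    den = (sigma_org ** 2 + sigma_dis ** 2) + (mu_org ** 2 + mu_dis ** 2) + c2
    return num / den

# Two blocks with identical statistics:
print(q_tex(0.5, 0.5, 0.5, 0.5))
```

When both blocks have zero mean and zero deviation the quotient degenerates to $C_2/C_2 = 1$, which is why $C_2$ is needed as a stabilizing control parameter.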
④-5b. Let k = k+1, take the next sub-block to be processed as the current first sub-block and the next sub-block to be processed as the current second sub-block, then return to step ④-2b and continue until all the sub-blocks in the two images have been processed, obtaining the structural similarity between each sub-block of the one image and the corresponding sub-block of the other, wherein the "=" in k = k+1 is an assignment symbol;
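Taken together, the steps above tile both images into non-overlapping 8×8 sub-blocks and score each aligned pair. A self-contained Python sketch of that loop (function names and the $C_2$ value are illustrative, not from the patent):

```python
import math

def block_stats(block):
    # Population mean and standard deviation over an 8x8 sub-block (divisor 64).
    mu = sum(sum(row) for row in block) / 64.0
    sigma = math.sqrt(sum((v - mu) ** 2 for row in block for v in row) / 64.0)
    return mu, sigma

def tex_similarity(img_org, img_dis, c2=0.03):
    # Tile both images into non-overlapping 8x8 sub-blocks (partial edge tiles
    # are dropped) and score each aligned pair with the SSIM-style quotient.
    h, w = len(img_org), len(img_org[0])
    scores = []
    for by in range(0, h - 7, 8):
        for bx in range(0, w - 7, 8):
            b_org = [row[bx:bx + 8] for row in img_org[by:by + 8]]
            b_dis = [row[bx:bx + 8] for row in img_dis[by:by + 8]]
            mu_o, s_o = block_stats(b_org)
            mu_d, s_d = block_stats(b_dis)
            num = 4.0 * (s_o * s_d) * (mu_o * mu_d) + c2
            den = (s_o ** 2 + s_d ** 2) + (mu_o ** 2 + mu_d ** 2) + c2
            scores.append(num / den)
    return scores  # one similarity value per sub-block index k

# An 8-row by 16-column image pair tiles into two 8x8 sub-blocks.
org = [[float(x) for x in range(16)] for _ in range(8)]
print(len(tex_similarity(org, org)))  # 2
```

The returned list plays the role of the per-sub-block similarities that step ④-6b then aggregates into the objective quality prediction value.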
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410105777.4A CN103903259A (en) | 2014-03-20 | 2014-03-20 | Objective three-dimensional image quality evaluation method based on structure and texture separation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410105777.4A CN103903259A (en) | 2014-03-20 | 2014-03-20 | Objective three-dimensional image quality evaluation method based on structure and texture separation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103903259A true CN103903259A (en) | 2014-07-02 |
Family
ID=50994566
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410105777.4A Pending CN103903259A (en) | 2014-03-20 | 2014-03-20 | Objective three-dimensional image quality evaluation method based on structure and texture separation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103903259A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105931257A (en) * | 2016-06-12 | 2016-09-07 | 西安电子科技大学 | SAR image quality evaluation method based on texture feature and structural similarity |
CN106780432A (en) * | 2016-11-14 | 2017-05-31 | 浙江科技学院 | A kind of objective evaluation method for quality of stereo images based on sparse features similarity |
CN109887023A (en) * | 2019-01-11 | 2019-06-14 | 杭州电子科技大学 | A kind of binocular fusion stereo image quality evaluation method based on weighted gradient amplitude |
CN110363753A (en) * | 2019-07-11 | 2019-10-22 | 北京字节跳动网络技术有限公司 | Image quality measure method, apparatus and electronic equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000278710A (en) * | 1999-03-26 | 2000-10-06 | Ricoh Co Ltd | Device for evaluating binocular stereoscopic vision picture |
CN102075786A (en) * | 2011-01-19 | 2011-05-25 | 宁波大学 | Method for objectively evaluating image quality |
CN102142145A (en) * | 2011-03-22 | 2011-08-03 | 宁波大学 | Image quality objective evaluation method based on human eye visual characteristics |
CN102209257A (en) * | 2011-06-17 | 2011-10-05 | 宁波大学 | Stereo image quality objective evaluation method |
CN102333233A (en) * | 2011-09-23 | 2012-01-25 | 宁波大学 | Stereo image quality objective evaluation method based on visual perception |
CN102521825A (en) * | 2011-11-16 | 2012-06-27 | 宁波大学 | Three-dimensional image quality objective evaluation method based on zero watermark |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000278710A (en) * | 1999-03-26 | 2000-10-06 | Ricoh Co Ltd | Device for evaluating binocular stereoscopic vision picture |
CN102075786A (en) * | 2011-01-19 | 2011-05-25 | 宁波大学 | Method for objectively evaluating image quality |
CN102142145A (en) * | 2011-03-22 | 2011-08-03 | 宁波大学 | Image quality objective evaluation method based on human eye visual characteristics |
CN102209257A (en) * | 2011-06-17 | 2011-10-05 | 宁波大学 | Stereo image quality objective evaluation method |
CN102333233A (en) * | 2011-09-23 | 2012-01-25 | 宁波大学 | Stereo image quality objective evaluation method based on visual perception |
CN102521825A (en) * | 2011-11-16 | 2012-06-27 | 宁波大学 | Three-dimensional image quality objective evaluation method based on zero watermark |
Non-Patent Citations (5)
Title |
---|
KEMENG LI et al.: "Objective quality assessment for stereoscopic images based on structure-texture decomposition", 《WSEAS TRANSACTIONS ON COMPUTERS》, 31 January 2014 (2014-01-31) *
L. KARACAN et al.: "Structure-preserving image smoothing via region covariances", 《ACM TRANSACTIONS ON GRAPHICS》, vol. 32, no. 6, 1 November 2013 (2013-11-01), XP058033898, DOI: doi:10.1145/2508363.2508403 *
M. SOLH et al.: "MIQM: a multicamera image quality measure", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》, vol. 21, no. 9, 22 May 2012 (2012-05-22) *
WUFENG XUE et al.: "Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》, vol. 23, no. 2, 3 December 2013 (2013-12-03) *
JIN XIN et al.: "Adaptive image quality assessment based on structural similarity", 《Journal of Optoelectronics·Laser》, vol. 25, no. 2, 28 February 2014 (2014-02-28) *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105931257A (en) * | 2016-06-12 | 2016-09-07 | 西安电子科技大学 | SAR image quality evaluation method based on texture feature and structural similarity |
CN105931257B (en) * | 2016-06-12 | 2018-08-31 | 西安电子科技大学 | SAR image method for evaluating quality based on textural characteristics and structural similarity |
CN106780432A (en) * | 2016-11-14 | 2017-05-31 | 浙江科技学院 | A kind of objective evaluation method for quality of stereo images based on sparse features similarity |
CN106780432B (en) * | 2016-11-14 | 2019-05-28 | 浙江科技学院 | A kind of objective evaluation method for quality of stereo images based on sparse features similarity |
CN109887023A (en) * | 2019-01-11 | 2019-06-14 | 杭州电子科技大学 | A kind of binocular fusion stereo image quality evaluation method based on weighted gradient amplitude |
CN110363753A (en) * | 2019-07-11 | 2019-10-22 | 北京字节跳动网络技术有限公司 | Image quality measure method, apparatus and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103581661B (en) | Method for evaluating visual comfort degree of three-dimensional image | |
CN104036501B (en) | A kind of objective evaluation method for quality of stereo images based on rarefaction representation | |
CN102209257B (en) | Stereo image quality objective evaluation method | |
CN103347196B (en) | Method for evaluating stereo image vision comfort level based on machine learning | |
CN105282543B (en) | Total blindness three-dimensional image quality objective evaluation method based on three-dimensional visual perception | |
CN104581143B (en) | A kind of based on machine learning without with reference to objective evaluation method for quality of stereo images | |
CN102708567B (en) | Visual perception-based three-dimensional image quality objective evaluation method | |
CN103136748B (en) | The objective evaluation method for quality of stereo images of a kind of feature based figure | |
CN103413298B (en) | A kind of objective evaluation method for quality of stereo images of view-based access control model characteristic | |
CN105376563B (en) | No-reference three-dimensional image quality evaluation method based on binocular fusion feature similarity | |
CN104036502B (en) | A kind of without with reference to fuzzy distortion stereo image quality evaluation methodology | |
CN104658001A (en) | Non-reference asymmetric distorted stereo image objective quality assessment method | |
CN102521825B (en) | Three-dimensional image quality objective evaluation method based on zero watermark | |
CN102903107B (en) | Three-dimensional picture quality objective evaluation method based on feature fusion | |
CN104902268B (en) | Based on local tertiary mode without with reference to three-dimensional image objective quality evaluation method | |
CN105357519B (en) | Quality objective evaluation method for three-dimensional image without reference based on self-similarity characteristic | |
CN104408716A (en) | Three-dimensional image quality objective evaluation method based on visual fidelity | |
CN102547368A (en) | Objective evaluation method for quality of stereo images | |
CN103903259A (en) | Objective three-dimensional image quality evaluation method based on structure and texture separation | |
CN107360416A (en) | Stereo image quality evaluation method based on local multivariate Gaussian description | |
CN103200420B (en) | Three-dimensional picture quality objective evaluation method based on three-dimensional visual attention | |
CN103369348B (en) | Three-dimensional image quality objective evaluation method based on regional importance classification | |
CN102999911B (en) | Three-dimensional image quality objective evaluation method based on energy diagrams | |
CN102737380B (en) | Stereo image quality objective evaluation method based on gradient structure tensor | |
CN103914835A (en) | Non-reference quality evaluation method for fuzzy distortion three-dimensional images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20140702 |