CN103903259A - Objective three-dimensional image quality evaluation method based on structure and texture separation - Google Patents


Publication number
CN103903259A
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410105777.4A
Other languages
Chinese (zh)
Inventor
邵枫
李柯蒙
李福翠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo University
Original Assignee
Ningbo University
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201410105777.4A priority Critical patent/CN103903259A/en
Publication of CN103903259A publication Critical patent/CN103903259A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an objective three-dimensional image quality evaluation method based on structure and texture separation. First, structure-texture separation is performed on the left-viewpoint and right-viewpoint images of the original undistorted stereoscopic image and of the distorted stereoscopic image to be evaluated, yielding a structural image and a texture image for each viewpoint. Gradient similarity is then used to evaluate the structural images of the left- and right-viewpoint images, structural similarity is used to evaluate the texture images of the left- and right-viewpoint images, and the objective image quality evaluation prediction value of the distorted stereoscopic image to be evaluated is obtained by fusion. The method has the advantage that the structural and texture images obtained by decomposition well characterize the influence of image structure and texture information on image quality, so the evaluation results conform more closely to the human visual system, effectively improving the correlation between objective evaluation results and subjective perception.

Description

Objective three-dimensional image quality evaluation method based on structure-texture separation
Technical Field
The invention relates to an image quality evaluation method, and in particular to an objective three-dimensional image quality evaluation method based on structure-texture separation.
Background
With the rapid development of image coding and stereoscopic display technology, stereoscopic imaging has attracted increasingly wide attention and application and has become a current research hotspot. Stereoscopic imaging exploits the binocular parallax principle of the human eyes: the left-viewpoint and right-viewpoint images of the same scene are received independently by the two eyes, and the brain fuses them into binocular parallax, producing a stereoscopic image with a sense of depth and realism. Because of the influence of the acquisition system and of storage, compression and transmission equipment, a series of distortions is inevitably introduced into stereoscopic images; and compared with a single-channel image, a stereoscopic image must maintain the image quality of two channels simultaneously, so quality evaluation of stereoscopic images is of great significance. However, there is currently no effective objective method for evaluating the quality of stereoscopic images. Establishing an effective objective quality evaluation model for stereoscopic images is therefore of great importance.
Existing objective methods for stereoscopic image quality either apply a planar image quality metric directly to the stereoscopic image, or evaluate depth perception through the quality of a disparity map. However, the fusion process that produces the stereoscopic effect is not a simple extension of planar image quality evaluation, human eyes do not view the disparity map directly, and evaluating depth perception through disparity-map quality is not very accurate. Therefore, how to effectively simulate the binocular stereo-perception process during quality evaluation, and how to analyze the mechanisms by which different distortion types affect perceived stereoscopic quality so that the evaluation results objectively reflect the human visual system, are problems that remain to be studied and solved in objective stereoscopic image quality evaluation.
Disclosure of Invention
The object of the invention is to provide an objective three-dimensional image quality evaluation method based on structure-texture separation that can effectively improve the correlation between objective evaluation results and subjective perception.
The technical scheme adopted by the invention to solve the above technical problem is as follows: an objective three-dimensional image quality evaluation method based on structure-texture separation, whose processing procedure is as follows:
first, structure-texture separation is applied to the left-viewpoint and right-viewpoint images of the original undistorted stereoscopic image and to the left-viewpoint and right-viewpoint images of the distorted stereoscopic image to be evaluated, yielding a structural image and a texture image for each;
second, the objective image quality evaluation prediction value of the structural image of the left-viewpoint image of the distorted stereoscopic image is obtained by computing the gradient similarity between each pixel in the structural image of the left-viewpoint image of the original undistorted stereoscopic image and the corresponding pixel in the structural image of the left-viewpoint image of the distorted stereoscopic image; the prediction value for the structural image of the right-viewpoint image is obtained in the same way;
third, the objective image quality evaluation prediction value of the texture image of the left-viewpoint image of the distorted stereoscopic image is obtained by computing the structural similarity between each 8×8 sub-block in the texture image of the left-viewpoint image of the original undistorted stereoscopic image and the corresponding 8×8 sub-block in the texture image of the left-viewpoint image of the distorted stereoscopic image; the prediction value for the texture image of the right-viewpoint image is obtained in the same way;
fourth, the prediction values of the structural images of the left- and right-viewpoint images are fused to obtain the objective image quality evaluation prediction value of the structural image of the distorted stereoscopic image; likewise, the prediction values of the texture images of the left- and right-viewpoint images are fused to obtain the prediction value of the texture image of the distorted stereoscopic image;
finally, the prediction values of the structural image and the texture image of the distorted stereoscopic image are fused to obtain the objective image quality evaluation prediction value of the distorted stereoscopic image to be evaluated.
The objective evaluation method for the quality of the stereo image based on the structure texture separation specifically comprises the following steps:
① Let S_org denote the original undistorted stereoscopic image and S_dis the distorted stereoscopic image to be evaluated. Denote the left-viewpoint image of S_org as {L_org(x,y)}, the right-viewpoint image of S_org as {R_org(x,y)}, the left-viewpoint image of S_dis as {L_dis(x,y)}, and the right-viewpoint image of S_dis as {R_dis(x,y)}, where (x,y) denotes the coordinate position of a pixel in the left- and right-viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of the left- and right-viewpoint images, H denotes their height, and L_org(x,y), R_org(x,y), L_dis(x,y) and R_dis(x,y) denote the pixel values of the pixels with coordinate position (x,y) in {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)}, respectively;
② Perform structure-texture separation on {L_org(x,y)}, {R_org(x,y)}, {L_dis(x,y)} and {R_dis(x,y)} respectively, obtaining the structural image and texture image of each. Denote the structural image and texture image of {L_org(x,y)} as {I^str_{L,org}(x,y)} and {I^tex_{L,org}(x,y)}, those of {R_org(x,y)} as {I^str_{R,org}(x,y)} and {I^tex_{R,org}(x,y)}, those of {L_dis(x,y)} as {I^str_{L,dis}(x,y)} and {I^tex_{L,dis}(x,y)}, and those of {R_dis(x,y)} as {I^str_{R,dis}(x,y)} and {I^tex_{R,dis}(x,y)}, where I^str_{L,org}(x,y), I^tex_{L,org}(x,y), I^str_{R,org}(x,y), I^tex_{R,org}(x,y), I^str_{L,dis}(x,y), I^tex_{L,dis}(x,y), I^str_{R,dis}(x,y) and I^tex_{R,dis}(x,y) denote the pixel values of the pixels with coordinate position (x,y) in the respective images;
③ Compute the gradient similarity between each pixel in {I^str_{L,org}(x,y)} and the corresponding pixel in {I^str_{L,dis}(x,y)}. Denote the gradient similarity between the pixel with coordinate position (x,y) in {I^str_{L,org}(x,y)} and the pixel with coordinate position (x,y) in {I^str_{L,dis}(x,y)} as Q^str_L(x,y):

Q^str_L(x,y) = (2 × m^str_{L,org}(x,y) × m^str_{L,dis}(x,y) + C1) / ((m^str_{L,org}(x,y))^2 + (m^str_{L,dis}(x,y))^2 + C1),

where m^str_{L,org}(x,y) = sqrt((gx^str_{L,org}(x,y))^2 + (gy^str_{L,org}(x,y))^2) and m^str_{L,dis}(x,y) = sqrt((gx^str_{L,dis}(x,y))^2 + (gy^str_{L,dis}(x,y))^2); gx^str_{L,org}(x,y) and gy^str_{L,org}(x,y) denote the horizontal and vertical gradients of the pixel with coordinate position (x,y) in {I^str_{L,org}(x,y)}; gx^str_{L,dis}(x,y) and gy^str_{L,dis}(x,y) denote the horizontal and vertical gradients of the pixel with coordinate position (x,y) in {I^str_{L,dis}(x,y)}; and C1 is a control parameter. Then, from the gradient similarities between every pixel in {I^str_{L,org}(x,y)} and the corresponding pixel in {I^str_{L,dis}(x,y)}, compute the objective image quality evaluation prediction value of {I^str_{L,dis}(x,y)}, denoted Q^str_L.
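The left-view gradient-similarity computation of step ③ can be sketched in Python as below. The gradient operator (NumPy's finite differences), the value of C1, and the mean pooling of the per-pixel similarity map into Q^str_L are illustrative assumptions; the text only calls C1 a "control parameter" and does not fix the gradient operator or pooling.

```python
import numpy as np

def gradient_similarity_map(ref, dis, c1=0.0026):
    """Per-pixel gradient similarity between a reference and a distorted
    structural image:
        Q(x,y) = (2*m_ref*m_dis + C1) / (m_ref^2 + m_dis^2 + C1),
    where m = sqrt(gx^2 + gy^2) is the gradient magnitude."""
    ref = np.asarray(ref, dtype=np.float64)
    dis = np.asarray(dis, dtype=np.float64)
    gy_r, gx_r = np.gradient(ref)        # vertical, then horizontal gradient
    gy_d, gx_d = np.gradient(dis)
    m_ref = np.sqrt(gx_r**2 + gy_r**2)   # gradient magnitudes
    m_dis = np.sqrt(gx_d**2 + gy_d**2)
    return (2.0 * m_ref * m_dis + c1) / (m_ref**2 + m_dis**2 + c1)

def structure_quality(ref, dis, c1=0.0026):
    # Pool the per-pixel similarities into one prediction value
    # (mean pooling is an assumption, not stated by the text).
    return float(gradient_similarity_map(ref, dis, c1).mean())
```

For identical images the map is exactly 1 everywhere, so the pooled score is 1; any gradient-magnitude mismatch pulls the score below 1.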
Likewise, compute the gradient similarity between each pixel in {I^str_{R,org}(x,y)} and the corresponding pixel in {I^str_{R,dis}(x,y)}. Denote the gradient similarity between the pixel with coordinate position (x,y) in {I^str_{R,org}(x,y)} and the pixel with coordinate position (x,y) in {I^str_{R,dis}(x,y)} as Q^str_R(x,y):

Q^str_R(x,y) = (2 × m^str_{R,org}(x,y) × m^str_{R,dis}(x,y) + C1) / ((m^str_{R,org}(x,y))^2 + (m^str_{R,dis}(x,y))^2 + C1),

where m^str_{R,org}(x,y) = sqrt((gx^str_{R,org}(x,y))^2 + (gy^str_{R,org}(x,y))^2) and m^str_{R,dis}(x,y) = sqrt((gx^str_{R,dis}(x,y))^2 + (gy^str_{R,dis}(x,y))^2); gx^str_{R,org}(x,y) and gy^str_{R,org}(x,y) denote the horizontal and vertical gradients of the pixel with coordinate position (x,y) in {I^str_{R,org}(x,y)}; gx^str_{R,dis}(x,y) and gy^str_{R,dis}(x,y) denote the horizontal and vertical gradients of the pixel with coordinate position (x,y) in {I^str_{R,dis}(x,y)}; and C1 is a control parameter. Then, from the gradient similarities between every pixel in {I^str_{R,org}(x,y)} and the corresponding pixel in {I^str_{R,dis}(x,y)}, compute the objective image quality evaluation prediction value of {I^str_{R,dis}(x,y)}, denoted Q^str_R.
④ Obtain the structural similarity between each 8×8 sub-block in {I^tex_{L,org}(x,y)} and the corresponding 8×8 sub-block in {I^tex_{L,dis}(x,y)}, and from these compute the objective image quality evaluation prediction value of {I^tex_{L,dis}(x,y)}, denoted Q^tex_L. Likewise, obtain the structural similarity between each 8×8 sub-block in {I^tex_{R,org}(x,y)} and the corresponding 8×8 sub-block in {I^tex_{R,dis}(x,y)}, and from these compute the objective image quality evaluation prediction value of {I^tex_{R,dis}(x,y)}, denoted Q^tex_R.
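A minimal sketch of the step-④ texture comparison, assuming the classic SSIM index (Wang et al.), non-overlapping 8×8 tiling, mean pooling, and 8-bit-range constants; the text only states that 8×8 sub-blocks are compared by structural similarity.

```python
import numpy as np

def ssim_block(a, b, c1=6.5025, c2=58.5225):
    """SSIM index between two equal-size blocks. The constants assume an
    8-bit dynamic range and are illustrative, not the patent's values."""
    a = a.astype(np.float64); b = b.astype(np.float64)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))

def texture_quality(ref, dis, block=8):
    """Mean SSIM over non-overlapping 8x8 sub-blocks of two texture images."""
    h, w = ref.shape
    scores = [ssim_block(ref[i:i+block, j:j+block], dis[i:i+block, j:j+block])
              for i in range(0, h - block + 1, block)
              for j in range(0, w - block + 1, block)]
    return float(np.mean(scores))
```

Identical texture images score exactly 1 in every block; distortion lowers the pooled score.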
⑤ Fuse Q^str_L and Q^str_R to obtain the objective image quality evaluation prediction value of the structural image of S_dis, denoted Q_str: Q_str = w_s × Q^str_L + (1 − w_s) × Q^str_R, where w_s denotes the weight proportion of Q^str_L relative to Q^str_R. Likewise, fuse Q^tex_L and Q^tex_R to obtain the objective image quality evaluation prediction value of the texture image of S_dis, denoted Q_tex: Q_tex = w_t × Q^tex_L + (1 − w_t) × Q^tex_R, where w_t denotes the weight proportion of Q^tex_L relative to Q^tex_R.
⑥ Fuse Q_str and Q_tex to obtain the objective image quality evaluation prediction value of S_dis, denoted Q: Q = w × Q_str + (1 − w) × Q_tex, where w denotes the weight proportion of Q_str.
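The two-level fusion of steps ⑤ and ⑥ amounts to three weighted averages; a sketch with placeholder weights (the actual values of w_s, w_t and w are model parameters not given in this passage):

```python
def fuse(q_l, q_r, weight):
    """Linear fusion used at each stage: Q = w*Q_left + (1-w)*Q_right."""
    return weight * q_l + (1.0 - weight) * q_r

def overall_quality(q_str_l, q_str_r, q_tex_l, q_tex_r,
                    w_s=0.5, w_t=0.5, w=0.5):
    # Placeholder weights, not the patent's trained values.
    q_str = fuse(q_str_l, q_str_r, w_s)   # step 5: structural images
    q_tex = fuse(q_tex_l, q_tex_r, w_t)   # step 5: texture images
    return fuse(q_str, q_tex, w)          # step 6: final prediction Q
```

For example, with all weights 0.5, scores (0.8, 0.6, 0.9, 0.7) fuse to Q_str = 0.7, Q_tex = 0.8 and Q = 0.75.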
In step ②, the structural image {I^str_{L,org}(x,y)} and texture image {I^tex_{L,org}(x,y)} of {L_org(x,y)} are acquired as follows:
②-1a. Define the pixel currently being processed in {L_org(x,y)} as the current pixel;
②-2a. Denote the coordinate position of the current pixel in {L_org(x,y)} as p. Define each pixel other than the current pixel in the 21×21 neighborhood window centered on the current pixel as a neighborhood pixel. Define the block formed by the 9×9 neighborhood window centered on the current pixel as the current sub-block, denoted {I^p_{L,org}(x2,y2)}, and define the block formed by the 9×9 neighborhood window centered on each neighborhood pixel as a neighborhood sub-block; denote the neighborhood sub-block formed by the 9×9 neighborhood window centered on the neighborhood pixel with coordinate position q in {L_org(x,y)} as {I^q_{L,org}(x3,y3)}, where p ∈ Ω, q ∈ Ω, Ω denotes the set of coordinate positions of all pixels in {L_org(x,y)}, (x2,y2) denotes the coordinate position of a pixel within the current sub-block, 1 ≤ x2 ≤ 9, 1 ≤ y2 ≤ 9, I^p_{L,org}(x2,y2) denotes the pixel value of the pixel with coordinate position (x2,y2) in the current sub-block, (x3,y3) denotes the coordinate position of a pixel within {I^q_{L,org}(x3,y3)}, 1 ≤ x3 ≤ 9, 1 ≤ y3 ≤ 9, and I^q_{L,org}(x3,y3) denotes the pixel value of the pixel with coordinate position (x3,y3) in {I^q_{L,org}(x3,y3)};
In step ②-2a, for any neighborhood pixel and any pixel in the current sub-block, suppose its coordinate position in {L_org(x,y)} is (x,y): if x < 1 and 1 ≤ y ≤ H, assign to it the pixel value of the pixel with coordinate position (1,y) in {L_org(x,y)}; if x > W and 1 ≤ y ≤ H, assign the pixel value of the pixel at (W,y); if 1 ≤ x ≤ W and y < 1, assign the pixel value of the pixel at (x,1); if 1 ≤ x ≤ W and y > H, assign the pixel value of the pixel at (x,H); if x < 1 and y < 1, assign the pixel value of the pixel at (1,1); if x > W and y < 1, assign the pixel value of the pixel at (W,1); if x < 1 and y > H, assign the pixel value of the pixel at (1,H); if x > W and y > H, assign the pixel value of the pixel at (W,H);
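These eight boundary cases are exactly nearest-border (replicate) padding, which can be expressed as coordinate clamping; a sketch using the 1-based (x,y) coordinates of the text:

```python
import numpy as np

def pixel_clamped(img, x, y):
    """Out-of-range reads fall back to the nearest border pixel, as the
    eight cases above prescribe. Coordinates are 1-based (x, y); the image
    array is indexed [row, col] = [y-1, x-1]."""
    h, w = img.shape
    xc = min(max(x, 1), w)   # clamp x into [1, W]
    yc = min(max(y, 1), h)   # clamp y into [1, H]
    return img[yc - 1, xc - 1]
```

The same effect can be had in one shot with `np.pad(img, k, mode='edge')` before taking windows.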
②-3a. Obtain the feature vector of each pixel in the current sub-block; denote the feature vector of the pixel with coordinate position (x2,y2) in the current sub-block as X^p_{L,org}(x2,y2):

X^p_{L,org}(x2,y2) = [ I^p_{L,org}(x2,y2), |∂I^p_{L,org}(x2,y2)/∂x|, |∂I^p_{L,org}(x2,y2)/∂y|, |∂²I^p_{L,org}(x2,y2)/∂x²|, |∂²I^p_{L,org}(x2,y2)/∂y²|, x2, y2 ],

where X^p_{L,org}(x2,y2) has dimension 7, the symbol "[ ]" is the vector symbol and "| |" is the absolute-value symbol, I^p_{L,org}(x2,y2) denotes the intensity value of the pixel with coordinate position (x2,y2) in the current sub-block, ∂I^p_{L,org}(x2,y2)/∂x is its first partial derivative in the horizontal direction, ∂I^p_{L,org}(x2,y2)/∂y its first partial derivative in the vertical direction, ∂²I^p_{L,org}(x2,y2)/∂x² its second partial derivative in the horizontal direction, and ∂²I^p_{L,org}(x2,y2)/∂y² its second partial derivative in the vertical direction;
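The step ②-3a feature extraction for one 9×9 sub-block can be sketched as below; `np.gradient` is used here as one possible discrete estimate of the partial derivatives, which the text does not fix.

```python
import numpy as np

def feature_vectors(block):
    """7-D feature vector for every pixel of a sub-block:
    [intensity, |dI/dx|, |dI/dy|, |d2I/dx2|, |d2I/dy2|, x2, y2]."""
    block = np.asarray(block, dtype=np.float64)
    h, w = block.shape
    dy, dx = np.gradient(block)          # first partials (vertical, horizontal)
    dyy, _ = np.gradient(dy)             # second partial in the vertical direction
    _, dxx = np.gradient(dx)             # second partial in the horizontal direction
    y2, x2 = np.mgrid[1:h + 1, 1:w + 1]  # 1-based in-block coordinates
    feats = np.stack([block, np.abs(dx), np.abs(dy),
                      np.abs(dxx), np.abs(dyy),
                      x2.astype(np.float64), y2.astype(np.float64)], axis=-1)
    return feats.reshape(-1, 7)          # one 7-D row per pixel
```

For a 9×9 sub-block this yields an 81×7 matrix, one row per pixel, in row-major pixel order.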
②-4a. From the feature vectors of all pixels in the current sub-block, compute the covariance matrix of the current sub-block, denoted C^p_{L,org}:

C^p_{L,org} = (1/(7×7−1)) Σ_{x2=1..9} Σ_{y2=1..9} (X^p_{L,org}(x2,y2) − μ^p_{L,org}) (X^p_{L,org}(x2,y2) − μ^p_{L,org})^T,

where C^p_{L,org} has dimension 7×7, μ^p_{L,org} denotes the mean vector of the feature vectors of all pixels in the current sub-block, and (X^p_{L,org}(x2,y2) − μ^p_{L,org})^T is the transposed vector of (X^p_{L,org}(x2,y2) − μ^p_{L,org});
②-5a. Perform Cholesky decomposition on the covariance matrix C^p_{L,org} of the current sub-block, C^p_{L,org} = L L^T, and obtain the Sigma feature set of the current sub-block, denoted S^p_{L,org}:

S^p_{L,org} = [ √10 × L^(1), ..., √10 × L^(i'), ..., √10 × L^(7), −√10 × L^(1), ..., −√10 × L^(i'), ..., −√10 × L^(7), μ^p_{L,org} ],

where L^T is the transposed matrix of L, S^p_{L,org} has dimension 7×15, the symbol "[ ]" is the vector symbol, 1 ≤ i' ≤ 7, and L^(1), L^(i') and L^(7) denote the 1st, i'-th and 7th column vectors of L, respectively;
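Steps ②-4a and ②-5a for a single sub-block can be sketched as follows, taking the 81×7 feature matrix as input. The 1/(7×7−1) normalizer is copied from the text as written; the small eps jitter is our own addition so the Cholesky factorization cannot fail on a singular covariance matrix.

```python
import numpy as np

def sigma_feature_set(feats, eps=1e-8):
    """Sigma feature set of one sub-block from its per-pixel 7-D features
    (feats: array of shape (81, 7)). Covariance, Cholesky factor
    C = L L^T, then S = [sqrt(10)*L^(1..7), -sqrt(10)*L^(1..7), mu],
    giving a 7x15 matrix."""
    mu = feats.mean(axis=0)                        # mean feature vector
    centered = feats - mu
    cov = centered.T @ centered / (7 * 7 - 1)      # 7x7 covariance (text's normalizer)
    L = np.linalg.cholesky(cov + eps * np.eye(7))  # lower-triangular factor
    return np.hstack([np.sqrt(10) * L, -np.sqrt(10) * L, mu[:, None]])
```

The last column is the mean vector μ, and columns 8-14 are the negations of columns 1-7, matching the definition above.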
②-6a. Using the same operations as in steps ②-2a to ②-5a, obtain the Sigma feature set of the neighborhood sub-block formed by the 9×9 neighborhood window centered on each neighborhood pixel; denote the Sigma feature set of the neighborhood sub-block centered on the neighborhood pixel with coordinate position q as S^q_{L,org}, which likewise has dimension 7×15;
②-7a. From the Sigma feature set S^p_{L,org} of the current sub-block and the Sigma feature sets of the neighborhood sub-blocks formed by the 9×9 neighborhood windows centered on each neighborhood pixel, obtain the structural information of the current pixel, denoted I^str_{L,org}(p):

I^str_{L,org}(p) = [ Σ_{q∈N'(p)} exp(−‖S^p_{L,org} − S^q_{L,org}‖² / (2σ²)) × L_org(q) ] / [ Σ_{q∈N'(p)} exp(−‖S^p_{L,org} − S^q_{L,org}‖² / (2σ²)) ],

where N'(p) denotes the set of coordinate positions in {L_org(x,y)} of all neighborhood pixels in the 21×21 neighborhood window centered on the current pixel, exp() denotes the exponential function with base e = 2.71828183, σ denotes the standard deviation of the Gaussian function, the symbol "‖ ‖" is the Euclidean distance symbol, and L_org(q) denotes the pixel value of the pixel with coordinate position q in {L_org(x,y)};
② 8a, according to the structural information $I_{L,org}^{str}(p)$ of the current pixel point, obtain the texture information of the current pixel point, denoted $I_{L,org}^{tex}(p)$: $I_{L,org}^{tex}(p)=L_{org}(p)-I_{L,org}^{str}(p)$, where $L_{org}(p)$ denotes the pixel value of the current pixel point;
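Steps ② 7a and ② 8a amount to a feature-weighted smoothing followed by a residual. A minimal sketch under stated assumptions: each pixel is given a flattened per-pixel descriptor standing in for the patent's Sigma feature sets (which are matrices built from sub-block covariances), and `sigma` and the 21 × 21 window match the text:

```python
import numpy as np

def structure_texture_at(L, feats, p, half=10, sigma=1.0):
    """Structural and texture information at pixel p (steps 7a-8a).

    L     : 2-D array of pixel values, stand-in for {L_org(x, y)}
    feats : (H, W, d) array, one descriptor per pixel, an assumed
            simplification of the Sigma feature sets S_{L,org}^p
    half  : half-width of the 21x21 neighborhood window N'(p)
    """
    py, px = p
    H, W = L.shape
    num = den = 0.0
    for qy in range(max(0, py - half), min(H, py + half + 1)):
        for qx in range(max(0, px - half), min(W, px + half + 1)):
            if (qy, qx) == (py, px):
                continue                      # N'(p) excludes p itself
            d2 = float(np.sum((feats[py, px] - feats[qy, qx]) ** 2))
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            num += w * L[qy, qx]
            den += w
    structural = num / den                    # I^str(p): weighted average
    texture = L[py, px] - structural          # I^tex(p) = L(p) - I^str(p)
    return structural, texture
```

On a constant image the structural part reproduces the constant and the texture residual is zero, whatever the feature weights.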
② 9a, take the next pixel point to be processed in {L_org(x, y)} as the current pixel point, then return to step ② 2a and continue until all pixel points in {L_org(x, y)} have been processed, obtaining the structural information and texture information of every pixel point in {L_org(x, y)}; the structural information of all pixel points in {L_org(x, y)} constitutes the structural image of {L_org(x, y)}, denoted $\{I_{L,org}^{str}(x,y)\}$, and the texture information of all pixel points in {L_org(x, y)} constitutes the texture image of {L_org(x, y)}, denoted $\{I_{L,org}^{tex}(x,y)\}$.
Adopting the same operations as steps ② 1a to ② 9a used to acquire the structural image $\{I_{L,org}^{str}(x,y)\}$ and texture image $\{I_{L,org}^{tex}(x,y)\}$ of {L_org(x, y)}, obtain the structural image $\{I_{R,org}^{str}(x,y)\}$ and texture image $\{I_{R,org}^{tex}(x,y)\}$ of {R_org(x, y)}, the structural image $\{I_{L,dis}^{str}(x,y)\}$ and texture image $\{I_{L,dis}^{tex}(x,y)\}$ of {L_dis(x, y)}, and the structural image $\{I_{R,dis}^{str}(x,y)\}$ and texture image $\{I_{R,dis}^{tex}(x,y)\}$ of {R_dis(x, y)}.
In step ④, the acquisition process of the image quality objective evaluation prediction value $Q_{L}^{tex}$ of $\{I_{L,dis}^{tex}(x,y)\}$ is as follows:
④-1a, divide $\{I_{L,org}^{tex}(x,y)\}$ and $\{I_{L,dis}^{tex}(x,y)\}$ respectively into non-overlapping sub-blocks of size 8 × 8; define the current kth sub-block to be processed in $\{I_{L,org}^{tex}(x,y)\}$ as the current first sub-block, and define the current kth sub-block to be processed in $\{I_{L,dis}^{tex}(x,y)\}$ as the current second sub-block, where 1 ≤ k ≤ M, M denotes the total number of 8 × 8 sub-blocks, and k has an initial value of 1;
④-2a, record the current first sub-block as $\{f_{L_{org},k}(x_4,y_4)\}$ and record the current second sub-block as $\{f_{L_{dis},k}(x_4,y_4)\}$, where $(x_4,y_4)$ denotes the coordinate position of a pixel point in $\{f_{L_{org},k}(x_4,y_4)\}$ and $\{f_{L_{dis},k}(x_4,y_4)\}$, $1 \le x_4 \le 8$, $1 \le y_4 \le 8$, $f_{L_{org},k}(x_4,y_4)$ denotes the pixel value of the pixel point with coordinate position $(x_4,y_4)$ in the current first sub-block, and $f_{L_{dis},k}(x_4,y_4)$ denotes the pixel value of the pixel point with coordinate position $(x_4,y_4)$ in the current second sub-block;
④-3a, calculate the mean and standard deviation of the current first sub-block $\{f_{L_{org},k}(x_4,y_4)\}$, correspondingly denoted $\mu_{L_{org},k}$ and $\sigma_{L_{org},k}$:
$$\mu_{L_{org},k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{L_{org},k}(x_4,y_4)}{64},\qquad \sigma_{L_{org},k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{L_{org},k}(x_4,y_4)-\mu_{L_{org},k}\right)^{2}}{64}};$$
likewise, calculate the mean and standard deviation of the current second sub-block $\{f_{L_{dis},k}(x_4,y_4)\}$, correspondingly denoted $\mu_{L_{dis},k}$ and $\sigma_{L_{dis},k}$:
$$\mu_{L_{dis},k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{L_{dis},k}(x_4,y_4)}{64},\qquad \sigma_{L_{dis},k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{L_{dis},k}(x_4,y_4)-\mu_{L_{dis},k}\right)^{2}}{64}};$$
④-4a, calculate the structural similarity between the current first sub-block $\{f_{L_{org},k}(x_4,y_4)\}$ and the current second sub-block $\{f_{L_{dis},k}(x_4,y_4)\}$, denoted $Q_{L,k}^{tex}$:
$$Q_{L,k}^{tex}=\frac{4\times(\sigma_{L_{org},k}\times\sigma_{L_{dis},k})\times(\mu_{L_{org},k}\times\mu_{L_{dis},k})+C_2}{\left((\sigma_{L_{org},k})^{2}+(\sigma_{L_{dis},k})^{2}\right)+\left((\mu_{L_{org},k})^{2}+(\mu_{L_{dis},k})^{2}\right)+C_2},$$
where $C_2$ is a control parameter;
④-5a, let k = k + 1, take the next sub-block to be processed in $\{I_{L,org}^{tex}(x,y)\}$ as the current first sub-block and take the next sub-block to be processed in $\{I_{L,dis}^{tex}(x,y)\}$ as the current second sub-block, then return to step ④-2a and continue until all sub-blocks in $\{I_{L,org}^{tex}(x,y)\}$ and $\{I_{L,dis}^{tex}(x,y)\}$ have been processed, obtaining the structural similarity between each sub-block in $\{I_{L,org}^{tex}(x,y)\}$ and the corresponding sub-block in $\{I_{L,dis}^{tex}(x,y)\}$, where "=" in k = k + 1 is the assignment symbol;
④-6a, according to the structural similarity between each sub-block in $\{I_{L,org}^{tex}(x,y)\}$ and the corresponding sub-block in $\{I_{L,dis}^{tex}(x,y)\}$, calculate the image quality objective evaluation prediction value of $\{I_{L,dis}^{tex}(x,y)\}$, denoted $Q_{L}^{tex}$.
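The block-based texture evaluation of steps ④-1a to ④-6a can be sketched as follows. The per-block score follows the $Q_{L,k}^{tex}$ formula as printed (note its additive denominator); the value of $C_2$ and the pooling rule of step ④-6a are not stated in this excerpt, so a placeholder constant and plain averaging are assumed:

```python
import numpy as np

def texture_quality(tex_org, tex_dis, C2=0.01):
    """Block-based texture-image score: partition both texture images into
    non-overlapping 8x8 sub-blocks, score each corresponding pair, and pool.
    C2 = 0.01 and the mean pooling are assumed placeholders."""
    H, W = tex_org.shape
    scores = []
    for y in range(0, (H // 8) * 8, 8):          # step 4-1a: 8x8 partition
        for x in range(0, (W // 8) * 8, 8):
            bo = tex_org[y:y + 8, x:x + 8].astype(float)
            bd = tex_dis[y:y + 8, x:x + 8].astype(float)
            mu_o, mu_d = bo.mean(), bd.mean()    # step 4-3a: mean ...
            sd_o, sd_d = bo.std(), bd.std()      # ... and population std (/64)
            num = 4.0 * (sd_o * sd_d) * (mu_o * mu_d) + C2
            den = (sd_o ** 2 + sd_d ** 2) + (mu_o ** 2 + mu_d ** 2) + C2
            scores.append(num / den)             # step 4-4a: Q_{L,k}^{tex}
    return float(np.mean(scores))                # step 4-6a: assumed pooling
```

The same routine applies unchanged to the right-viewpoint texture images in steps ④-1b to ④-6b.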
In step ④, the acquisition process of the image quality objective evaluation prediction value $Q_{R}^{tex}$ of $\{I_{R,dis}^{tex}(x,y)\}$ is as follows:
④-1b, divide $\{I_{R,org}^{tex}(x,y)\}$ and $\{I_{R,dis}^{tex}(x,y)\}$ respectively into non-overlapping sub-blocks of size 8 × 8; define the current kth sub-block to be processed in $\{I_{R,org}^{tex}(x,y)\}$ as the current first sub-block, and define the current kth sub-block to be processed in $\{I_{R,dis}^{tex}(x,y)\}$ as the current second sub-block, where 1 ≤ k ≤ M, M denotes the total number of 8 × 8 sub-blocks, and k has an initial value of 1;
④-2b, record the current first sub-block as $\{f_{R_{org},k}(x_4,y_4)\}$ and record the current second sub-block as $\{f_{R_{dis},k}(x_4,y_4)\}$, where $(x_4,y_4)$ denotes the coordinate position of a pixel point in $\{f_{R_{org},k}(x_4,y_4)\}$ and $\{f_{R_{dis},k}(x_4,y_4)\}$, $1 \le x_4 \le 8$, $1 \le y_4 \le 8$, $f_{R_{org},k}(x_4,y_4)$ denotes the pixel value of the pixel point with coordinate position $(x_4,y_4)$ in the current first sub-block, and $f_{R_{dis},k}(x_4,y_4)$ denotes the pixel value of the pixel point with coordinate position $(x_4,y_4)$ in the current second sub-block;
④-3b, calculate the mean and standard deviation of the current first sub-block $\{f_{R_{org},k}(x_4,y_4)\}$, correspondingly denoted $\mu_{R_{org},k}$ and $\sigma_{R_{org},k}$:
$$\mu_{R_{org},k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{R_{org},k}(x_4,y_4)}{64},\qquad \sigma_{R_{org},k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{R_{org},k}(x_4,y_4)-\mu_{R_{org},k}\right)^{2}}{64}};$$
likewise, calculate the mean and standard deviation of the current second sub-block $\{f_{R_{dis},k}(x_4,y_4)\}$, correspondingly denoted $\mu_{R_{dis},k}$ and $\sigma_{R_{dis},k}$:
$$\mu_{R_{dis},k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{R_{dis},k}(x_4,y_4)}{64},\qquad \sigma_{R_{dis},k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{R_{dis},k}(x_4,y_4)-\mu_{R_{dis},k}\right)^{2}}{64}};$$
④-4b, calculate the structural similarity between the current first sub-block $\{f_{R_{org},k}(x_4,y_4)\}$ and the current second sub-block $\{f_{R_{dis},k}(x_4,y_4)\}$, denoted $Q_{R,k}^{tex}$:
$$Q_{R,k}^{tex}=\frac{4\times(\sigma_{R_{org},k}\times\sigma_{R_{dis},k})\times(\mu_{R_{org},k}\times\mu_{R_{dis},k})+C_2}{\left((\sigma_{R_{org},k})^{2}+(\sigma_{R_{dis},k})^{2}\right)+\left((\mu_{R_{org},k})^{2}+(\mu_{R_{dis},k})^{2}\right)+C_2},$$
where $C_2$ is a control parameter;
④-5b, let k = k + 1, take the next sub-block to be processed in $\{I_{R,org}^{tex}(x,y)\}$ as the current first sub-block and take the next sub-block to be processed in $\{I_{R,dis}^{tex}(x,y)\}$ as the current second sub-block, then return to step ④-2b and continue until all sub-blocks in $\{I_{R,org}^{tex}(x,y)\}$ and $\{I_{R,dis}^{tex}(x,y)\}$ have been processed, obtaining the structural similarity between each sub-block in $\{I_{R,org}^{tex}(x,y)\}$ and the corresponding sub-block in $\{I_{R,dis}^{tex}(x,y)\}$, where "=" in k = k + 1 is the assignment symbol;
④-6b, according to the structural similarity between each sub-block in $\{I_{R,org}^{tex}(x,y)\}$ and the corresponding sub-block in $\{I_{R,dis}^{tex}(x,y)\}$, calculate the image quality objective evaluation prediction value of $\{I_{R,dis}^{tex}(x,y)\}$, denoted $Q_{R}^{tex}$.
Compared with the prior art, the invention has the advantages that:
1) The method of the invention recognizes that distortion causes a loss of image structure or texture information, so it separates the distorted stereo image into a structural image and a texture image and adopts different parameters to fuse the image quality objective evaluation prediction values of the structural images and the texture images of the left and right viewpoint images; this better reflects quality changes in the stereo image and makes the evaluation result more consistent with the human visual system.
2) The method evaluates the structural images with gradient similarity and the texture images with structural similarity, which well characterizes the influence of the loss of structure and texture information on image quality and effectively improves the correlation between the objective evaluation results and subjective perception.
Drawings
FIG. 1 is a block diagram of an overall implementation of the method of the present invention;
FIG. 2 is a scatter plot, obtained by the method of the present invention, of the image quality objective evaluation prediction value against the mean subjective score difference for each distorted stereo image in the Ningbo University stereo image library;
FIG. 3 is a scatter plot, obtained by the method of the present invention, of the image quality objective evaluation prediction value against the mean subjective score difference for each distorted stereo image in the LIVE stereo image library.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The invention provides a three-dimensional image quality objective evaluation method based on structure texture separation, the overall implementation block diagram of which is shown in figure 1, and the processing process of the method is as follows:
firstly, respectively implementing structure texture separation on a left viewpoint image and a right viewpoint image of an original undistorted stereo image and a left viewpoint image and a right viewpoint image of a distorted stereo image to be evaluated to obtain respective structure images and texture images;
secondly, obtaining an objective evaluation prediction value of the image quality of the structural image of the left viewpoint image of the distorted stereo image to be evaluated by calculating the gradient similarity between each pixel point in the structural image of the left viewpoint image of the original undistorted stereo image and the corresponding pixel point in the structural image of the left viewpoint image of the distorted stereo image to be evaluated; similarly, obtaining an objective evaluation prediction value of the image quality of the structural image of the right viewpoint image of the distorted stereo image to be evaluated by calculating the gradient similarity between each pixel point in the structural image of the right viewpoint image of the original undistorted stereo image and the corresponding pixel point in the structural image of the right viewpoint image of the distorted stereo image to be evaluated;
secondly, obtaining an image quality objective evaluation prediction value of the texture image of the left viewpoint image of the distorted stereo image to be evaluated by calculating the structural similarity between each sub-block of size 8 × 8 in the texture image of the left viewpoint image of the original undistorted stereo image and the corresponding sub-block of size 8 × 8 in the texture image of the left viewpoint image of the distorted stereo image to be evaluated; similarly, obtaining an image quality objective evaluation prediction value of the texture image of the right viewpoint image of the distorted stereo image to be evaluated by calculating the structural similarity between each sub-block of size 8 × 8 in the texture image of the right viewpoint image of the original undistorted stereo image and the corresponding sub-block of size 8 × 8 in the texture image of the right viewpoint image of the distorted stereo image to be evaluated;
thirdly, fusing the image quality objective evaluation predicted values of the structural images of the left viewpoint image and the right viewpoint image of the distorted three-dimensional image to be evaluated to obtain the image quality objective evaluation predicted value of the structural image of the distorted three-dimensional image to be evaluated; similarly, fusing the image quality objective evaluation predicted values of the texture images of the left viewpoint image and the right viewpoint image of the distorted three-dimensional image to be evaluated to obtain the image quality objective evaluation predicted value of the texture image of the distorted three-dimensional image to be evaluated;
and finally, fusing the image quality objective evaluation predicted value of the structural image and the texture image of the distorted three-dimensional image to be evaluated to obtain the image quality objective evaluation predicted value of the distorted three-dimensional image to be evaluated.
The invention relates to a three-dimensional image quality objective evaluation method based on structure texture separation, which specifically comprises the following steps:
① Let S_org denote the original undistorted stereo image and S_dis denote the distorted stereo image to be evaluated. Record the left viewpoint image of S_org as {L_org(x, y)}, the right viewpoint image of S_org as {R_org(x, y)}, the left viewpoint image of S_dis as {L_dis(x, y)}, and the right viewpoint image of S_dis as {R_dis(x, y)}, where (x, y) denotes the coordinate position of a pixel point in the left and right viewpoint images, 1 ≤ x ≤ W, 1 ≤ y ≤ H, W denotes the width of the left and right viewpoint images, H denotes their height, L_org(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {L_org(x, y)}, R_org(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {R_org(x, y)}, L_dis(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {L_dis(x, y)}, and R_dis(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {R_dis(x, y)}.
Here, the Ningbo university stereo image library and the LIVE stereo image library are used to analyze the correlation between the image quality objective evaluation prediction value of the distorted stereo image obtained in the present embodiment and the average subjective score difference value. The Ningbo university stereo image library is composed of 12 undistorted stereo images, 60 distorted stereo images under JPEG compression with different distortion degrees, 60 distorted stereo images under JPEG2000 compression, 60 distorted stereo images under Gaussian blur, 60 distorted stereo images under Gaussian white noise, and 72 distorted stereo images under H.264 coding distortion. The LIVE stereo image library is composed of 20 undistorted stereo images, 80 distorted stereo images under JPEG compression with different distortion degrees, 80 distorted stereo images under JPEG2000 compression, 45 distorted stereo images under gaussian blur, 80 distorted stereo images under gaussian white noise, and 80 distorted stereo images under Fast Fading distortion.
② Perform structure-texture separation on {L_org(x, y)}, {R_org(x, y)}, {L_dis(x, y)} and {R_dis(x, y)} respectively to obtain the structural image and texture image of each. Record the structural image and texture image of {L_org(x, y)} as $\{I_{L,org}^{str}(x,y)\}$ and $\{I_{L,org}^{tex}(x,y)\}$, the structural image and texture image of {R_org(x, y)} as $\{I_{R,org}^{str}(x,y)\}$ and $\{I_{R,org}^{tex}(x,y)\}$, the structural image and texture image of {L_dis(x, y)} as $\{I_{L,dis}^{str}(x,y)\}$ and $\{I_{L,dis}^{tex}(x,y)\}$, and the structural image and texture image of {R_dis(x, y)} as $\{I_{R,dis}^{str}(x,y)\}$ and $\{I_{R,dis}^{tex}(x,y)\}$, where $I_{L,org}^{str}(x,y)$, $I_{L,org}^{tex}(x,y)$, $I_{R,org}^{str}(x,y)$, $I_{R,org}^{tex}(x,y)$, $I_{L,dis}^{str}(x,y)$, $I_{L,dis}^{tex}(x,y)$, $I_{R,dis}^{str}(x,y)$ and $I_{R,dis}^{tex}(x,y)$ denote the pixel values of the pixel points with coordinate position (x, y) in the corresponding images.
In this embodiment, the acquisition process of the structural image $\{I_{L,org}^{str}(x,y)\}$ and texture image $\{I_{L,org}^{tex}(x,y)\}$ of {L_org(x, y)} in step ② is as follows:
② 1a, define the current pixel point to be processed in {L_org(x, y)} as the current pixel point.
② 2a, record the coordinate position of the current pixel point in {L_org(x, y)} as p; define each pixel point other than the current pixel point in the 21 × 21 neighborhood window centered on the current pixel point as a neighborhood pixel point; define the block formed by the 9 × 9 neighborhood window centered on the current pixel point as the current sub-block, recorded as $\{I_{L,org}^{p}(x_2,y_2)\}$; define the block formed by the 9 × 9 neighborhood window centered on each neighborhood pixel point as a neighborhood sub-block, and record the neighborhood sub-block formed by the 9 × 9 neighborhood window centered on the neighborhood pixel point with coordinate position q in {L_org(x, y)} as $\{I_{L,org}^{q}(x_3,y_3)\}$; here p ∈ Ω, q ∈ Ω, Ω denotes the set of coordinate positions of all pixel points in {L_org(x, y)}, $(x_2,y_2)$ denotes the coordinate position of a pixel point within the current sub-block $\{I_{L,org}^{p}(x_2,y_2)\}$, $1 \le x_2 \le 9$, $1 \le y_2 \le 9$, $I_{L,org}^{p}(x_2,y_2)$ denotes the pixel value of the pixel point with coordinate position $(x_2,y_2)$ in the current sub-block, $(x_3,y_3)$ denotes the coordinate position of a pixel point within the neighborhood sub-block $\{I_{L,org}^{q}(x_3,y_3)\}$, $1 \le x_3 \le 9$, $1 \le y_3 \le 9$, and $I_{L,org}^{q}(x_3,y_3)$ denotes the pixel value of the pixel point with coordinate position $(x_3,y_3)$ in the neighborhood sub-block.
In step ② 2a, for any pixel point in the current sub-block, suppose its coordinate position in {L_org(x, y)} is (x, y): if x < 1 and 1 ≤ y ≤ H, assign it the pixel value of the pixel point with coordinate position (1, y) in {L_org(x, y)}; if x > W and 1 ≤ y ≤ H, assign it the pixel value of the pixel point with coordinate position (W, y); if 1 ≤ x ≤ W and y < 1, assign it the pixel value of the pixel point with coordinate position (x, 1); if 1 ≤ x ≤ W and y > H, assign it the pixel value of the pixel point with coordinate position (x, H); if x < 1 and y < 1, assign it the pixel value of the pixel point with coordinate position (1, 1); if x > W and y < 1, assign it the pixel value of the pixel point with coordinate position (W, 1); if x < 1 and y > H, assign it the pixel value of the pixel point with coordinate position (1, H); if x > W and y > H, assign it the pixel value of the pixel point with coordinate position (W, H). Similarly, the same operation is performed for any neighborhood pixel point, so that the pixel value of any pixel point beyond the image boundary is replaced by the pixel value of the nearest boundary pixel point.
That is, in step ② 2a, if the coordinate position of a pixel point in the block formed by the 9 × 9 neighborhood window centered on the current pixel point exceeds the boundary of {L_org(x, y)}, the pixel value of that pixel point is replaced by the pixel value of the nearest boundary pixel point; if the coordinate position of a neighborhood pixel point in the 21 × 21 neighborhood window centered on the current pixel point exceeds the boundary of {L_org(x, y)}, the pixel value of that neighborhood pixel point is replaced by the pixel value of the nearest boundary pixel point; and if the coordinate position of a pixel point in the block formed by the 9 × 9 neighborhood window centered on any neighborhood pixel point in the 21 × 21 neighborhood window exceeds the boundary of {L_org(x, y)}, the pixel value of that pixel point is replaced by the pixel value of the nearest boundary pixel point.
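The boundary rule above (out-of-range coordinates take the value of the nearest boundary pixel point) is replicate padding. As an illustration, NumPy's `np.pad` with `mode='edge'` implements exactly this, so neighborhood windows can then be sliced without clamping coordinates:

```python
import numpy as np

# Replicate ("edge") padding reproduces the boundary rule of step 2a:
# any coordinate outside the image takes the nearest boundary pixel value.
img = np.arange(12.0).reshape(3, 4)   # tiny stand-in for {L_org(x, y)}
half = 4                              # half-width of the 9x9 sub-block window
padded = np.pad(img, half, mode='edge')

# 9x9 sub-block centered on pixel (0, 0) of the original image:
block = padded[0:2 * half + 1, 0:2 * half + 1]
```

For the 21 × 21 neighborhood window of step ② 2a, the same call with `half = 10` applies.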
② 3a, obtaining the current sub-block
Figure BDA0000479627040000141
The feature vector of each pixel point in (1) is used for converting the current sub-block into the current sub-block
Figure BDA0000479627040000142
The feature vector of the pixel point with the middle coordinate position (x2, y2) is recorded as
$X_{L,org}^p(x_2,y_2)$:
$X_{L,org}^p(x_2,y_2)=\left[I_{L,org}^p(x_2,y_2),\ \left|\frac{\partial I_{L,org}^p(x_2,y_2)}{\partial x}\right|,\ \left|\frac{\partial I_{L,org}^p(x_2,y_2)}{\partial y}\right|,\ \left|\frac{\partial^2 I_{L,org}^p(x_2,y_2)}{\partial x^2}\right|,\ \left|\frac{\partial^2 I_{L,org}^p(x_2,y_2)}{\partial y^2}\right|,\ x_2,\ y_2\right]$,
wherein $X_{L,org}^p(x_2,y_2)$ has a dimension of 7, the symbol "[ ]" is the vector representation symbol, the symbol "| |" is the absolute value symbol, $I_{L,org}^p(x_2,y_2)$ represents the intensity value of the pixel point with coordinate position $(x_2,y_2)$ in the current sub-block $\{x_{L,org}^p(x_2,y_2)\}$, $\frac{\partial I_{L,org}^p(x_2,y_2)}{\partial x}$ is the first-order partial derivative of $I_{L,org}^p(x_2,y_2)$ in the horizontal direction, $\frac{\partial I_{L,org}^p(x_2,y_2)}{\partial y}$ is the first-order partial derivative of $I_{L,org}^p(x_2,y_2)$ in the vertical direction, $\frac{\partial^2 I_{L,org}^p(x_2,y_2)}{\partial x^2}$ is the second-order partial derivative of $I_{L,org}^p(x_2,y_2)$ in the horizontal direction, and $\frac{\partial^2 I_{L,org}^p(x_2,y_2)}{\partial y^2}$ is the second-order partial derivative of $I_{L,org}^p(x_2,y_2)$ in the vertical direction.
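The 7-dimensional feature vector above can be sketched in NumPy (an illustrative helper, not the patent's code; `np.gradient` is used here as one reasonable choice of discrete partial derivative):

```python
import numpy as np

def feature_vectors(block):
    """Per-pixel 7-D feature vectors of a 9x9 sub-block:
    [intensity, |dI/dx|, |dI/dy|, |d2I/dx2|, |d2I/dy2|, x2, y2]."""
    h, w = block.shape
    dy, dx = np.gradient(block.astype(float))   # first-order partials
    d2y = np.gradient(dy, axis=0)               # second-order, vertical
    d2x = np.gradient(dx, axis=1)               # second-order, horizontal
    ys, xs = np.mgrid[1:h + 1, 1:w + 1]         # 1-based coordinates (x2, y2)
    feats = np.stack([block, np.abs(dx), np.abs(dy),
                      np.abs(d2x), np.abs(d2y), xs, ys], axis=-1)
    return feats.reshape(-1, 7)                 # 81 x 7 for a 9x9 block

# Toy sub-block: a linear ramp, so first derivatives are constant and
# second derivatives vanish.
X = feature_vectors(np.arange(81, dtype=float).reshape(9, 9))
```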
②-4a. According to the feature vector of each pixel point in the current sub-block $\{x_{L,org}^p(x_2,y_2)\}$, calculate the covariance matrix of the current sub-block, recorded as $C_{L,org}^p$:
$C_{L,org}^p=\frac{1}{7\times 7-1}\sum_{x_2=1}^{9}\sum_{y_2=1}^{9}\left(X_{L,org}^p(x_2,y_2)-\mu_{L,org}^p\right)\left(X_{L,org}^p(x_2,y_2)-\mu_{L,org}^p\right)^{T}$,
wherein $C_{L,org}^p$ has a dimension of $7\times 7$, $\mu_{L,org}^p$ represents the mean vector of the feature vectors of all the pixel points in the current sub-block $\{x_{L,org}^p(x_2,y_2)\}$, and $\left(X_{L,org}^p(x_2,y_2)-\mu_{L,org}^p\right)^{T}$ is the transposed vector of $\left(X_{L,org}^p(x_2,y_2)-\mu_{L,org}^p\right)$.
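A minimal NumPy sketch of this covariance computation (illustrative names; note the formula's normaliser is $7\times 7-1$ while the sum runs over the 81 pixels of the 9×9 block):

```python
import numpy as np

def covariance_matrix(X):
    """7x7 covariance of the 81 per-pixel feature vectors (rows of X),
    normalised by 7*7 - 1 as in the formula above."""
    mu = X.mean(axis=0)             # mean feature vector mu^p
    D = X - mu                      # deviations X(x2, y2) - mu^p
    return D.T @ D / (7 * 7 - 1)    # outer-product sum, 7x7

rng = np.random.default_rng(0)
C = covariance_matrix(rng.normal(size=(81, 7)))   # toy feature matrix
```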
②-5a. Carry out Cholesky decomposition on the covariance matrix $C_{L,org}^p$ of the current sub-block $\{x_{L,org}^p(x_2,y_2)\}$, $C_{L,org}^p=LL^{T}$, and obtain the Sigma feature set of the current sub-block, recorded as $S_{L,org}^p$:
$S_{L,org}^p=\left[\sqrt{10}\times L^{(1)},\ldots,\sqrt{10}\times L^{(i')},\ldots,\sqrt{10}\times L^{(7)},\ -\sqrt{10}\times L^{(1)},\ldots,-\sqrt{10}\times L^{(i')},\ldots,-\sqrt{10}\times L^{(7)},\ \mu_{L,org}^p\right]$,
wherein $L^{T}$ is the transposed matrix of $L$, $S_{L,org}^p$ has a dimension of $7\times 15$, the symbol "[ ]" is the vector representation symbol, $1\le i'\le 7$, $L^{(1)}$ represents the 1st column vector of $L$, $L^{(i')}$ represents the $i'$-th column vector of $L$, and $L^{(7)}$ represents the 7th column vector of $L$.
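As a sketch of this step (assuming the covariance matrix is positive definite so the Cholesky factor exists; the identity matrix below is a toy input, not data from the patent):

```python
import numpy as np

def sigma_feature_set(C, mu):
    """Sigma feature set from the Cholesky factor of the covariance:
    C = L @ L.T, then S = [sqrt(10)*L^(1..7), -sqrt(10)*L^(1..7), mu],
    a 7x15 matrix whose columns are the scaled columns of L and the mean."""
    L = np.linalg.cholesky(C)                    # lower triangular factor
    cols = np.sqrt(10.0) * L                     # scaled columns L^(1)..L^(7)
    return np.concatenate([cols, -cols, mu.reshape(-1, 1)], axis=1)

S = sigma_feature_set(np.eye(7), np.zeros(7))    # toy C and mu for illustration
```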
②-6a. Adopting the same operations as steps ②-3a to ②-5a, obtain the Sigma feature set of each neighbourhood sub-block formed by a 9×9 neighbourhood window centred on each neighbourhood pixel point, and record the Sigma feature set of the neighbourhood sub-block $\{x_{L,org}^q(x_3,y_3)\}$ as $S_{L,org}^q$; $S_{L,org}^q$ has a dimension of $7\times 15$.
②-7a. According to the Sigma feature set $S_{L,org}^p$ of the current sub-block and the Sigma feature set of each neighbourhood sub-block formed by a 9×9 neighbourhood window centred on each neighbourhood pixel point, acquire the structural information of the current pixel point, recorded as $I_{L,org}^{str}(p)$:
$I_{L,org}^{str}(p)=\frac{\sum_{q\in N'(p)}\exp\left(-\frac{\left\|S_{L,org}^p-S_{L,org}^q\right\|^2}{(2\sigma)^2}\right)\times L_{org}(q)}{\sum_{q\in N'(p)}\exp\left(-\frac{\left\|S_{L,org}^p-S_{L,org}^q\right\|^2}{(2\sigma)^2}\right)}$,
wherein $N'(p)$ represents the set of the coordinate positions, in $\{L_{org}(x,y)\}$, of all neighbourhood pixel points in the 21×21 neighbourhood window centred on the current pixel point in $\{L_{org}(x,y)\}$, $\exp(\,)$ represents the exponential function with $e$ as the base, $e=2.71828183$, $\sigma$ represents the standard deviation of the Gaussian function (in this embodiment, $\sigma=0.06$), the symbol "$\|\ \|$" is the Euclidean distance calculation symbol, and $L_{org}(q)$ represents the pixel value of the pixel point with coordinate position $q$ in $\{L_{org}(x,y)\}$.
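This weighted average can be sketched as follows (an illustrative helper under the assumption that the Sigma sets are given as 7×15 arrays; `structure_value` is not a name from the patent):

```python
import numpy as np

def structure_value(S_p, sigma_sets, pixel_values, sigma=0.06):
    """Structural information of the current pixel: a weighted average of
    neighbourhood pixel values, the weight of each neighbour decaying with
    the Euclidean distance between Sigma feature sets (sigma = 0.06)."""
    d2 = np.array([np.sum((S_p - S_q) ** 2) for S_q in sigma_sets])
    w = np.exp(-d2 / (2 * sigma) ** 2)           # Gaussian similarity weights
    return float(np.sum(w * np.asarray(pixel_values)) / np.sum(w))

# Two neighbours whose Sigma sets equal the centre's: equal weights, so the
# structure value is the plain mean of their pixel values.
v = structure_value(np.zeros((7, 15)), [np.zeros((7, 15))] * 2, [10.0, 20.0])
```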
②-8a. According to the structural information $I_{L,org}^{str}(p)$ of the current pixel point, acquire the texture information of the current pixel point, recorded as $I_{L,org}^{tex}(p)$: $I_{L,org}^{tex}(p)=L_{org}(p)-I_{L,org}^{str}(p)$, wherein $L_{org}(p)$ represents the pixel value of the current pixel point.
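The separation is complementary by construction: structure plus texture reconstructs the original pixel value. A toy NumPy check (all values below are made up for illustration):

```python
import numpy as np

L_org = np.array([[52.0, 60.0], [55.0, 58.0]])   # toy 2x2 original pixels
I_str = np.array([[54.0, 58.0], [55.5, 57.0]])   # assumed structure values
I_tex = L_org - I_str                             # texture = pixel - structure
# I_str + I_tex gives back L_org exactly, pixel for pixel.
```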
②-9a. Take the next pixel point to be processed in $\{L_{org}(x,y)\}$ as the current pixel point, and then return to step ②-2a to continue executing until all pixel points in $\{L_{org}(x,y)\}$ are processed, obtaining the structural information and the texture information of each pixel point in $\{L_{org}(x,y)\}$; the structural information of all the pixel points in $\{L_{org}(x,y)\}$ constitutes the structural image of $\{L_{org}(x,y)\}$, recorded as $\{I_{L,org}^{str}(x,y)\}$, and the texture information of all the pixel points in $\{L_{org}(x,y)\}$ constitutes the texture image of $\{L_{org}(x,y)\}$, recorded as $\{I_{L,org}^{tex}(x,y)\}$.
Adopting the same operations as steps ②-1a to ②-9a used to acquire the structural image $\{I_{L,org}^{str}(x,y)\}$ and the texture image $\{I_{L,org}^{tex}(x,y)\}$ of $\{L_{org}(x,y)\}$, obtain the structural image $\{I_{R,org}^{str}(x,y)\}$ and the texture image $\{I_{R,org}^{tex}(x,y)\}$ of $\{R_{org}(x,y)\}$, the structural image $\{I_{L,dis}^{str}(x,y)\}$ and the texture image $\{I_{L,dis}^{tex}(x,y)\}$ of $\{L_{dis}(x,y)\}$, and the structural image $\{I_{R,dis}^{str}(x,y)\}$ and the texture image $\{I_{R,dis}^{tex}(x,y)\}$ of $\{R_{dis}(x,y)\}$. Namely:
In step ②, the acquisition process of the structural image $\{I_{R,org}^{str}(x,y)\}$ and the texture image $\{I_{R,org}^{tex}(x,y)\}$ of $\{R_{org}(x,y)\}$ comprises the following steps:
②-1b. Define the current pixel point to be processed in $\{R_{org}(x,y)\}$ as the current pixel point.
②-2b. Record the coordinate position of the current pixel point in $\{R_{org}(x,y)\}$ as $p$; define each pixel point except the current pixel point in the 21×21 neighbourhood window centred on the current pixel point as a neighbourhood pixel point; define the block formed by the 9×9 neighbourhood window centred on the current pixel point as the current sub-block, recorded as $\{x_{R,org}^p(x_2,y_2)\}$; define each block formed by a 9×9 neighbourhood window centred on each neighbourhood pixel point in the 21×21 neighbourhood window centred on the current pixel point as a neighbourhood sub-block, and record the neighbourhood sub-block formed by the 9×9 neighbourhood window centred on the neighbourhood pixel point with coordinate position $q$ in $\{R_{org}(x,y)\}$ as $\{x_{R,org}^q(x_3,y_3)\}$; wherein $p\in\Omega$, $q\in\Omega$, $\Omega$ denotes the set of the coordinate positions of all pixel points in $\{R_{org}(x,y)\}$, $(x_2,y_2)$ represents the coordinate position of a pixel point in the current sub-block $\{x_{R,org}^p(x_2,y_2)\}$, $1\le x_2\le 9$, $1\le y_2\le 9$, $x_{R,org}^p(x_2,y_2)$ represents the pixel value of the pixel point with coordinate position $(x_2,y_2)$ in the current sub-block, $(x_3,y_3)$ represents the coordinate position of a pixel point in the neighbourhood sub-block $\{x_{R,org}^q(x_3,y_3)\}$, $1\le x_3\le 9$, $1\le y_3\le 9$, and $x_{R,org}^q(x_3,y_3)$ represents the pixel value of the pixel point with coordinate position $(x_3,y_3)$ in the neighbourhood sub-block.
In step ②-2b, if the coordinate position of a certain pixel point in the block formed by the 9×9 neighbourhood window centred on the current pixel point exceeds the boundary of $\{R_{org}(x,y)\}$, the pixel value of that pixel point is replaced by the pixel value of the nearest boundary pixel point; if the coordinate position of a neighbourhood pixel point in the 21×21 neighbourhood window centred on the current pixel point exceeds the boundary of $\{R_{org}(x,y)\}$, the pixel value of that neighbourhood pixel point is replaced by the pixel value of the nearest boundary pixel point; and if the coordinate position of a pixel point in a block formed by the 9×9 neighbourhood window centred on any neighbourhood pixel point in the 21×21 neighbourhood window centred on the current pixel point exceeds the boundary of $\{R_{org}(x,y)\}$, the pixel value of that pixel point is replaced by the pixel value of the nearest boundary pixel point.
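This boundary rule (repeat the nearest boundary pixel whenever a window extends past the image) corresponds to edge-replication padding; a minimal NumPy sketch with a toy 4×4 image:

```python
import numpy as np

img = np.arange(16.0).reshape(4, 4)      # toy 4x4 image
padded = np.pad(img, 10, mode='edge')    # replicate nearest boundary pixels
# A 21x21 window centred on the corner pixel (0, 0) of img now reads
# replicated edge values instead of falling outside the image.
window = padded[0:21, 0:21]
```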
②-3b. Obtain the feature vector of each pixel point in the current sub-block $\{x_{R,org}^p(x_2,y_2)\}$, and record the feature vector of the pixel point with coordinate position $(x_2,y_2)$ in the current sub-block as $X_{R,org}^p(x_2,y_2)$:
$X_{R,org}^p(x_2,y_2)=\left[I_{R,org}^p(x_2,y_2),\ \left|\frac{\partial I_{R,org}^p(x_2,y_2)}{\partial x}\right|,\ \left|\frac{\partial I_{R,org}^p(x_2,y_2)}{\partial y}\right|,\ \left|\frac{\partial^2 I_{R,org}^p(x_2,y_2)}{\partial x^2}\right|,\ \left|\frac{\partial^2 I_{R,org}^p(x_2,y_2)}{\partial y^2}\right|,\ x_2,\ y_2\right]$,
wherein $X_{R,org}^p(x_2,y_2)$ has a dimension of 7, the symbol "[ ]" is the vector representation symbol, the symbol "| |" is the absolute value symbol, $I_{R,org}^p(x_2,y_2)$ represents the intensity value of the pixel point with coordinate position $(x_2,y_2)$ in the current sub-block $\{x_{R,org}^p(x_2,y_2)\}$, $\frac{\partial I_{R,org}^p(x_2,y_2)}{\partial x}$ is the first-order partial derivative of $I_{R,org}^p(x_2,y_2)$ in the horizontal direction, $\frac{\partial I_{R,org}^p(x_2,y_2)}{\partial y}$ is the first-order partial derivative of $I_{R,org}^p(x_2,y_2)$ in the vertical direction, $\frac{\partial^2 I_{R,org}^p(x_2,y_2)}{\partial x^2}$ is the second-order partial derivative of $I_{R,org}^p(x_2,y_2)$ in the horizontal direction, and $\frac{\partial^2 I_{R,org}^p(x_2,y_2)}{\partial y^2}$ is the second-order partial derivative of $I_{R,org}^p(x_2,y_2)$ in the vertical direction.
②-4b. According to the feature vector of each pixel point in the current sub-block $\{x_{R,org}^p(x_2,y_2)\}$, calculate the covariance matrix of the current sub-block, recorded as $C_{R,org}^p$:
$C_{R,org}^p=\frac{1}{7\times 7-1}\sum_{x_2=1}^{9}\sum_{y_2=1}^{9}\left(X_{R,org}^p(x_2,y_2)-\mu_{R,org}^p\right)\left(X_{R,org}^p(x_2,y_2)-\mu_{R,org}^p\right)^{T}$,
wherein $C_{R,org}^p$ has a dimension of $7\times 7$, $\mu_{R,org}^p$ represents the mean vector of the feature vectors of all the pixel points in the current sub-block $\{x_{R,org}^p(x_2,y_2)\}$, and $\left(X_{R,org}^p(x_2,y_2)-\mu_{R,org}^p\right)^{T}$ is the transposed vector of $\left(X_{R,org}^p(x_2,y_2)-\mu_{R,org}^p\right)$.
②-5b. Carry out Cholesky decomposition on the covariance matrix $C_{R,org}^p$ of the current sub-block $\{x_{R,org}^p(x_2,y_2)\}$, $C_{R,org}^p=LL^{T}$, and obtain the Sigma feature set of the current sub-block, recorded as $S_{R,org}^p$:
$S_{R,org}^p=\left[\sqrt{10}\times L^{(1)},\ldots,\sqrt{10}\times L^{(i')},\ldots,\sqrt{10}\times L^{(7)},\ -\sqrt{10}\times L^{(1)},\ldots,-\sqrt{10}\times L^{(i')},\ldots,-\sqrt{10}\times L^{(7)},\ \mu_{R,org}^p\right]$,
wherein $L^{T}$ is the transposed matrix of $L$, $S_{R,org}^p$ has a dimension of $7\times 15$, the symbol "[ ]" is the vector representation symbol, $1\le i'\le 7$, $L^{(1)}$ represents the 1st column vector of $L$, $L^{(i')}$ represents the $i'$-th column vector of $L$, and $L^{(7)}$ represents the 7th column vector of $L$.
②-6b. Adopting the same operations as steps ②-3b to ②-5b, obtain the Sigma feature set of each neighbourhood sub-block formed by a 9×9 neighbourhood window centred on each neighbourhood pixel point, and record the Sigma feature set of the neighbourhood sub-block $\{x_{R,org}^q(x_3,y_3)\}$ as $S_{R,org}^q$; $S_{R,org}^q$ has a dimension of $7\times 15$.
②-7b. According to the Sigma feature set $S_{R,org}^p$ of the current sub-block and the Sigma feature set of each neighbourhood sub-block formed by a 9×9 neighbourhood window centred on each neighbourhood pixel point, acquire the structural information of the current pixel point, recorded as $I_{R,org}^{str}(p)$:
$I_{R,org}^{str}(p)=\frac{\sum_{q\in N'(p)}\exp\left(-\frac{\left\|S_{R,org}^p-S_{R,org}^q\right\|^2}{(2\sigma)^2}\right)\times R_{org}(q)}{\sum_{q\in N'(p)}\exp\left(-\frac{\left\|S_{R,org}^p-S_{R,org}^q\right\|^2}{(2\sigma)^2}\right)}$,
wherein $N'(p)$ represents the set of the coordinate positions, in $\{R_{org}(x,y)\}$, of all neighbourhood pixel points in the 21×21 neighbourhood window centred on the current pixel point in $\{R_{org}(x,y)\}$, $\exp(\,)$ represents the exponential function with $e$ as the base, $e=2.71828183$, $\sigma$ represents the standard deviation of the Gaussian function (in this embodiment, $\sigma=0.06$), the symbol "$\|\ \|$" is the Euclidean distance calculation symbol, and $R_{org}(q)$ represents the pixel value of the pixel point with coordinate position $q$ in $\{R_{org}(x,y)\}$.
②-8b. According to the structural information $I_{R,org}^{str}(p)$ of the current pixel point, acquire the texture information of the current pixel point, recorded as $I_{R,org}^{tex}(p)$: $I_{R,org}^{tex}(p)=R_{org}(p)-I_{R,org}^{str}(p)$, wherein $R_{org}(p)$ represents the pixel value of the current pixel point.
②-9b. Take the next pixel point to be processed in $\{R_{org}(x,y)\}$ as the current pixel point, and then return to step ②-2b to continue executing until all pixel points in $\{R_{org}(x,y)\}$ are processed, obtaining the structural information and the texture information of each pixel point in $\{R_{org}(x,y)\}$; the structural information of all the pixel points in $\{R_{org}(x,y)\}$ constitutes the structural image of $\{R_{org}(x,y)\}$, recorded as $\{I_{R,org}^{str}(x,y)\}$, and the texture information of all the pixel points in $\{R_{org}(x,y)\}$ constitutes the texture image of $\{R_{org}(x,y)\}$, recorded as $\{I_{R,org}^{tex}(x,y)\}$.
In step ②, the acquisition process of the structural image $\{I_{L,dis}^{str}(x,y)\}$ and the texture image $\{I_{L,dis}^{tex}(x,y)\}$ of $\{L_{dis}(x,y)\}$ comprises the following steps:
②-1c. Define the current pixel point to be processed in $\{L_{dis}(x,y)\}$ as the current pixel point.
②-2c. Record the coordinate position of the current pixel point in $\{L_{dis}(x,y)\}$ as $p$; define each pixel point except the current pixel point in the 21×21 neighbourhood window centred on the current pixel point as a neighbourhood pixel point; define the block formed by the 9×9 neighbourhood window centred on the current pixel point as the current sub-block, recorded as $\{x_{L,dis}^p(x_2,y_2)\}$; define each block formed by a 9×9 neighbourhood window centred on each neighbourhood pixel point in the 21×21 neighbourhood window centred on the current pixel point as a neighbourhood sub-block, and record the neighbourhood sub-block formed by the 9×9 neighbourhood window centred on the neighbourhood pixel point with coordinate position $q$ in $\{L_{dis}(x,y)\}$ as $\{x_{L,dis}^q(x_3,y_3)\}$; wherein $p\in\Omega$, $q\in\Omega$, $\Omega$ denotes the set of the coordinate positions of all pixel points in $\{L_{dis}(x,y)\}$, $(x_2,y_2)$ represents the coordinate position of a pixel point in the current sub-block $\{x_{L,dis}^p(x_2,y_2)\}$, $1\le x_2\le 9$, $1\le y_2\le 9$, $x_{L,dis}^p(x_2,y_2)$ represents the pixel value of the pixel point with coordinate position $(x_2,y_2)$ in the current sub-block, $(x_3,y_3)$ represents the coordinate position of a pixel point in the neighbourhood sub-block $\{x_{L,dis}^q(x_3,y_3)\}$, $1\le x_3\le 9$, $1\le y_3\le 9$, and $x_{L,dis}^q(x_3,y_3)$ represents the pixel value of the pixel point with coordinate position $(x_3,y_3)$ in the neighbourhood sub-block.
In step ②-2c, if the coordinate position of a certain pixel point in the block formed by the 9×9 neighbourhood window centred on the current pixel point exceeds the boundary of $\{L_{dis}(x,y)\}$, the pixel value of that pixel point is replaced by the pixel value of the nearest boundary pixel point; if the coordinate position of a neighbourhood pixel point in the 21×21 neighbourhood window centred on the current pixel point exceeds the boundary of $\{L_{dis}(x,y)\}$, the pixel value of that neighbourhood pixel point is replaced by the pixel value of the nearest boundary pixel point; and if the coordinate position of a pixel point in a block formed by the 9×9 neighbourhood window centred on any neighbourhood pixel point in the 21×21 neighbourhood window centred on the current pixel point exceeds the boundary of $\{L_{dis}(x,y)\}$, the pixel value of that pixel point is replaced by the pixel value of the nearest boundary pixel point.
②-3c. Obtain the feature vector of each pixel point in the current sub-block $\{x_{L,dis}^p(x_2,y_2)\}$, and record the feature vector of the pixel point with coordinate position $(x_2,y_2)$ in the current sub-block as $X_{L,dis}^p(x_2,y_2)$:
$X_{L,dis}^p(x_2,y_2)=\left[I_{L,dis}^p(x_2,y_2),\ \left|\frac{\partial I_{L,dis}^p(x_2,y_2)}{\partial x}\right|,\ \left|\frac{\partial I_{L,dis}^p(x_2,y_2)}{\partial y}\right|,\ \left|\frac{\partial^2 I_{L,dis}^p(x_2,y_2)}{\partial x^2}\right|,\ \left|\frac{\partial^2 I_{L,dis}^p(x_2,y_2)}{\partial y^2}\right|,\ x_2,\ y_2\right]$,
wherein $X_{L,dis}^p(x_2,y_2)$ has a dimension of 7, the symbol "[ ]" is the vector representation symbol, the symbol "| |" is the absolute value symbol, $I_{L,dis}^p(x_2,y_2)$ represents the intensity value of the pixel point with coordinate position $(x_2,y_2)$ in the current sub-block $\{x_{L,dis}^p(x_2,y_2)\}$, $\frac{\partial I_{L,dis}^p(x_2,y_2)}{\partial x}$ is the first-order partial derivative of $I_{L,dis}^p(x_2,y_2)$ in the horizontal direction, $\frac{\partial I_{L,dis}^p(x_2,y_2)}{\partial y}$ is the first-order partial derivative of $I_{L,dis}^p(x_2,y_2)$ in the vertical direction, $\frac{\partial^2 I_{L,dis}^p(x_2,y_2)}{\partial x^2}$ is the second-order partial derivative of $I_{L,dis}^p(x_2,y_2)$ in the horizontal direction, and $\frac{\partial^2 I_{L,dis}^p(x_2,y_2)}{\partial y^2}$ is the second-order partial derivative of $I_{L,dis}^p(x_2,y_2)$ in the vertical direction.
②-4c. According to the feature vector of each pixel point in the current sub-block $\{x_{L,dis}^p(x_2,y_2)\}$, calculate the covariance matrix of the current sub-block, recorded as $C_{L,dis}^p$:
$C_{L,dis}^p=\frac{1}{7\times 7-1}\sum_{x_2=1}^{9}\sum_{y_2=1}^{9}\left(X_{L,dis}^p(x_2,y_2)-\mu_{L,dis}^p\right)\left(X_{L,dis}^p(x_2,y_2)-\mu_{L,dis}^p\right)^{T}$,
wherein $C_{L,dis}^p$ has a dimension of $7\times 7$, $\mu_{L,dis}^p$ represents the mean vector of the feature vectors of all the pixel points in the current sub-block $\{x_{L,dis}^p(x_2,y_2)\}$, and $\left(X_{L,dis}^p(x_2,y_2)-\mu_{L,dis}^p\right)^{T}$ is the transposed vector of $\left(X_{L,dis}^p(x_2,y_2)-\mu_{L,dis}^p\right)$.
② 5c. Perform Cholesky decomposition on the covariance matrix C^p_{L,dis} of the current sub-block, C^p_{L,dis} = L × L^T, and obtain the Sigma feature set of the current sub-block, denoted S^p_{L,dis}:
<math> <mrow> <msubsup> <mi>S</mi> <mrow> <mi>L</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mo>=</mo> <mo>[</mo> <msqrt> <mn>10</mn> </msqrt> <mo>&times;</mo> <msup> <mi>L</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </msup> <mo>,</mo> <mo>.</mo> <mo>.</mo> <mo>.</mo> <mo>,</mo> <msqrt> <mn>10</mn> </msqrt> <mo>&times;</mo> <msup> <mi>L</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>&prime;</mo> <mo>)</mo> </mrow> </msup> <mo>,</mo> <mo>.</mo> <mo>.</mo> <mo>.</mo> <mo>,</mo> <msqrt> <mn>10</mn> </msqrt> <mo>&times;</mo> <msup> <mi>L</mi> <mrow> <mo>(</mo> <mn>7</mn> <mo>)</mo> </mrow> </msup> <mo>,</mo> <mo>-</mo> <msqrt> <mn>10</mn> </msqrt> <mo>&times;</mo> <msup> <mi>L</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </msup> <mo>,</mo> <mo>.</mo> <mo>.</mo> <mo>.</mo> <mo>,</mo> <mo>-</mo> <msqrt> <mn>10</mn> </msqrt> <mo>&times;</mo> <msup> <mi>L</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>&prime;</mo> <mo>)</mo> </mrow> </msup> <mo>,</mo> <mo>.</mo> <mo>.</mo> <mo>.</mo> <mo>,</mo> <mo>-</mo> <msqrt> <mn>10</mn> </msqrt> <mo>&times;</mo> <msup> <mi>L</mi> <mrow> <mo>(</mo> <mn>7</mn> <mo>)</mo> </mrow> </msup> <mo>,</mo> <msubsup> <mi>&mu;</mi> <mrow> <mi>L</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mo>]</mo> <mo>,</mo> </mrow> </math>
where L^T is the transposed matrix of L, S^p_{L,dis} has a dimension of 7×15, the symbol "[ ]" is the vector-representation symbol, 1≤i′≤7, L^(1) represents the 1st column vector of L, L^(i′) represents the i′-th column vector of L, and L^(7) represents the 7th column vector of L.
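The Cholesky-based Sigma feature set can be sketched as follows (an illustrative sketch with made-up array names; `np.linalg.cholesky` returns the lower-triangular factor L with C = L·L^T):

```python
import numpy as np

def sigma_feature_set(C, mu):
    """Sigma feature set of a sub-block (step (2)-5c style).

    C:  (7, 7) covariance matrix, factored as C = L @ L.T by Cholesky
        decomposition (np.linalg.cholesky returns lower-triangular L).
    mu: (7,) mean feature vector of the sub-block.
    Returns the 7x15 set [sqrt(10)*L^(1..7), -sqrt(10)*L^(1..7), mu].
    """
    L = np.linalg.cholesky(C)
    cols = np.sqrt(10.0) * L               # sqrt(10) times each column of L
    return np.hstack([cols, -cols, mu.reshape(7, 1)])

rng = np.random.default_rng(1)
A = rng.normal(size=(81, 7))               # stand-in feature vectors
C = A.T @ A / (7 * 7 - 1) + 1e-6 * np.eye(7)   # positive-definite example
S = sigma_feature_set(C, A.mean(axis=0))
print(S.shape)  # (7, 15)
```

The first seven columns satisfy (√10·L)(√10·L)^T = 10·C, which is the usual Sigma-point property.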
② 6c. Using the same operations as steps ② 3c to ② 5c, obtain the Sigma feature set of each neighborhood sub-block formed by a 9×9 neighborhood window centered on a neighborhood pixel point; the Sigma feature set S^q_{L,dis} of the neighborhood sub-block {I^q_{L,dis}(x_3,y_3)} likewise has a dimension of 7×15.
② 7c. According to the Sigma feature set S^p_{L,dis} of the current sub-block and the Sigma feature sets of the neighborhood sub-blocks formed by the 9×9 neighborhood windows centered on each neighborhood pixel point, acquire the structural information of the current pixel point, denoted I^str_{L,dis}(p): <math> <mrow> <msubsup> <mi>I</mi> <mrow> <mi>L</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>str</mi> </msubsup> <mrow> <mo>(</mo> <mi>p</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mrow> <munder> <mi>&Sigma;</mi> <mrow> <mi>q</mi> <mo>&Element;</mo> <mi>N</mi> <mo>&prime;</mo> <mrow> <mo>(</mo> <mi>p</mi> <mo>)</mo> </mrow> </mrow> </munder> <mi>exp</mi> <mrow> <mo>(</mo> <mo>-</mo> <mfrac> <msup> <mrow> <mo>|</mo> <mo>|</mo> <msubsup> <mi>S</mi> <mrow> <mi>L</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mo>-</mo> <msubsup> <mi>S</mi> <mrow> <mi>L</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>q</mi> </msubsup> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> </msup> <msup> <mrow> <mn>2</mn> <mi>&sigma;</mi> </mrow> <mn>2</mn> </msup> </mfrac> <mo>)</mo> </mrow> <mo>&times;</mo> <msub> <mi>L</mi> <mi>dis</mi> </msub> <mrow> <mo>(</mo> <mi>q</mi> <mo>)</mo> </mrow> </mrow> <mrow> <munder> <mi>&Sigma;</mi> <mrow> <mi>q</mi> <mo>&Element;</mo> <mi>N</mi> <mo>&prime;</mo> <mrow> <mo>(</mo> <mi>p</mi> <mo>)</mo> </mrow> </mrow> </munder> <mi>exp</mi> <mrow> <mo>(</mo> <mo>-</mo> <mfrac> <msup> <mrow> <mo>|</mo> <mo>|</mo> <msubsup> <mi>S</mi> <mrow> <mi>L</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mo>-</mo> <msubsup> <mi>S</mi> <mrow> <mi>L</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>q</mi> </msubsup> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> </msup> <msup> <mrow> <mn>2</mn> <mi>&sigma;</mi> </mrow> <mn>2</mn> </msup> </mfrac> <mo>)</mo> </mrow> </mrow> </mfrac> <mo>,</mo> </mrow> </math> where N′(p) represents the set of coordinate positions in {L_dis(x,y)} of all neighborhood pixel points in the 21×21 neighborhood window centered on the current pixel point, exp() represents the exponential function with base e, e = 2.71828183, σ represents the standard deviation of the Gaussian function (σ = 0.06 in this embodiment), the symbol "|| ||" is the Euclidean-distance computation symbol, and L_dis(q) represents the pixel value of the pixel point in {L_dis(x,y)} whose coordinate position is q.
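The structure filter above is a weighted average over the 21×21 neighborhood, with weights given by a Gaussian of the Euclidean distance between Sigma feature sets. A minimal sketch for a single pixel, assuming the Sigma sets and neighborhood pixel values have already been collected (names are illustrative; the denominator (2σ)² follows the formula as written):

```python
import numpy as np

def structure_value(S_p, S_neighbors, pixel_values, sigma=0.06):
    """Structural information of one pixel (step (2)-7c style).

    S_p:          (7, 15) Sigma feature set of the current sub-block.
    S_neighbors:  Sigma feature sets S_q of the neighborhood sub-blocks.
    pixel_values: pixel value of each neighborhood pixel q, same order.
    Each neighbor is weighted by exp(-||S_p - S_q||^2 / (2*sigma)^2);
    the result is the weighted average of the neighborhood pixel values.
    """
    w = np.array([np.exp(-np.sum((S_p - S_q) ** 2) / (2 * sigma) ** 2)
                  for S_q in S_neighbors])
    return float(np.sum(w * np.asarray(pixel_values, dtype=float)) / np.sum(w))

# With identical Sigma sets all weights are equal, so the structural
# value reduces to the plain average of the neighborhood pixel values.
S_p = np.ones((7, 15))
S_qs = [np.ones((7, 15)) for _ in range(8)]
print(structure_value(S_p, S_qs, [1, 2, 3, 4, 5, 6, 7, 8]))  # 4.5
```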
② 8c. According to the structural information I^str_{L,dis}(p) of the current pixel point, obtain the texture information of the current pixel point, denoted I^tex_{L,dis}(p): I^tex_{L,dis}(p) = L_dis(p) − I^str_{L,dis}(p), where L_dis(p) represents the pixel value of the current pixel point.
② 9c. Take the next pixel point to be processed in {L_dis(x,y)} as the current pixel point, then return to step ② 2c and continue until all pixel points in {L_dis(x,y)} have been processed, obtaining the structural information and the texture information of each pixel point in {L_dis(x,y)}. The structural information of all pixel points in {L_dis(x,y)} constitutes the structural image of {L_dis(x,y)}, denoted {I^str_{L,dis}(x,y)}; the texture information of all pixel points in {L_dis(x,y)} constitutes the texture image of {L_dis(x,y)}, denoted {I^tex_{L,dis}(x,y)}.
Second, the acquisition process of the structural image {I^str_{R,dis}(x,y)} and the texture image {I^tex_{R,dis}(x,y)} of {R_dis(x,y)} comprises the following steps:
② 1d. Define the pixel point currently to be processed in {R_dis(x,y)} as the current pixel point.
② 2d. Denote the coordinate position of the current pixel point in {R_dis(x,y)} as p. Define each pixel point other than the current pixel point in the 21×21 neighborhood window centered on the current pixel point as a neighborhood pixel point; define the block formed by the 9×9 neighborhood window centered on the current pixel point as the current sub-block, denoted {I^p_{R,dis}(x_2,y_2)}; and define each block formed by a 9×9 neighborhood window centered on a neighborhood pixel point in the 21×21 neighborhood window as a neighborhood sub-block, denoting the neighborhood sub-block centered on the neighborhood pixel point whose coordinate position in {R_dis(x,y)} is q as {I^q_{R,dis}(x_3,y_3)}. Here p∈Ω and q∈Ω, where Ω denotes the set of coordinate positions of all pixel points in {R_dis(x,y)}; (x_2,y_2) represents the coordinate position of a pixel point in the current sub-block, 1≤x_2≤9, 1≤y_2≤9, and I^p_{R,dis}(x_2,y_2) represents the pixel value of the pixel point whose coordinate position in the current sub-block is (x_2,y_2); (x_3,y_3) represents the coordinate position of a pixel point in the neighborhood sub-block, 1≤x_3≤9, 1≤y_3≤9, and I^q_{R,dis}(x_3,y_3) represents the pixel value of the pixel point whose coordinate position in the neighborhood sub-block is (x_3,y_3).
In step ② 2d, if the coordinate position of a pixel point in the block formed by the 9×9 neighborhood window centered on the current pixel point falls outside the boundary of {R_dis(x,y)}, its pixel value is replaced by the pixel value of the nearest boundary pixel point; if the coordinate position of a neighborhood pixel point in the 21×21 neighborhood window centered on the current pixel point falls outside the boundary of {R_dis(x,y)}, its pixel value is replaced by the pixel value of the nearest boundary pixel point; and if the coordinate position of a pixel point in a block formed by a 9×9 neighborhood window centered on any neighborhood pixel point in that 21×21 window falls outside the boundary of {R_dis(x,y)}, its pixel value is replaced by the pixel value of the nearest boundary pixel point.
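The nearest-boundary replacement described above is edge replication, which NumPy provides directly; a minimal sketch (the margin of 14 covers the 21×21 window, ±10, plus the 9×9 sub-blocks around its outermost neighborhood pixels, ±4 more):

```python
import numpy as np

# Pixels addressed outside the image take the value of the nearest
# boundary pixel -- numpy's "edge" padding.
img = np.arange(16.0).reshape(4, 4)      # tiny stand-in image
padded = np.pad(img, 14, mode="edge")    # margin 14 = 10 + 4
print(padded.shape)  # (32, 32)
```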
② 3d. Obtain the feature vector of each pixel point in the current sub-block {I^p_{R,dis}(x_2,y_2)}, and denote the feature vector of the pixel point whose coordinate position in the current sub-block is (x_2,y_2) as X^p_{R,dis}(x_2,y_2):
<math> <mrow> <msubsup> <mi>X</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> <mo>=</mo> <mo>[</mo> <msubsup> <mi>I</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> <mo>,</mo> <mo>|</mo> <mfrac> <mrow> <msubsup> <mrow> <mo>&PartialD;</mo> <mi>I</mi> </mrow> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> </mrow> <mrow> <mo>&PartialD;</mo> <mi>x</mi> </mrow> </mfrac> <mo>|</mo> <mo>,</mo> <mo>|</mo> <mfrac> <mrow> <msubsup> <mrow> <mo>&PartialD;</mo> <mi>I</mi> </mrow> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> </mrow> <mrow> <mo>&PartialD;</mo> <mi>y</mi> </mrow> </mfrac> <mo>|</mo> <mo>,</mo> <mo>|</mo> <mfrac> <mrow> <msup> <mo>&PartialD;</mo> <mn>2</mn> </msup> <msubsup> <mi>I</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> </mrow> <msup> <mrow> <mo>&PartialD;</mo> <mi>x</mi> </mrow> <mn>2</mn> </msup> </mfrac> <mo>|</mo> <mo>,</mo> <mo>|</mo> <mfrac> <mrow> <msup> <mo>&PartialD;</mo> <mn>2</mn> </msup> <msubsup> <mi>I</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> </mrow> <msup> <mrow> <mo>&PartialD;</mo> <mi>y</mi> </mrow> <mn>2</mn> </msup> </mfrac> <mo>|</mo> <mo>,</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>2</mn> </msub> <mo>]</mo> </mrow> </math>
where X^p_{R,dis}(x_2,y_2) has a dimension of 7, the symbol "[ ]" is the vector-representation symbol, the symbol "| |" is the absolute-value symbol, I^p_{R,dis}(x_2,y_2) represents the pixel value of the pixel point whose coordinate position in the current sub-block is (x_2,y_2), ∂I^p_{R,dis}(x_2,y_2)/∂x is the first partial derivative of I^p_{R,dis}(x_2,y_2) in the horizontal direction, ∂I^p_{R,dis}(x_2,y_2)/∂y is the first partial derivative in the vertical direction, ∂²I^p_{R,dis}(x_2,y_2)/∂x² is the second partial derivative in the horizontal direction, and ∂²I^p_{R,dis}(x_2,y_2)/∂y² is the second partial derivative in the vertical direction.
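The 7-dimensional feature vector can be sketched as follows; the derivative operator is an assumption (`np.gradient`, a central finite difference, stands in for whatever operator the method uses), and the array names are illustrative:

```python
import numpy as np

def feature_vectors(block):
    """7-dim feature vectors of a 9x9 sub-block (step (2)-3d style).

    block: (9, 9) array of pixel values.  Returns an (81, 7) array of
    [value, |dI/dx|, |dI/dy|, |d2I/dx2|, |d2I/dy2|, x2, y2] per pixel,
    with (x2, y2) ranging over 1..9.
    """
    gy, gx = np.gradient(block)        # first derivatives (axis 0 = rows = y)
    gyy = np.gradient(gy, axis=0)      # second derivative, vertical
    gxx = np.gradient(gx, axis=1)      # second derivative, horizontal
    ys, xs = np.mgrid[1:10, 1:10]      # pixel coordinates inside the block
    feats = np.stack([block, np.abs(gx), np.abs(gy),
                      np.abs(gxx), np.abs(gyy), xs, ys], axis=-1)
    return feats.reshape(-1, 7)

F = feature_vectors(np.arange(81.0).reshape(9, 9))
print(F.shape)  # (81, 7)
```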
② 4d. According to the feature vector of each pixel point in the current sub-block {I^p_{R,dis}(x_2,y_2)}, calculate the covariance matrix of the current sub-block, denoted C^p_{R,dis}:
<math> <mrow> <msubsup> <mi>C</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mo>=</mo> <mfrac> <mn>1</mn> <mrow> <mn>7</mn> <mo>&times;</mo> <mn>7</mn> <mo>-</mo> <mn>1</mn> </mrow> </mfrac> <munderover> <mi>&Sigma;</mi> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> <mn>9</mn> </munderover> <munderover> <mi>&Sigma;</mi> <mrow> <msub> <mi>y</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> <mn>9</mn> </munderover> <mrow> <mo>(</mo> <msubsup> <mi>X</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> <mo>-</mo> <msubsup> <mi>&mu;</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mo>)</mo> </mrow> <msup> <mrow> <mo>(</mo> <msubsup> <mi>X</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> <mo>-</mo> <msubsup> <mi>&mu;</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mo>)</mo> </mrow> <mi>T</mi> </msup> <mo>,</mo> </mrow> </math>
where C^p_{R,dis} has a dimension of 7×7, μ^p_{R,dis} represents the mean vector of the feature vectors of all pixel points in the current sub-block, and (X^p_{R,dis}(x_2,y_2) − μ^p_{R,dis})^T is the transpose of (X^p_{R,dis}(x_2,y_2) − μ^p_{R,dis}).
② 5d. Perform Cholesky decomposition on the covariance matrix C^p_{R,dis} of the current sub-block, C^p_{R,dis} = L × L^T, and obtain the Sigma feature set of the current sub-block, denoted S^p_{R,dis}:
<math> <mrow> <msubsup> <mi>S</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mo>=</mo> <mo>[</mo> <msqrt> <mn>10</mn> </msqrt> <mo>&times;</mo> <msup> <mi>L</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </msup> <mo>,</mo> <mo>.</mo> <mo>.</mo> <mo>.</mo> <mo>,</mo> <msqrt> <mn>10</mn> </msqrt> <mo>&times;</mo> <msup> <mi>L</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>&prime;</mo> <mo>)</mo> </mrow> </msup> <mo>,</mo> <mo>.</mo> <mo>.</mo> <mo>.</mo> <mo>,</mo> <msqrt> <mn>10</mn> </msqrt> <mo>&times;</mo> <msup> <mi>L</mi> <mrow> <mo>(</mo> <mn>7</mn> <mo>)</mo> </mrow> </msup> <mo>,</mo> <mo>-</mo> <msqrt> <mn>10</mn> </msqrt> <mo>&times;</mo> <msup> <mi>L</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </msup> <mo>,</mo> <mo>.</mo> <mo>.</mo> <mo>.</mo> <mo>,</mo> <mo>-</mo> <msqrt> <mn>10</mn> </msqrt> <mo>&times;</mo> <msup> <mi>L</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>&prime;</mo> <mo>)</mo> </mrow> </msup> <mo>,</mo> <mo>.</mo> <mo>.</mo> <mo>.</mo> <mo>,</mo> <mo>-</mo> <msqrt> <mn>10</mn> </msqrt> <mo>&times;</mo> <msup> <mi>L</mi> <mrow> <mo>(</mo> <mn>7</mn> <mo>)</mo> </mrow> </msup> <mo>,</mo> <msubsup> <mi>&mu;</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mo>]</mo> <mo>,</mo> </mrow> </math>
where L^T is the transposed matrix of L, S^p_{R,dis} has a dimension of 7×15, the symbol "[ ]" is the vector-representation symbol, 1≤i′≤7, L^(1) represents the 1st column vector of L, L^(i′) represents the i′-th column vector of L, and L^(7) represents the 7th column vector of L.
② 6d. Using the same operations as steps ② 3d to ② 5d, obtain the Sigma feature set of each neighborhood sub-block formed by a 9×9 neighborhood window centered on a neighborhood pixel point; the Sigma feature set S^q_{R,dis} of the neighborhood sub-block {I^q_{R,dis}(x_3,y_3)} likewise has a dimension of 7×15.
② 7d. According to the Sigma feature set S^p_{R,dis} of the current sub-block and the Sigma feature sets of the neighborhood sub-blocks formed by the 9×9 neighborhood windows centered on each neighborhood pixel point, acquire the structural information of the current pixel point, denoted I^str_{R,dis}(p): <math> <mrow> <msubsup> <mi>I</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>str</mi> </msubsup> <mrow> <mo>(</mo> <mi>p</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mrow> <munder> <mi>&Sigma;</mi> <mrow> <mi>q</mi> <mo>&Element;</mo> <mi>N</mi> <mo>&prime;</mo> <mrow> <mo>(</mo> <mi>p</mi> <mo>)</mo> </mrow> </mrow> </munder> <mi>exp</mi> <mrow> <mo>(</mo> <mo>-</mo> <mfrac> <msup> <mrow> <mo>|</mo> <mo>|</mo> <msubsup> <mi>S</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mo>-</mo> <msubsup> <mi>S</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>q</mi> </msubsup> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> </msup> <msup> <mrow> <mn>2</mn> <mi>&sigma;</mi> </mrow> <mn>2</mn> </msup> </mfrac> <mo>)</mo> </mrow> <mo>&times;</mo> <msub> <mi>R</mi> <mi>dis</mi> </msub> <mrow> <mo>(</mo> <mi>q</mi> <mo>)</mo> </mrow> </mrow> <mrow> <munder> <mi>&Sigma;</mi> <mrow> <mi>q</mi> <mo>&Element;</mo> <mi>N</mi> <mo>&prime;</mo> <mrow> <mo>(</mo> <mi>p</mi> <mo>)</mo> </mrow> </mrow> </munder> <mi>exp</mi> <mrow> <mo>(</mo> <mo>-</mo> <mfrac> <msup> <mrow> <mo>|</mo> <mo>|</mo> <msubsup> <mi>S</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>p</mi> </msubsup> <mo>-</mo> <msubsup> <mi>S</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>q</mi> </msubsup> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> </msup> <msup> <mrow> <mn>2</mn> <mi>&sigma;</mi> </mrow> <mn>2</mn> </msup> </mfrac> <mo>)</mo> </mrow> </mrow> </mfrac> <mo>,</mo> </mrow> </math> where N′(p) represents the set of coordinate positions in {R_dis(x,y)} of all neighborhood pixel points in the 21×21 neighborhood window centered on the current pixel point, exp() represents the exponential function with base e, e = 2.71828183, σ represents the standard deviation of the Gaussian function (σ = 0.06 in this embodiment), the symbol "|| ||" is the Euclidean-distance computation symbol, and R_dis(q) represents the pixel value of the pixel point in {R_dis(x,y)} whose coordinate position is q.
② 8d. According to the structural information I^str_{R,dis}(p) of the current pixel point, obtain the texture information of the current pixel point, denoted I^tex_{R,dis}(p): I^tex_{R,dis}(p) = R_dis(p) − I^str_{R,dis}(p), where R_dis(p) represents the pixel value of the current pixel point.
② 9d. Take the next pixel point to be processed in {R_dis(x,y)} as the current pixel point, then return to step ② 2d and continue until all pixel points in {R_dis(x,y)} have been processed, obtaining the structural information and the texture information of each pixel point in {R_dis(x,y)}. The structural information of all pixel points in {R_dis(x,y)} constitutes the structural image of {R_dis(x,y)}, denoted {I^str_{R,dis}(x,y)}; the texture information of all pixel points in {R_dis(x,y)} constitutes the texture image of {R_dis(x,y)}, denoted {I^tex_{R,dis}(x,y)}.
③ Compared with the original image, the structural image separates out detail information such as texture, so the structural information is more stable; the method of the invention therefore calculates the gradient similarity between each pixel point in {I^str_{L,org}(x,y)} and the corresponding pixel point in {I^str_{L,dis}(x,y)}. The gradient similarity between the pixel point with coordinate position (x,y) in {I^str_{L,org}(x,y)} and the pixel point with coordinate position (x,y) in {I^str_{L,dis}(x,y)} is denoted Q^str_L(x,y):
<math> <mrow> <msubsup> <mi>Q</mi> <mi>L</mi> <mi>str</mi> </msubsup> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mrow> <mn>2</mn> <mo>&times;</mo> <msubsup> <mi>m</mi> <mrow> <mi>L</mi> <mo>,</mo> <mi>org</mi> </mrow> <mi>str</mi> </msubsup> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>&times;</mo> <msubsup> <mi>m</mi> <mrow> <mi>L</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>str</mi> </msubsup> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>+</mo> <msub> <mi>C</mi> <mn>1</mn> </msub> </mrow> <mrow> <msup> <mrow> <mo>(</mo> <msubsup> <mi>m</mi> <mrow> <mi>L</mi> <mo>,</mo> <mi>org</mi> </mrow> <mi>str</mi> </msubsup> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>+</mo> <msup> <mrow> <mo>(</mo> <msubsup> <mi>m</mi> <mrow> <mi>L</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>str</mi> </msubsup> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>+</mo> <msub> <mi>C</mi> <mn>1</mn> </msub> </mrow> </mfrac> <mo>,</mo> </mrow> </math>
where m^str_{L,org}(x,y) = √((gx^str_{L,org}(x,y))² + (gy^str_{L,org}(x,y))²), m^str_{L,dis}(x,y) = √((gx^str_{L,dis}(x,y))² + (gy^str_{L,dis}(x,y))²), gx^str_{L,org}(x,y) and gy^str_{L,org}(x,y) represent the horizontal-direction and vertical-direction gradients of the pixel point with coordinate position (x,y) in {I^str_{L,org}(x,y)}, gx^str_{L,dis}(x,y) and gy^str_{L,dis}(x,y) represent the horizontal-direction and vertical-direction gradients of the pixel point with coordinate position (x,y) in {I^str_{L,dis}(x,y)}, and C_1 is a control parameter, taken as C_1 = 0.0026 in this embodiment. Then, according to the gradient similarity between each pixel point in {I^str_{L,org}(x,y)} and the corresponding pixel point in {I^str_{L,dis}(x,y)}, calculate the objective image-quality evaluation prediction value of {I^str_{L,dis}(x,y)}, denoted Q^str_L:
<math> <mrow> <msubsup> <mi>Q</mi> <mi>L</mi> <mi>str</mi> </msubsup> <mo>=</mo> <mfrac> <mrow> <munderover> <mi>&Sigma;</mi> <mrow> <mi>x</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>W</mi> </munderover> <munderover> <mi>&Sigma;</mi> <mrow> <mi>y</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>H</mi> </munderover> <msubsup> <mi>Q</mi> <mi>L</mi> <mi>str</mi> </msubsup> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> </mrow> <mrow> <mi>W</mi> <mo>&times;</mo> <mi>H</mi> </mrow> </mfrac> <mo>.</mo> </mrow> </math>
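The gradient-similarity step can be sketched end-to-end for one view; the gradient operator is an assumption (`np.gradient` in place of, e.g., a Sobel operator), so the numbers are illustrative only:

```python
import numpy as np

def gradient_similarity(org, dis, C1=0.0026):
    """Per-pixel gradient similarity and its image-level mean (step (3) style).

    org, dis: 2-D structural images of equal size.
    """
    gy_o, gx_o = np.gradient(org)
    gy_d, gx_d = np.gradient(dis)
    m_org = np.sqrt(gx_o ** 2 + gy_o ** 2)   # gradient magnitude, original
    m_dis = np.sqrt(gx_d ** 2 + gy_d ** 2)   # gradient magnitude, distorted
    Q = (2 * m_org * m_dis + C1) / (m_org ** 2 + m_dis ** 2 + C1)
    return Q, float(Q.mean())                # Q^str(x, y) map and Q^str

rng = np.random.default_rng(2)
img = rng.random((32, 32))
Qmap, Qstr = gradient_similarity(img, img)
print(Qstr)  # 1.0 -- identical images score perfectly
```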
Likewise, calculate the gradient similarity between each pixel point in {I^str_{R,org}(x,y)} and the corresponding pixel point in {I^str_{R,dis}(x,y)}. The gradient similarity between the pixel point with coordinate position (x,y) in {I^str_{R,org}(x,y)} and the pixel point with coordinate position (x,y) in {I^str_{R,dis}(x,y)} is denoted Q^str_R(x,y):
<math> <mrow> <msubsup> <mi>Q</mi> <mi>R</mi> <mi>str</mi> </msubsup> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mrow> <mn>2</mn> <mo>&times;</mo> <msubsup> <mi>m</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>org</mi> </mrow> <mi>str</mi> </msubsup> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>&times;</mo> <msubsup> <mi>m</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>str</mi> </msubsup> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>+</mo> <msub> <mi>C</mi> <mn>1</mn> </msub> </mrow> <mrow> <msup> <mrow> <mo>(</mo> <msubsup> <mi>m</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>org</mi> </mrow> <mi>str</mi> </msubsup> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>+</mo> <msup> <mrow> <mo>(</mo> <msubsup> <mi>m</mi> <mrow> <mi>R</mi> <mo>,</mo> <mi>dis</mi> </mrow> <mi>str</mi> </msubsup> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>+</mo> <msub> <mi>C</mi> <mn>1</mn> </msub> </mrow> </mfrac> <mo>,</mo> </mrow> </math>
where m^str_{R,org}(x,y) = √((gx^str_{R,org}(x,y))² + (gy^str_{R,org}(x,y))²), m^str_{R,dis}(x,y) = √((gx^str_{R,dis}(x,y))² + (gy^str_{R,dis}(x,y))²), gx^str_{R,org}(x,y) and gy^str_{R,org}(x,y) represent the horizontal-direction and vertical-direction gradients of the pixel point with coordinate position (x,y) in {I^str_{R,org}(x,y)}, gx^str_{R,dis}(x,y) and gy^str_{R,dis}(x,y) represent the horizontal-direction and vertical-direction gradients of the pixel point with coordinate position (x,y) in {I^str_{R,dis}(x,y)}, and C_1 is a control parameter, taken as C_1 = 0.0026 in this embodiment. Then, according to the gradient similarity between each pixel point in {I^str_{R,org}(x,y)} and the corresponding pixel point in {I^str_{R,dis}(x,y)}, calculate the objective image-quality evaluation prediction value of {I^str_{R,dis}(x,y)}, denoted Q^str_R:
<math> <mrow> <msubsup> <mi>Q</mi> <mi>R</mi> <mi>str</mi> </msubsup> <mo>=</mo> <mfrac> <mrow> <munderover> <mi>&Sigma;</mi> <mrow> <mi>x</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>W</mi> </munderover> <munderover> <mi>&Sigma;</mi> <mrow> <mi>y</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>H</mi> </munderover> <msubsup> <mi>Q</mi> <mi>R</mi> <mi>str</mi> </msubsup> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> </mrow> <mrow> <mi>W</mi> <mo>&times;</mo> <mi>H</mi> </mrow> </mfrac> <mo>.</mo> </mrow> </math>
④ Because mean and standard-deviation information can well evaluate changes in the detail information of an image, the method of the invention obtains the structural similarity between each 8×8 sub-block of {I^tex_{L,org}(x,y)} and the corresponding 8×8 sub-block of {I^tex_{L,dis}(x,y)}, and from these calculates the objective image-quality evaluation prediction value of {I^tex_{L,dis}(x,y)}, denoted Q^tex_L. In this embodiment, the acquisition process of Q^tex_L in step ④ is as follows:
④ 1a. Divide {I^tex_{L,org}(x,y)} and {I^tex_{L,dis}(x,y)} each into mutually non-overlapping sub-blocks of size 8×8. Define the current k-th sub-block to be processed in {I^tex_{L,org}(x,y)} as the current first sub-block, and the current k-th sub-block to be processed in {I^tex_{L,dis}(x,y)} as the current second sub-block, where the initial value of k is 1.
④ 2a. Denote the current first sub-block as {f_{L_org,k}(x_4,y_4)} and the current second sub-block as {f_{L_dis,k}(x_4,y_4)}, where (x_4,y_4) represents the coordinate position of a pixel point in {f_{L_org,k}(x_4,y_4)} and in {f_{L_dis,k}(x_4,y_4)}, 1≤x_4≤8, 1≤y_4≤8, f_{L_org,k}(x_4,y_4) represents the pixel value of the pixel point whose coordinate position in the current first sub-block is (x_4,y_4), and f_{L_dis,k}(x_4,y_4) represents the pixel value of the pixel point whose coordinate position in the current second sub-block is (x_4,y_4).
④ 3a. Calculate the mean and the standard deviation of the current first sub-block {f_{L_org,k}(x_4,y_4)}, denoted μ_{L_org,k} and σ_{L_org,k} respectively:
<math> <mrow> <msub> <mi>&mu;</mi> <mrow> <msub> <mi>L</mi> <mi>org</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mfrac> <mrow> <munderover> <mi>&Sigma;</mi> <mrow> <msub> <mi>y</mi> <mn>4</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> <mn>8</mn> </munderover> <munderover> <mi>&Sigma;</mi> <mrow> <msub> <mi>x</mi> <mn>4</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> <mn>8</mn> </munderover> <msub> <mi>f</mi> <mrow> <msub> <mi>L</mi> <mi>org</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>4</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>4</mn> </msub> <mo>)</mo> </mrow> </mrow> <mn>64</mn> </mfrac> <mo>,</mo> <msub> <mi>&sigma;</mi> <mrow> <msub> <mi>L</mi> <mi>org</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>=</mo> <msqrt> <mfrac> <mrow> <munderover> <mi>&Sigma;</mi> <mrow> <msub> <mi>y</mi> <mn>4</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> <mn>8</mn> </munderover> <munderover> <mi>&Sigma;</mi> <mrow> <msub> <mi>x</mi> <mn>4</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> <mn>8</mn> </munderover> <msup> <mrow> <mo>(</mo> <msub> <mi>f</mi> <mrow> <msub> <mi>L</mi> <mi>org</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>4</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>4</mn> </msub> <mo>)</mo> </mrow> <mo>-</mo> <msub> <mi>&mu;</mi> <mrow> <msub> <mi>L</mi> <mi>org</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mrow> <mn>64</mn> </mfrac> </msqrt> <mo>.</mo> </mrow> </math>
Likewise, calculate the mean and the standard deviation of the current second sub-block {f_{L_dis,k}(x_4,y_4)}, denoted μ_{L_dis,k} and σ_{L_dis,k} respectively:
<math> <mrow> <msub> <mi>&mu;</mi> <mrow> <msub> <mi>L</mi> <mi>dis</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mfrac> <mrow> <munderover> <mi>&Sigma;</mi> <mrow> <msub> <mi>y</mi> <mn>4</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> <mn>8</mn> </munderover> <munderover> <mi>&Sigma;</mi> <mrow> <msub> <mi>x</mi> <mn>4</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> <mn>8</mn> </munderover> <msub> <mi>f</mi> <mrow> <msub> <mi>L</mi> <mi>dis</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>4</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>4</mn> </msub> <mo>)</mo> </mrow> </mrow> <mn>64</mn> </mfrac> <mo>,</mo> <msub> <mi>&sigma;</mi> <mrow> <msub> <mi>L</mi> <mi>dis</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>=</mo> <msqrt> <mfrac> <mrow> <munderover> <mi>&Sigma;</mi> <mrow> <msub> <mi>y</mi> <mn>4</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> <mn>8</mn> </munderover> <munderover> <mi>&Sigma;</mi> <mrow> <msub> <mi>x</mi> <mn>4</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> <mn>8</mn> </munderover> <msup> <mrow> <mo>(</mo> <msub> <mi>f</mi> <mrow> <msub> <mi>L</mi> <mi>dis</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mn>4</mn> </msub> <mo>,</mo> <msub> <mi>y</mi> <mn>4</mn> </msub> <mo>)</mo> </mrow> <mo>-</mo> <msub> <mi>&mu;</mi> <mrow> <msub> <mi>L</mi> <mi>dis</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mrow> <mn>64</mn> </mfrac> </msqrt> <mo>.</mo> </mrow> </math>
④ 4a. Calculate the structural similarity between the current first sub-block {f_{L_org,k}(x_4,y_4)} and the current second sub-block {f_{L_dis,k}(x_4,y_4)}, denoted Q^tex_{L,k}:
<math> <mrow> <msubsup> <mi>Q</mi> <mrow> <mi>L</mi> <mo>,</mo> <mi>k</mi> </mrow> <mi>tex</mi> </msubsup> <mo>=</mo> <mfrac> <mrow> <mn>4</mn> <mo>&times;</mo> <mrow> <mo>(</mo> <msub> <mi>&sigma;</mi> <mrow> <msub> <mi>L</mi> <mi>org</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>&times;</mo> <msub> <mi>&sigma;</mi> <mrow> <msub> <mi>L</mi> <mi>dis</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>)</mo> </mrow> <mo>&times;</mo> <mrow> <mo>(</mo> <msub> <mi>&mu;</mi> <mrow> <msub> <mi>L</mi> <mi>org</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>&times;</mo> <msub> <mi>&mu;</mi> <mrow> <msub> <mi>L</mi> <mi>dis</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>)</mo> </mrow> <mo>+</mo> <msub> <mi>C</mi> <mn>2</mn> </msub> </mrow> <mrow> <mrow> <mo>(</mo> <msup> <mrow> <mo>(</mo> <msub> <mi>&sigma;</mi> <mrow> <msub> <mi>L</mi> <mi>org</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>+</mo> <msup> <mrow> <mo>(</mo> <msub> <mi>&sigma;</mi> <mrow> <msub> <mi>L</mi> <mi>dis</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>)</mo> </mrow> <mo>+</mo> <mrow> <mo>(</mo> <msup> <mrow> <mo>(</mo> <msub> <mi>&mu;</mi> <mrow> <msub> <mi>L</mi> <mi>org</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>+</mo> <msup> <mrow> <mo>(</mo> <msub> <mi>&mu;</mi> <mrow> <msub> <mi>L</mi> <mi>dis</mi> </msub> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>)</mo> </mrow> <mo>+</mo> <msub> <mi>C</mi> <mn>2</mn> </msub> </mrow> </mfrac> <mo>,</mo> </mrow> </math>
where C_2 is a control parameter, taken as C_2 = 0.85 in this embodiment.
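The mean, standard-deviation, and block-similarity computations for one 8×8 sub-block pair can be sketched as follows (illustrative names; the standard deviation uses the 1/64 normalization of the formulas above):

```python
import numpy as np

def block_similarity(f_org, f_dis, C2=0.85):
    """Structural similarity of one 8x8 sub-block pair (steps (4)-3a/4a style)."""
    mu_o, mu_d = f_org.mean(), f_dis.mean()
    sd_o = np.sqrt(((f_org - mu_o) ** 2).sum() / 64)   # 1/64 normalization
    sd_d = np.sqrt(((f_dis - mu_d) ** 2).sum() / 64)
    num = 4 * (sd_o * sd_d) * (mu_o * mu_d) + C2
    den = (sd_o ** 2 + sd_d ** 2) + (mu_o ** 2 + mu_d ** 2) + C2
    return float(num / den)

blk = np.arange(64.0).reshape(8, 8)      # stand-in texture sub-block
print(block_similarity(blk, blk))
```

Note that, unlike the classical SSIM index, this expression does not evaluate to 1 for identical sub-blocks; it is reproduced as the patent defines it.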
Fourthly-5a. Let $k=k+1$, take the next sub-block to be processed in $\{L_{org}^{tex}(x,y)\}$ as the current first sub-block and the next sub-block to be processed in $\{L_{dis}^{tex}(x,y)\}$ as the current second sub-block, and then return to step Fourthly-2a and continue until all sub-blocks in $\{L_{org}^{tex}(x,y)\}$ and $\{L_{dis}^{tex}(x,y)\}$ have been processed, yielding the structural similarity between each sub-block in $\{L_{org}^{tex}(x,y)\}$ and the corresponding sub-block in $\{L_{dis}^{tex}(x,y)\}$; here the "=" in $k=k+1$ is the assignment operator.
Fourthly-6a. According to the structural similarity between each sub-block in $\{L_{org}^{tex}(x,y)\}$ and the corresponding sub-block in $\{L_{dis}^{tex}(x,y)\}$, calculate the image quality objective evaluation prediction value of $\{L_{dis}^{tex}(x,y)\}$, denoted $Q_L^{tex}$.
Likewise, by obtaining the structural similarity between each sub-block of size 8 × 8 in $\{R_{org}^{tex}(x,y)\}$ and the corresponding sub-block of size 8 × 8 in $\{R_{dis}^{tex}(x,y)\}$, calculate the image quality objective evaluation prediction value of $\{R_{dis}^{tex}(x,y)\}$, denoted $Q_R^{tex}$. In this embodiment, the image quality objective evaluation prediction value $Q_R^{tex}$ of $\{R_{dis}^{tex}(x,y)\}$ in step Fourthly is obtained as follows:
Fourthly-1b. Divide $\{R_{org}^{tex}(x,y)\}$ and $\{R_{dis}^{tex}(x,y)\}$ each into mutually non-overlapping sub-blocks of size 8 × 8; define the current $k$-th sub-block to be processed in $\{R_{org}^{tex}(x,y)\}$ as the current first sub-block and the current $k$-th sub-block to be processed in $\{R_{dis}^{tex}(x,y)\}$ as the current second sub-block, where the initial value of $k$ is 1.
Fourthly-2b. Denote the current first sub-block as $\{f_{R_{org},k}(x_4,y_4)\}$ and the current second sub-block as $\{f_{R_{dis},k}(x_4,y_4)\}$, where $(x_4,y_4)$ represents the coordinate position of a pixel point in the two sub-blocks, $1\le x_4\le 8$, $1\le y_4\le 8$, $f_{R_{org},k}(x_4,y_4)$ represents the pixel value of the pixel point with coordinate position $(x_4,y_4)$ in the current first sub-block, and $f_{R_{dis},k}(x_4,y_4)$ represents the pixel value of the pixel point with coordinate position $(x_4,y_4)$ in the current second sub-block.
Fourthly-3b. Calculate the mean and standard deviation of the current first sub-block $\{f_{R_{org},k}(x_4,y_4)\}$, correspondingly denoted $\mu_{R_{org},k}$ and $\sigma_{R_{org},k}$:

$$\mu_{R_{org},k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{R_{org},k}(x_4,y_4)}{64},\quad \sigma_{R_{org},k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{R_{org},k}(x_4,y_4)-\mu_{R_{org},k}\right)^2}{64}};$$

likewise, calculate the mean and standard deviation of the current second sub-block $\{f_{R_{dis},k}(x_4,y_4)\}$, correspondingly denoted $\mu_{R_{dis},k}$ and $\sigma_{R_{dis},k}$:

$$\mu_{R_{dis},k}=\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}f_{R_{dis},k}(x_4,y_4)}{64},\quad \sigma_{R_{dis},k}=\sqrt{\frac{\sum_{y_4=1}^{8}\sum_{x_4=1}^{8}\left(f_{R_{dis},k}(x_4,y_4)-\mu_{R_{dis},k}\right)^2}{64}}.$$
Fourthly-4b. Calculate the structural similarity between the current first sub-block $\{f_{R_{org},k}(x_4,y_4)\}$ and the current second sub-block $\{f_{R_{dis},k}(x_4,y_4)\}$, denoted $Q_{R,k}^{tex}$:

$$Q_{R,k}^{tex}=\frac{4\times(\sigma_{R_{org},k}\times\sigma_{R_{dis},k})\times(\mu_{R_{org},k}\times\mu_{R_{dis},k})+C_2}{\left((\sigma_{R_{org},k})^2+(\sigma_{R_{dis},k})^2\right)\times\left((\mu_{R_{org},k})^2+(\mu_{R_{dis},k})^2\right)+C_2},$$

where $C_2$ is a control parameter; in this embodiment $C_2=0.85$ is taken.
Fourthly-5b. Let $k=k+1$, take the next sub-block to be processed in $\{R_{org}^{tex}(x,y)\}$ as the current first sub-block and the next sub-block to be processed in $\{R_{dis}^{tex}(x,y)\}$ as the current second sub-block, and then return to step Fourthly-2b and continue until all sub-blocks in $\{R_{org}^{tex}(x,y)\}$ and $\{R_{dis}^{tex}(x,y)\}$ have been processed, yielding the structural similarity between each sub-block in $\{R_{org}^{tex}(x,y)\}$ and the corresponding sub-block in $\{R_{dis}^{tex}(x,y)\}$; here the "=" in $k=k+1$ is the assignment operator.
Fourthly-6b. According to the structural similarity between each sub-block in $\{R_{org}^{tex}(x,y)\}$ and the corresponding sub-block in $\{R_{dis}^{tex}(x,y)\}$, calculate the image quality objective evaluation prediction value of $\{R_{dis}^{tex}(x,y)\}$, denoted $Q_R^{tex}$.
Fifthly, fuse $Q_L^{str}$ and $Q_R^{str}$ to obtain the image quality objective evaluation prediction value of the structure images of $S_{dis}$, denoted $Q^{str}$: $Q^{str}=w_s\times Q_L^{str}+(1-w_s)\times Q_R^{str}$, where $w_s$ represents the weight proportion of $Q_L^{str}$ relative to $Q_R^{str}$. In this embodiment, $w_s=0.980$ is taken for the Ningbo University stereo image library and $w_s=0.629$ for the LIVE stereo image library.
Likewise, fuse $Q_L^{tex}$ and $Q_R^{tex}$ to obtain the image quality objective evaluation prediction value of the texture images of $S_{dis}$, denoted $Q^{tex}$: $Q^{tex}=w_t\times Q_L^{tex}+(1-w_t)\times Q_R^{tex}$, where $w_t$ represents the weight proportion of $Q_L^{tex}$ relative to $Q_R^{tex}$. In this embodiment, $w_t=0.888$ is taken for the Ningbo University stereo image library and $w_t=0.503$ for the LIVE stereo image library.
Sixthly, fuse $Q^{str}$ and $Q^{tex}$ to obtain the image quality objective evaluation prediction value of $S_{dis}$, denoted $Q$: $Q=w\times Q^{str}+(1-w)\times Q^{tex}$, where $w$ represents the weight proportion of $Q^{str}$. In this embodiment, $w=0.882$ is taken for the Ningbo University stereo image library and $w=0.838$ for the LIVE stereo image library.
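The two-level fusion of steps Fifthly and Sixthly can be sketched as below. The helper name `fuse_quality` is ours; the left/right fusion form $w\cdot Q_L+(1-w)\cdot Q_R$ mirrors the explicit step-Sixthly formula, since the embedded formula images for step Fifthly are not reproduced in this text. Default weights are the Ningbo University library values quoted above.

```python
def fuse_quality(QL_str, QR_str, QL_tex, QR_tex,
                 ws=0.980, wt=0.888, w=0.882):
    """Fuse per-view structure/texture scores into the final prediction Q."""
    Qstr = ws * QL_str + (1 - ws) * QR_str  # structure-image score (step 5)
    Qtex = wt * QL_tex + (1 - wt) * QR_tex  # texture-image score (step 5)
    return w * Qstr + (1 - w) * Qtex        # final prediction Q (step 6)
```

Because each stage is a convex combination, four per-view scores of 1.0 always fuse to exactly 1.0, regardless of the weights.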
Here, four objective criteria commonly used for assessing image quality evaluation methods are adopted as evaluation indexes: the Pearson linear correlation coefficient (PLCC), the Spearman rank-order correlation coefficient (SROCC), the Kendall rank correlation coefficient (KROCC), and the root mean square error (RMSE). Under nonlinear regression conditions, PLCC and RMSE reflect the accuracy of the objective evaluation results for the distorted stereo images, while SROCC and KROCC reflect their monotonicity.
The method of the present invention is used to calculate the image quality objective evaluation prediction value of each distorted stereo image in the Ningbo University stereo image library and in the LIVE stereo image library, and the mean subjective score difference of each distorted stereo image in the two libraries is obtained with an existing subjective evaluation method. The prediction values calculated by the method are then fitted with a five-parameter Logistic nonlinear function; the higher the PLCC, SROCC and KROCC values and the lower the RMSE value, the better the correlation between the objective evaluation method and the mean subjective score differences. Tables 1, 2, 3 and 4 list the Pearson, Spearman and Kendall correlation coefficients and the root mean square errors between the image quality objective evaluation prediction values and the mean subjective score differences of the distorted stereo images obtained by the method of the present invention. As can be seen from Tables 1 to 4, the correlation between the final prediction values obtained by the method and the mean subjective score differences is very high, indicating that the objective evaluation results agree well with human subjective perception, which is sufficient to demonstrate the effectiveness of the method.
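The four benchmark indexes above can be computed with plain NumPy; this is a sketch (library routines such as those in scipy.stats yield the same coefficients), it assumes no tied scores for the rank coefficients, and the five-parameter Logistic fitting step is omitted.

```python
import numpy as np

def _ranks(a):
    """Rank positions 1..n of a 1-D array (assumes no ties)."""
    order = np.argsort(a)
    r = np.empty(len(a))
    r[order] = np.arange(1, len(a) + 1)
    return r

def benchmark(q, dmos):
    """PLCC, SROCC, KROCC and RMSE between objective predictions q
    and mean subjective score differences dmos."""
    q, dmos = np.asarray(q, float), np.asarray(dmos, float)
    plcc = np.corrcoef(q, dmos)[0, 1]
    srocc = np.corrcoef(_ranks(q), _ranks(dmos))[0, 1]
    # Kendall tau: concordant minus discordant pairs over all pairs
    n, s = len(q), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(q[i] - q[j]) * np.sign(dmos[i] - dmos[j])
    krocc = s / (n * (n - 1) / 2)
    rmse = float(np.sqrt(np.mean((q - dmos) ** 2)))
    return plcc, srocc, krocc, rmse
```

In practice the nonlinear Logistic mapping is applied to `q` before computing PLCC and RMSE, while SROCC and KROCC are invariant to any monotonic mapping.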
Fig. 2 shows the scatter plot of the image quality objective evaluation prediction value versus the mean subjective score difference for each distorted stereo image in the Ningbo University stereo image library obtained by the method of the present invention, and Fig. 3 shows the corresponding scatter plot for the LIVE stereo image library; the more concentrated the scatter points, the better the consistency between the objective evaluation results and subjective perception. As can be seen from Figs. 2 and 3, the scatter plots obtained by the method of the present invention are relatively concentrated and match the subjective evaluation data closely.
Table 1. Comparison of Pearson correlation coefficients between the image quality objective evaluation prediction values and the mean subjective score differences of the distorted stereo images obtained by the method of the present invention.

Table 2. Comparison of Spearman correlation coefficients between the image quality objective evaluation prediction values and the mean subjective score differences of the distorted stereo images obtained by the method of the present invention.

Table 3. Comparison of Kendall correlation coefficients between the image quality objective evaluation prediction values and the mean subjective score differences of the distorted stereo images obtained by the method of the present invention.

Table 4. Comparison of root mean square errors between the image quality objective evaluation prediction values and the mean subjective score differences of the distorted stereo images obtained by the method of the present invention.

Claims (4)

1. A three-dimensional image quality objective evaluation method based on structure texture separation is characterized in that the processing process is as follows:
firstly, respectively implementing structure texture separation on a left viewpoint image and a right viewpoint image of an original undistorted stereo image and a left viewpoint image and a right viewpoint image of a distorted stereo image to be evaluated to obtain respective structure images and texture images;
secondly, obtaining an objective evaluation prediction value of the image quality of the structural image of the left viewpoint image of the distorted stereo image to be evaluated by calculating the gradient similarity between each pixel point in the structural image of the left viewpoint image of the original undistorted stereo image and the corresponding pixel point in the structural image of the left viewpoint image of the distorted stereo image to be evaluated; similarly, obtaining an objective evaluation prediction value of the image quality of the structural image of the right viewpoint image of the distorted stereo image to be evaluated by calculating the gradient similarity between each pixel point in the structural image of the right viewpoint image of the original undistorted stereo image and the corresponding pixel point in the structural image of the right viewpoint image of the distorted stereo image to be evaluated;
next, obtaining an objective image quality evaluation prediction value of the texture image of the left viewpoint image of the distorted stereo image to be evaluated by calculating the structural similarity between each sub-block of size 8 × 8 in the texture image of the left viewpoint image of the original undistorted stereo image and the corresponding sub-block of size 8 × 8 in the texture image of the left viewpoint image of the distorted stereo image to be evaluated; similarly, obtaining an objective evaluation prediction value of the image quality of the texture image of the right viewpoint image of the distorted stereo image to be evaluated by calculating the structural similarity between each sub-block of size 8 × 8 in the texture image of the right viewpoint image of the original undistorted stereo image and the corresponding sub-block of size 8 × 8 in the texture image of the right viewpoint image of the distorted stereo image to be evaluated;
thirdly, fusing the image quality objective evaluation predicted values of the structural images of the left viewpoint image and the right viewpoint image of the distorted three-dimensional image to be evaluated to obtain the image quality objective evaluation predicted value of the structural image of the distorted three-dimensional image to be evaluated; similarly, fusing the image quality objective evaluation predicted values of the texture images of the left viewpoint image and the right viewpoint image of the distorted three-dimensional image to be evaluated to obtain the image quality objective evaluation predicted value of the texture image of the distorted three-dimensional image to be evaluated;
and finally, fusing the image quality objective evaluation predicted value of the structural image and the texture image of the distorted three-dimensional image to be evaluated to obtain the image quality objective evaluation predicted value of the distorted three-dimensional image to be evaluated.
2. The objective evaluation method for stereo image quality based on structure texture separation according to claim 1, characterized in that it comprises the following steps:
① Let $S_{org}$ denote the original undistorted stereo image and $S_{dis}$ the distorted stereo image to be evaluated; denote the left viewpoint image of $S_{org}$ as $\{L_{org}(x,y)\}$, the right viewpoint image of $S_{org}$ as $\{R_{org}(x,y)\}$, the left viewpoint image of $S_{dis}$ as $\{L_{dis}(x,y)\}$, and the right viewpoint image of $S_{dis}$ as $\{R_{dis}(x,y)\}$, where $(x,y)$ denotes the coordinate position of a pixel point in the left and right viewpoint images, $1\le x\le W$, $1\le y\le H$, $W$ denotes the width of the left and right viewpoint images, $H$ denotes their height, $L_{org}(x,y)$ represents the pixel value of the pixel point with coordinate position $(x,y)$ in $\{L_{org}(x,y)\}$, $R_{org}(x,y)$ represents the pixel value of the pixel point with coordinate position $(x,y)$ in $\{R_{org}(x,y)\}$, $L_{dis}(x,y)$ represents the pixel value of the pixel point with coordinate position $(x,y)$ in $\{L_{dis}(x,y)\}$, and $R_{dis}(x,y)$ represents the pixel value of the pixel point with coordinate position $(x,y)$ in $\{R_{dis}(x,y)\}$;
② Perform structure-texture separation on $\{L_{org}(x,y)\}$, $\{R_{org}(x,y)\}$, $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$ respectively to obtain the structure image and texture image of each; denote the structure image and texture image of $\{L_{org}(x,y)\}$ correspondingly as $\{L_{org}^{str}(x,y)\}$ and $\{L_{org}^{tex}(x,y)\}$, those of $\{R_{org}(x,y)\}$ as $\{R_{org}^{str}(x,y)\}$ and $\{R_{org}^{tex}(x,y)\}$, those of $\{L_{dis}(x,y)\}$ as $\{L_{dis}^{str}(x,y)\}$ and $\{L_{dis}^{tex}(x,y)\}$, and those of $\{R_{dis}(x,y)\}$ as $\{R_{dis}^{str}(x,y)\}$ and $\{R_{dis}^{tex}(x,y)\}$, where $L_{org}^{str}(x,y)$, $L_{org}^{tex}(x,y)$, $R_{org}^{str}(x,y)$, $R_{org}^{tex}(x,y)$, $L_{dis}^{str}(x,y)$, $L_{dis}^{tex}(x,y)$, $R_{dis}^{str}(x,y)$ and $R_{dis}^{tex}(x,y)$ represent the pixel values of the pixel points with coordinate position $(x,y)$ in the respective images;
③ Calculate the gradient similarity between each pixel point in $\{L_{org}^{str}(x,y)\}$ and the corresponding pixel point in $\{L_{dis}^{str}(x,y)\}$; denote the gradient similarity between the pixel point with coordinate position $(x,y)$ in $\{L_{org}^{str}(x,y)\}$ and the pixel point with coordinate position $(x,y)$ in $\{L_{dis}^{str}(x,y)\}$ as $Q_L^{str}(x,y)$:

$$Q_L^{str}(x,y)=\frac{2\times m_{L,org}^{str}(x,y)\times m_{L,dis}^{str}(x,y)+C_1}{(m_{L,org}^{str}(x,y))^2+(m_{L,dis}^{str}(x,y))^2+C_1},$$

where $m_{L,org}^{str}(x,y)=\sqrt{(gx_{L,org}^{str}(x,y))^2+(gy_{L,org}^{str}(x,y))^2}$ and $m_{L,dis}^{str}(x,y)=\sqrt{(gx_{L,dis}^{str}(x,y))^2+(gy_{L,dis}^{str}(x,y))^2}$, $gx_{L,org}^{str}(x,y)$ represents the horizontal gradient of the pixel point with coordinate position $(x,y)$ in $\{L_{org}^{str}(x,y)\}$, $gy_{L,org}^{str}(x,y)$ represents its vertical gradient, $gx_{L,dis}^{str}(x,y)$ represents the horizontal gradient of the pixel point with coordinate position $(x,y)$ in $\{L_{dis}^{str}(x,y)\}$, $gy_{L,dis}^{str}(x,y)$ represents its vertical gradient, and $C_1$ is a control parameter; then, according to the gradient similarity between each pixel point in $\{L_{org}^{str}(x,y)\}$ and the corresponding pixel point in $\{L_{dis}^{str}(x,y)\}$, calculate the image quality objective evaluation prediction value of $\{L_{dis}^{str}(x,y)\}$, denoted $Q_L^{str}$;
Likewise, calculate the gradient similarity between each pixel point in $\{R_{org}^{str}(x,y)\}$ and the corresponding pixel point in $\{R_{dis}^{str}(x,y)\}$; denote the gradient similarity between the pixel point with coordinate position $(x,y)$ in $\{R_{org}^{str}(x,y)\}$ and the pixel point with coordinate position $(x,y)$ in $\{R_{dis}^{str}(x,y)\}$ as $Q_R^{str}(x,y)$:

$$Q_R^{str}(x,y)=\frac{2\times m_{R,org}^{str}(x,y)\times m_{R,dis}^{str}(x,y)+C_1}{(m_{R,org}^{str}(x,y))^2+(m_{R,dis}^{str}(x,y))^2+C_1},$$

where $m_{R,org}^{str}(x,y)=\sqrt{(gx_{R,org}^{str}(x,y))^2+(gy_{R,org}^{str}(x,y))^2}$ and $m_{R,dis}^{str}(x,y)=\sqrt{(gx_{R,dis}^{str}(x,y))^2+(gy_{R,dis}^{str}(x,y))^2}$, $gx_{R,org}^{str}(x,y)$ represents the horizontal gradient of the pixel point with coordinate position $(x,y)$ in $\{R_{org}^{str}(x,y)\}$, $gy_{R,org}^{str}(x,y)$ represents its vertical gradient, $gx_{R,dis}^{str}(x,y)$ represents the horizontal gradient of the pixel point with coordinate position $(x,y)$ in $\{R_{dis}^{str}(x,y)\}$, $gy_{R,dis}^{str}(x,y)$ represents its vertical gradient, and $C_1$ is a control parameter; then, according to the gradient similarity between each pixel point in $\{R_{org}^{str}(x,y)\}$ and the corresponding pixel point in $\{R_{dis}^{str}(x,y)\}$, calculate the image quality objective evaluation prediction value of $\{R_{dis}^{str}(x,y)\}$, denoted $Q_R^{str}$;
④ Obtain the structural similarity between each sub-block of size 8 × 8 in $\{L_{org}^{tex}(x,y)\}$ and the corresponding sub-block of size 8 × 8 in $\{L_{dis}^{tex}(x,y)\}$, and from these calculate the image quality objective evaluation prediction value of $\{L_{dis}^{tex}(x,y)\}$, denoted $Q_L^{tex}$; likewise, obtain the structural similarity between each sub-block of size 8 × 8 in $\{R_{org}^{tex}(x,y)\}$ and the corresponding sub-block of size 8 × 8 in $\{R_{dis}^{tex}(x,y)\}$, and from these calculate the image quality objective evaluation prediction value of $\{R_{dis}^{tex}(x,y)\}$, denoted $Q_R^{tex}$;
⑤ Fuse $Q_L^{str}$ and $Q_R^{str}$ to obtain the image quality objective evaluation prediction value of the structure images of $S_{dis}$, denoted $Q^{str}$: $Q^{str}=w_s\times Q_L^{str}+(1-w_s)\times Q_R^{str}$, where $w_s$ represents the weight proportion of $Q_L^{str}$ relative to $Q_R^{str}$; likewise, fuse $Q_L^{tex}$ and $Q_R^{tex}$ to obtain the image quality objective evaluation prediction value of the texture images of $S_{dis}$, denoted $Q^{tex}$: $Q^{tex}=w_t\times Q_L^{tex}+(1-w_t)\times Q_R^{tex}$, where $w_t$ represents the weight proportion of $Q_L^{tex}$ relative to $Q_R^{tex}$;
⑥ Fuse $Q^{str}$ and $Q^{tex}$ to obtain the image quality objective evaluation prediction value of $S_{dis}$, denoted $Q$: $Q=w\times Q^{str}+(1-w)\times Q^{tex}$, where $w$ represents the weight proportion of $Q^{str}$.
3. The objective evaluation method for stereo image quality based on structure-texture separation according to claim 2, characterized in that in step ② the structure image $\{L_{org}^{str}(x,y)\}$ and texture image $\{L_{org}^{tex}(x,y)\}$ of $\{L_{org}(x,y)\}$ are acquired as follows:

②-1a. Define the current pixel point to be processed in $\{L_{org}(x,y)\}$ as the current pixel point;

②-2a. Denote the coordinate position of the current pixel point in $\{L_{org}(x,y)\}$ as $p$; define each pixel point other than the current pixel point within the 21 × 21 neighborhood window centered on the current pixel point as a neighborhood pixel point; define the block formed by the 9 × 9 neighborhood window centered on the current pixel point as the current sub-block, denoted $\{I_{L,org}^{p}(x_2,y_2)\}$; define the blocks formed by the 9 × 9 neighborhood windows centered on each neighborhood pixel point within the 21 × 21 neighborhood window centered on the current pixel point as neighborhood sub-blocks, and denote the neighborhood sub-block formed by the 9 × 9 neighborhood window centered on the neighborhood pixel point with coordinate position $q$ in $\{L_{org}(x,y)\}$ as $\{I_{L,org}^{q}(x_3,y_3)\}$, where $p\in\Omega$, $q\in\Omega$, $\Omega$ denotes the set of coordinate positions of all pixel points in $\{L_{org}(x,y)\}$, $(x_2,y_2)$ represents the coordinate position of a pixel point of the current sub-block $\{I_{L,org}^{p}(x_2,y_2)\}$ within the current sub-block, $1\le x_2\le 9$, $1\le y_2\le 9$, $(x_3,y_3)$ represents the coordinate position of a pixel point of the neighborhood sub-block within the neighborhood sub-block, $1\le x_3\le 9$, $1\le y_3\le 9$, $I_{L,org}^{p}(x_2,y_2)$ represents the pixel value of the pixel point with coordinate position $(x_2,y_2)$ in the current sub-block, and $I_{L,org}^{q}(x_3,y_3)$ represents the pixel value of the pixel point with coordinate position $(x_3,y_3)$ in the neighborhood sub-block $\{I_{L,org}^{q}(x_3,y_3)\}$;
In step ②-2a, for any neighborhood pixel point and any pixel point in the current sub-block, suppose its coordinate position in $\{L_{org}(x,y)\}$ is $(x,y)$: if $x<1$ and $1\le y\le H$, assign to it the pixel value of the pixel point with coordinate position $(1,y)$ in $\{L_{org}(x,y)\}$; if $x>W$ and $1\le y\le H$, assign the pixel value of the pixel point with coordinate position $(W,y)$; if $1\le x\le W$ and $y<1$, assign the pixel value of the pixel point with coordinate position $(x,1)$; if $1\le x\le W$ and $y>H$, assign the pixel value of the pixel point with coordinate position $(x,H)$; if $x<1$ and $y<1$, assign the pixel value of the pixel point with coordinate position $(1,1)$; if $x>W$ and $y<1$, assign the pixel value of the pixel point with coordinate position $(W,1)$; if $x<1$ and $y>H$, assign the pixel value of the pixel point with coordinate position $(1,H)$; and if $x>W$ and $y>H$, assign the pixel value of the pixel point with coordinate position $(W,H)$;
②-3a. Obtain the feature vector of each pixel point in the current sub-block $\{I_{L,org}^{p}(x_2, y_2)\}$, and denote the feature vector of the pixel point whose coordinate position is $(x_2, y_2)$ as $X_{L,org}^{p}(x_2, y_2)$:

$$X_{L,org}^{p}(x_2, y_2) = \left[ I_{L,org}^{p}(x_2, y_2),\ \left| \frac{\partial I_{L,org}^{p}(x_2, y_2)}{\partial x} \right|,\ \left| \frac{\partial I_{L,org}^{p}(x_2, y_2)}{\partial y} \right|,\ \left| \frac{\partial^2 I_{L,org}^{p}(x_2, y_2)}{\partial x^2} \right|,\ \left| \frac{\partial^2 I_{L,org}^{p}(x_2, y_2)}{\partial y^2} \right|,\ x_2,\ y_2 \right],$$

where $X_{L,org}^{p}(x_2, y_2)$ has dimension 7, the symbol "[ ]" is the vector representation symbol, the symbol "| |" is the absolute value symbol, $I_{L,org}^{p}(x_2, y_2)$ denotes the pixel value of the pixel point whose coordinate position is $(x_2, y_2)$ in the current sub-block, $\frac{\partial I_{L,org}^{p}(x_2, y_2)}{\partial x}$ is its first partial derivative in the horizontal direction, $\frac{\partial I_{L,org}^{p}(x_2, y_2)}{\partial y}$ is its first partial derivative in the vertical direction, $\frac{\partial^2 I_{L,org}^{p}(x_2, y_2)}{\partial x^2}$ is its second partial derivative in the horizontal direction, and $\frac{\partial^2 I_{L,org}^{p}(x_2, y_2)}{\partial y^2}$ is its second partial derivative in the vertical direction;
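Step ②-3a can be sketched directly: the 7-dimensional feature of each pixel is its intensity, the absolute first and second partials in both directions, and its coordinates. The sketch below uses NumPy's `np.gradient` for the derivatives (an assumption; the patent does not prescribe a discretization), and the function name is illustrative:

```python
import numpy as np

def feature_vectors(block):
    """block: 9x9 array of pixel values; returns an (81, 7) feature matrix,
    one row per pixel: [I, |dI/dx|, |dI/dy|, |d2I/dx2|, |d2I/dy2|, x2, y2]."""
    dy, dx = np.gradient(block)          # axis 0 = vertical, axis 1 = horizontal
    dyy, _ = np.gradient(dy)             # second partial in the vertical direction
    _, dxx = np.gradient(dx)             # second partial in the horizontal direction
    # 1-based coordinates (x2, y2), matching 1 <= x2, y2 <= 9 in the claim.
    ys, xs = np.mgrid[1:block.shape[0] + 1, 1:block.shape[1] + 1]
    feats = np.stack([block, np.abs(dx), np.abs(dy),
                      np.abs(dxx), np.abs(dyy),
                      xs.astype(float), ys.astype(float)], axis=-1)
    return feats.reshape(-1, 7)

X = feature_vectors(np.random.default_rng(0).random((9, 9)))
assert X.shape == (81, 7)
```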
②-4a. Calculate the covariance matrix of the current sub-block $\{I_{L,org}^{p}(x_2, y_2)\}$ from the feature vectors of all its pixel points, denoted $C_{L,org}^{p}$:

$$C_{L,org}^{p} = \frac{1}{9 \times 9 - 1} \sum_{x_2=1}^{9} \sum_{y_2=1}^{9} \left( X_{L,org}^{p}(x_2, y_2) - \mu_{L,org}^{p} \right) \left( X_{L,org}^{p}(x_2, y_2) - \mu_{L,org}^{p} \right)^{T},$$

where $C_{L,org}^{p}$ has dimension $7 \times 7$, $\mu_{L,org}^{p}$ denotes the mean vector of the feature vectors of all the pixel points in the current sub-block, and $\left( X_{L,org}^{p}(x_2, y_2) - \mu_{L,org}^{p} \right)^{T}$ is the transposed vector of $\left( X_{L,org}^{p}(x_2, y_2) - \mu_{L,org}^{p} \right)$;
②-5a. Perform Cholesky decomposition on the covariance matrix $C_{L,org}^{p}$ of the current sub-block, $C_{L,org}^{p} = L L^{T}$, and obtain the Sigma feature set of the current sub-block, denoted $S_{L,org}^{p}$:

$$S_{L,org}^{p} = \left[ \sqrt{10} \times L^{(1)}, \ldots, \sqrt{10} \times L^{(i')}, \ldots, \sqrt{10} \times L^{(7)},\ -\sqrt{10} \times L^{(1)}, \ldots, -\sqrt{10} \times L^{(i')}, \ldots, -\sqrt{10} \times L^{(7)},\ \mu_{L,org}^{p} \right],$$

where $L^{T}$ is the transposed matrix of $L$, $S_{L,org}^{p}$ has dimension $7 \times 15$, the symbol "[ ]" is the vector representation symbol, $1 \le i' \le 7$, $L^{(1)}$ denotes the 1st column vector of $L$, $L^{(i')}$ denotes the $i'$-th column vector of $L$, and $L^{(7)}$ denotes the 7th column vector of $L$;
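Steps ②-4a and ②-5a can be sketched as follows: an unbiased sample covariance over the 81 feature vectors, its Cholesky factor, and the 7×15 Sigma feature set assembled from the scaled columns of the factor plus the mean vector. The random feature matrix stands in for one sub-block's features and is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((81, 7))                      # feature vectors of one 9x9 sub-block

mu = X.mean(axis=0)                          # mean vector, dimension 7
C = (X - mu).T @ (X - mu) / (9 * 9 - 1)      # 7x7 covariance matrix
Lc = np.linalg.cholesky(C)                   # C = Lc @ Lc.T (lower triangular)

# Sigma feature set: [sqrt(10)*L(1..7), -sqrt(10)*L(1..7), mu] as columns.
sigma_set = np.hstack([np.sqrt(10) * Lc,
                       -np.sqrt(10) * Lc,
                       mu[:, None]])
assert sigma_set.shape == (7, 15)
```

The 15 columns are the classical sigma points of an unscented-transform-style representation: seven scaled factor columns, their negatives, and the mean.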
②-6a. Using the same operations as steps ②-3a to ②-5a, obtain the Sigma feature set of each neighborhood sub-block formed by a $9 \times 9$ neighborhood window centered on a neighborhood pixel point; denote the Sigma feature set of the neighborhood sub-block centered on the neighborhood pixel point whose coordinate position is $q$ as $S_{L,org}^{q}$, which likewise has dimension $7 \times 15$;
②-7a. From the Sigma feature set $S_{L,org}^{p}$ of the current sub-block and the Sigma feature sets of the neighborhood sub-blocks formed by the $9 \times 9$ neighborhood windows centered on each neighborhood pixel point, acquire the structure information of the current pixel point, denoted $I_{L,org}^{str}(p)$:

$$I_{L,org}^{str}(p) = \frac{\displaystyle \sum_{q \in N'(p)} \exp\left( -\frac{\left\| S_{L,org}^{p} - S_{L,org}^{q} \right\|^2}{2 \sigma^2} \right) \times L_{org}(q)}{\displaystyle \sum_{q \in N'(p)} \exp\left( -\frac{\left\| S_{L,org}^{p} - S_{L,org}^{q} \right\|^2}{2 \sigma^2} \right)},$$

where $N'(p)$ denotes the set of coordinate positions, in $\{L_{org}(x, y)\}$, of all neighborhood pixel points in the $21 \times 21$ neighborhood window centered on the current pixel point, $\exp()$ denotes the exponential function with base $e$, $e = 2.71828183$, $\sigma$ denotes the standard deviation of the Gaussian function, the symbol "$\| \|$" is the Euclidean distance calculation symbol, and $L_{org}(q)$ denotes the pixel value of the pixel point whose coordinate position is $q$ in $\{L_{org}(x, y)\}$;
②-8a. From the structure information $I_{L,org}^{str}(p)$ of the current pixel point, obtain the texture information of the current pixel point, denoted $I_{L,org}^{tex}(p)$: $I_{L,org}^{tex}(p) = L_{org}(p) - I_{L,org}^{str}(p)$, where $L_{org}(p)$ denotes the pixel value of the current pixel point;
②-9a. Take the next pixel point to be processed in $\{L_{org}(x, y)\}$ as the current pixel point, then return to step ②-2a and continue until all pixel points in $\{L_{org}(x, y)\}$ have been processed, obtaining the structure information and texture information of every pixel point in $\{L_{org}(x, y)\}$; the structure information of all the pixel points in $\{L_{org}(x, y)\}$ constitutes the structure image of $\{L_{org}(x, y)\}$, denoted $\{I_{L,org}^{str}(x, y)\}$, and the texture information of all the pixel points constitutes the texture image of $\{L_{org}(x, y)\}$, denoted $\{I_{L,org}^{tex}(x, y)\}$.

Using the same operations as steps ②-1a to ②-9a by which the structure image $\{I_{L,org}^{str}(x, y)\}$ and texture image $\{I_{L,org}^{tex}(x, y)\}$ of $\{L_{org}(x, y)\}$ are acquired, obtain the structure image $\{I_{R,org}^{str}(x, y)\}$ and texture image $\{I_{R,org}^{tex}(x, y)\}$ of $\{R_{org}(x, y)\}$, the structure image $\{I_{L,dis}^{str}(x, y)\}$ and texture image $\{I_{L,dis}^{tex}(x, y)\}$ of $\{L_{dis}(x, y)\}$, and the structure image $\{I_{R,dis}^{str}(x, y)\}$ and texture image $\{I_{R,dis}^{tex}(x, y)\}$ of $\{R_{dis}(x, y)\}$.
4. The objective evaluation method for stereo image quality based on structure-texture separation according to claim 2 or 3, characterized in that in step ④ the acquisition process of the image quality objective evaluation prediction value $Q_{L}^{tex}$ of $\{I_{L,dis}^{tex}(x, y)\}$ comprises the following steps:
④-1a. Divide $\{I_{L,org}^{tex}(x, y)\}$ and $\{I_{L,dis}^{tex}(x, y)\}$ each into mutually non-overlapping sub-blocks of size $8 \times 8$; define the current $k$-th sub-block to be processed in $\{I_{L,org}^{tex}(x, y)\}$ as the current first sub-block, and define the current $k$-th sub-block to be processed in $\{I_{L,dis}^{tex}(x, y)\}$ as the current second sub-block, where $k$ ranges over the sub-block indices and the initial value of $k$ is 1;
④-2a. Denote the current first sub-block as $\{f_{L_{org},k}(x_4, y_4)\}$ and the current second sub-block as $\{f_{L_{dis},k}(x_4, y_4)\}$, where $(x_4, y_4)$ denotes a coordinate position in $\{f_{L_{org},k}(x_4, y_4)\}$ and $\{f_{L_{dis},k}(x_4, y_4)\}$, $1 \le x_4 \le 8$, $1 \le y_4 \le 8$, $f_{L_{org},k}(x_4, y_4)$ denotes the pixel value of the pixel point whose coordinate position is $(x_4, y_4)$ in the current first sub-block, and $f_{L_{dis},k}(x_4, y_4)$ denotes the pixel value of the pixel point whose coordinate position is $(x_4, y_4)$ in the current second sub-block;
④-3a. Calculate the mean and standard deviation of the current first sub-block $\{f_{L_{org},k}(x_4, y_4)\}$, denoted $\mu_{L_{org},k}$ and $\sigma_{L_{org},k}$ respectively:

$$\mu_{L_{org},k} = \frac{\sum_{y_4=1}^{8} \sum_{x_4=1}^{8} f_{L_{org},k}(x_4, y_4)}{64}, \qquad \sigma_{L_{org},k} = \sqrt{\frac{\sum_{y_4=1}^{8} \sum_{x_4=1}^{8} \left( f_{L_{org},k}(x_4, y_4) - \mu_{L_{org},k} \right)^2}{64}};$$

likewise, calculate the mean and standard deviation of the current second sub-block $\{f_{L_{dis},k}(x_4, y_4)\}$, denoted $\mu_{L_{dis},k}$ and $\sigma_{L_{dis},k}$ respectively:

$$\mu_{L_{dis},k} = \frac{\sum_{y_4=1}^{8} \sum_{x_4=1}^{8} f_{L_{dis},k}(x_4, y_4)}{64}, \qquad \sigma_{L_{dis},k} = \sqrt{\frac{\sum_{y_4=1}^{8} \sum_{x_4=1}^{8} \left( f_{L_{dis},k}(x_4, y_4) - \mu_{L_{dis},k} \right)^2}{64}};$$
④-4a. Calculate the structural similarity between the current first sub-block $\{f_{L_{org},k}(x_4, y_4)\}$ and the current second sub-block $\{f_{L_{dis},k}(x_4, y_4)\}$, denoted $Q_{L,k}^{tex}$:

$$Q_{L,k}^{tex} = \frac{4 \times \left( \sigma_{L_{org},k} \times \sigma_{L_{dis},k} \right) \times \left( \mu_{L_{org},k} \times \mu_{L_{dis},k} \right) + C_2}{\left( (\sigma_{L_{org},k})^2 + (\sigma_{L_{dis},k})^2 \right) + \left( (\mu_{L_{org},k})^2 + (\mu_{L_{dis},k})^2 \right) + C_2},$$

where $C_2$ is a control parameter;
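Steps ④-3a and ④-4a can be sketched as a per-block routine: mean and population standard deviation over the 64 pixels, then the similarity index as transcribed above. The function name and the value of the control parameter $C_2$ are assumptions for illustration:

```python
import numpy as np

def block_similarity(f_org, f_dis, C2=0.03):
    """Similarity between two 8x8 texture sub-blocks, as transcribed
    in step 4-4a; C2 is an assumed control-parameter value."""
    mu_o, mu_d = f_org.mean(), f_dis.mean()
    sd_o, sd_d = f_org.std(), f_dis.std()     # population std (divide by 64)
    num = 4 * (sd_o * sd_d) * (mu_o * mu_d) + C2
    den = (sd_o ** 2 + sd_d ** 2) + (mu_o ** 2 + mu_d ** 2) + C2
    return num / den

rng = np.random.default_rng(3)
a, b = rng.random((8, 8)), rng.random((8, 8))
# The index is symmetric in its two arguments.
assert np.isclose(block_similarity(a, b), block_similarity(b, a))
```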
④-5a. Let $k = k + 1$; take the next sub-block to be processed in $\{I_{L,org}^{tex}(x, y)\}$ as the current first sub-block and the next sub-block to be processed in $\{I_{L,dis}^{tex}(x, y)\}$ as the current second sub-block, then return to step ④-2a and continue until all the sub-blocks in $\{I_{L,org}^{tex}(x, y)\}$ and $\{I_{L,dis}^{tex}(x, y)\}$ have been processed, obtaining the structural similarity between each sub-block of $\{I_{L,org}^{tex}(x, y)\}$ and the corresponding sub-block of $\{I_{L,dis}^{tex}(x, y)\}$, where "=" in $k = k + 1$ is the assignment symbol;
④-6a. From the structural similarity between each sub-block of $\{I_{L,org}^{tex}(x, y)\}$ and the corresponding sub-block of $\{I_{L,dis}^{tex}(x, y)\}$, calculate the image quality objective evaluation prediction value of $\{I_{L,dis}^{tex}(x, y)\}$, denoted $Q_{L}^{tex}$;
in step ④ the acquisition process of the image quality objective evaluation prediction value $Q_{R}^{tex}$ of $\{I_{R,dis}^{tex}(x, y)\}$ comprises the following steps:
④-1b. Divide $\{I_{R,org}^{tex}(x, y)\}$ and $\{I_{R,dis}^{tex}(x, y)\}$ each into mutually non-overlapping sub-blocks of size $8 \times 8$; define the current $k$-th sub-block to be processed in $\{I_{R,org}^{tex}(x, y)\}$ as the current first sub-block, and define the current $k$-th sub-block to be processed in $\{I_{R,dis}^{tex}(x, y)\}$ as the current second sub-block, where $k$ ranges over the sub-block indices and the initial value of $k$ is 1;
④-2b. Denote the current first sub-block as $\{f_{R_{org},k}(x_4, y_4)\}$ and the current second sub-block as $\{f_{R_{dis},k}(x_4, y_4)\}$, where $(x_4, y_4)$ denotes a coordinate position in $\{f_{R_{org},k}(x_4, y_4)\}$ and $\{f_{R_{dis},k}(x_4, y_4)\}$, $1 \le x_4 \le 8$, $1 \le y_4 \le 8$, $f_{R_{org},k}(x_4, y_4)$ denotes the pixel value of the pixel point whose coordinate position is $(x_4, y_4)$ in the current first sub-block, and $f_{R_{dis},k}(x_4, y_4)$ denotes the pixel value of the pixel point whose coordinate position is $(x_4, y_4)$ in the current second sub-block;
④-3b. Calculate the mean and standard deviation of the current first sub-block $\{f_{R_{org},k}(x_4, y_4)\}$, denoted $\mu_{R_{org},k}$ and $\sigma_{R_{org},k}$ respectively:

$$\mu_{R_{org},k} = \frac{\sum_{y_4=1}^{8} \sum_{x_4=1}^{8} f_{R_{org},k}(x_4, y_4)}{64}, \qquad \sigma_{R_{org},k} = \sqrt{\frac{\sum_{y_4=1}^{8} \sum_{x_4=1}^{8} \left( f_{R_{org},k}(x_4, y_4) - \mu_{R_{org},k} \right)^2}{64}};$$

likewise, calculate the mean and standard deviation of the current second sub-block $\{f_{R_{dis},k}(x_4, y_4)\}$, denoted $\mu_{R_{dis},k}$ and $\sigma_{R_{dis},k}$ respectively:

$$\mu_{R_{dis},k} = \frac{\sum_{y_4=1}^{8} \sum_{x_4=1}^{8} f_{R_{dis},k}(x_4, y_4)}{64}, \qquad \sigma_{R_{dis},k} = \sqrt{\frac{\sum_{y_4=1}^{8} \sum_{x_4=1}^{8} \left( f_{R_{dis},k}(x_4, y_4) - \mu_{R_{dis},k} \right)^2}{64}};$$
④-4b. Calculate the structural similarity between the current first sub-block $\{f_{R_{org},k}(x_4, y_4)\}$ and the current second sub-block $\{f_{R_{dis},k}(x_4, y_4)\}$, denoted $Q_{R,k}^{tex}$:

$$Q_{R,k}^{tex} = \frac{4 \times \left( \sigma_{R_{org},k} \times \sigma_{R_{dis},k} \right) \times \left( \mu_{R_{org},k} \times \mu_{R_{dis},k} \right) + C_2}{\left( (\sigma_{R_{org},k})^2 + (\sigma_{R_{dis},k})^2 \right) + \left( (\mu_{R_{org},k})^2 + (\mu_{R_{dis},k})^2 \right) + C_2},$$

where $C_2$ is a control parameter;
④-5b. Let $k = k + 1$; take the next sub-block to be processed in $\{I_{R,org}^{tex}(x, y)\}$ as the current first sub-block and the next sub-block to be processed in $\{I_{R,dis}^{tex}(x, y)\}$ as the current second sub-block, then return to step ④-2b and continue until all the sub-blocks in $\{I_{R,org}^{tex}(x, y)\}$ and $\{I_{R,dis}^{tex}(x, y)\}$ have been processed, obtaining the structural similarity between each sub-block of $\{I_{R,org}^{tex}(x, y)\}$ and the corresponding sub-block of $\{I_{R,dis}^{tex}(x, y)\}$, where "=" in $k = k + 1$ is the assignment symbol;

④-6b. From the structural similarity between each sub-block of $\{I_{R,org}^{tex}(x, y)\}$ and the corresponding sub-block of $\{I_{R,dis}^{tex}(x, y)\}$, calculate the image quality objective evaluation prediction value of $\{I_{R,dis}^{tex}(x, y)\}$, denoted $Q_{R}^{tex}$.
CN201410105777.4A 2014-03-20 2014-03-20 Objective three-dimensional image quality evaluation method based on structure and texture separation Pending CN103903259A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410105777.4A CN103903259A (en) 2014-03-20 2014-03-20 Objective three-dimensional image quality evaluation method based on structure and texture separation

Publications (1)

Publication Number Publication Date
CN103903259A true CN103903259A (en) 2014-07-02

Family

ID=50994566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410105777.4A Pending CN103903259A (en) 2014-03-20 2014-03-20 Objective three-dimensional image quality evaluation method based on structure and texture separation


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931257A (en) * 2016-06-12 2016-09-07 西安电子科技大学 SAR image quality evaluation method based on texture feature and structural similarity
CN106780432A (en) * 2016-11-14 2017-05-31 浙江科技学院 A kind of objective evaluation method for quality of stereo images based on sparse features similarity
CN109887023A (en) * 2019-01-11 2019-06-14 杭州电子科技大学 A kind of binocular fusion stereo image quality evaluation method based on weighted gradient amplitude
CN110363753A (en) * 2019-07-11 2019-10-22 北京字节跳动网络技术有限公司 Image quality measure method, apparatus and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000278710A (en) * 1999-03-26 2000-10-06 Ricoh Co Ltd Device for evaluating binocular stereoscopic vision picture
CN102075786A (en) * 2011-01-19 2011-05-25 宁波大学 Method for objectively evaluating image quality
CN102142145A (en) * 2011-03-22 2011-08-03 宁波大学 Image quality objective evaluation method based on human eye visual characteristics
CN102209257A (en) * 2011-06-17 2011-10-05 宁波大学 Stereo image quality objective evaluation method
CN102333233A (en) * 2011-09-23 2012-01-25 宁波大学 Stereo image quality objective evaluation method based on visual perception
CN102521825A (en) * 2011-11-16 2012-06-27 宁波大学 Three-dimensional image quality objective evaluation method based on zero watermark


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
KEMENG LI et al.: "Objective quality assessment for stereoscopic images based on structure-texture decomposition", WSEAS TRANSACTIONS ON COMPUTERS, 31 January 2014 (2014-01-31) *
L. KARACAN et al.: "Structure-preserving image smoothing via region covariances", ACM TRANSACTIONS ON GRAPHICS, vol. 32, no. 6, 1 November 2013 (2013-11-01), XP058033898, DOI: doi:10.1145/2508363.2508403 *
M. SOLH et al.: "MIQM: a multicamera image quality measure", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 21, no. 9, 22 May 2012 (2012-05-22) *
WUFENG XUE et al.: "Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 23, no. 2, 3 December 2013 (2013-12-03) *
JIN Xin et al.: "Adaptive image quality evaluation based on structural similarity", Journal of Optoelectronics·Laser (光电子激光), vol. 25, no. 2, 28 February 2014 (2014-02-28) *


Similar Documents

Publication Publication Date Title
CN103581661B (en) Method for evaluating visual comfort degree of three-dimensional image
CN104036501B (en) An objective stereo image quality evaluation method based on sparse representation
CN102209257B (en) Stereo image quality objective evaluation method
CN103347196B (en) Method for evaluating stereo image vision comfort level based on machine learning
CN105282543B (en) Completely blind objective stereo image quality evaluation method based on stereoscopic visual perception
CN104581143B (en) A machine-learning-based no-reference objective stereo image quality evaluation method
CN102708567B (en) Visual perception-based three-dimensional image quality objective evaluation method
CN103136748B (en) The objective evaluation method for quality of stereo images of a kind of feature based figure
CN103413298B (en) An objective stereo image quality evaluation method based on visual characteristics
CN105376563B (en) No-reference three-dimensional image quality evaluation method based on binocular fusion feature similarity
CN104036502B (en) A no-reference quality evaluation method for blur-distorted stereo images
CN104658001A (en) No-reference objective quality assessment method for asymmetrically distorted stereo images
CN102521825B (en) Three-dimensional image quality objective evaluation method based on zero watermark
CN102903107B (en) Three-dimensional picture quality objective evaluation method based on feature fusion
CN104902268B (en) A no-reference objective stereo image quality evaluation method based on local ternary patterns
CN105357519B (en) No-reference objective quality evaluation method for stereo images based on self-similarity features
CN104408716A (en) Three-dimensional image quality objective evaluation method based on visual fidelity
CN102547368A (en) Objective evaluation method for quality of stereo images
CN103903259A (en) Objective three-dimensional image quality evaluation method based on structure and texture separation
CN107360416A (en) Stereo image quality evaluation method based on local multivariate Gaussian description
CN103200420B (en) Three-dimensional picture quality objective evaluation method based on three-dimensional visual attention
CN103369348B (en) Three-dimensional image quality objective evaluation method based on regional importance classification
CN102999911B (en) Three-dimensional image quality objective evaluation method based on energy diagrams
CN102737380B (en) Stereo image quality objective evaluation method based on gradient structure tensor
CN103914835A (en) No-reference quality evaluation method for blur-distorted stereo images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140702