CN113011281A - Light field image quality identification method based on 3D-DOG characteristics

Light field image quality identification method based on 3D-DOG characteristics

Info

Publication number
CN113011281A
Authority
CN
China
Prior art keywords
light field
dog
distorted
field image
image quality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110220509.7A
Other languages
Chinese (zh)
Inventor
曾焕强
黄海靓
侯军辉
王勇涛
曹九稳
蔡灿辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaqiao University
Original Assignee
Huaqiao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaqiao University filed Critical Huaqiao University
Priority to CN202110220509.7A
Publication of CN113011281A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a light field image quality identification method based on 3D-DOG features, comprising the following steps: converting the input reference and distorted light field images into reference and distorted light field sequences; extracting 3D-DOG features from the reference and distorted light field sequences with a 3D-DOG filter; calculating the similarity between the reference and distorted light field sequences based on the 3D-DOG features; and calculating the light field image quality score with a 3D-DOG feature pooling strategy. The invention fully considers the sensitivity of the human visual system to two-dimensional edge information and three-dimensional geometric structure, uses 3D-DOG features to effectively describe the scene edge information and structural changes of the light field image, and achieves good light field image quality evaluation performance.

Description

Light field image quality identification method based on 3D-DOG characteristics
Technical Field
The invention relates to the field of image processing, in particular to a light field image quality identification method based on 3D-DOG characteristics.
Background
With the rapid development of multimedia and imaging technologies, light field images have attracted attention from academia and industry as a new medium and are widely applied in computer vision and computer graphics, for example in multi-view imaging, three-dimensional reconstruction, depth estimation, virtual reality, and augmented reality. Unlike conventional two-dimensional imaging, light field imaging captures the position, direction, and intensity information of light at any point in space. The light field image therefore has a distinctive four-dimensional structure and can better reflect the spatial and structural information of a real scene.
A light field image inevitably suffers various distortions during acquisition, compression, transmission, storage, display, and other stages, which degrade its visual quality. Since human eyes are the final recipients of light field images, a quality evaluation model built on the human visual system that can quickly and accurately assess light field image quality is needed. At present, most existing quality evaluation methods are designed for conventional two-dimensional images; they do not consider the special structure of the light field image and are therefore unsuitable for light field image quality evaluation. Designing a light field image quality evaluation method that conforms to the visual characteristics of the human eye thus has important theoretical research significance and practical application value.
Disclosure of Invention
The invention mainly aims to overcome the defects of the prior art and provides a light field image quality identification method based on 3D-DOG features. The method converts the light field image into a light field sequence and uses a 3D-DOG filter to extract the two-dimensional spatial features and three-dimensional geometric structure of the reference and distorted light field images simultaneously, so as to describe the subjective perception of the distorted light field image by the human eye, and it achieves good light field image quality identification performance.
The invention adopts the following technical scheme:
the light field image quality identification method based on the 3D-DOG features comprises the following steps:
converting the input reference light field image Lr and distorted light field image Ld into a reference light field sequence Vr and a distorted light field sequence Vd;
extracting, with a 3D-DOG filter, the reference 3D-DOG feature Dr(x, y, z) of the reference light field sequence Vr and the distorted 3D-DOG feature Dd(x, y, z) of the distorted light field sequence Vd;
calculating the similarity Sim(x, y, z) between the reference light field sequence Vr and the distorted light field sequence Vd based on the reference and distorted 3D-DOG features;
and calculating the light field image quality Score from the similarity Sim(x, y, z) using a 3D-DOG feature pooling strategy.
Preferably, the input reference light field image Lr and distorted light field image Ld are converted into the reference light field sequence Vr and the distorted light field sequence Vd as follows:
the reference light field image Lr = {Lr,1, Lr,2, ..., Lr,n} and the distorted light field image Ld = {Ld,1, Ld,2, ..., Ld,n} are input, where n denotes the number of sub-aperture images in a group; the sub-aperture images with odd indices are selected one by one, in ascending order of index, to form a light field sequence, yielding the reference light field sequence Vr and the distorted light field sequence Vd respectively.
Preferably, a 3D-DOG filter is used to extract the reference 3D-DOG feature Dr(x, y, z) of the reference light field sequence Vr and the distorted 3D-DOG feature Dd(x, y, z) of the distorted light field sequence Vd, as follows:
the 3D-DOG features of the reference light field sequence Vr and the distorted light field sequence Vd are extracted separately as the reference feature Dr(x, y, z) and the distortion feature Dd(x, y, z), as follows:
Dr(x, y, z) = Vr(x, y, z) ⊗ DOG(x, y, z)
Dd(x, y, z) = Vd(x, y, z) ⊗ DOG(x, y, z)
where ⊗ denotes 3D convolution and DOG(x, y, z) is the 3D-DOG filter, defined as:
DOG(x,y,z)=G(x,y,z,σ1)-G(x,y,z,σ2)
where (x, y, z) denotes the 3D coordinates of each pixel in the light field sequence and G(x, y, z, σ) is a 3D Gaussian filter with a 7 × 7 × 7 Gaussian kernel; the standard deviations are σ1 = 1.85 and σ2 = 0.85. The 3D Gaussian filter is given by:
G(x, y, z, σ) = 1 / ((2π)^(3/2) · σ³) · exp(-(x² + y² + z²) / (2σ²))
preferably, the reference light field sequence V is calculated based on the reference and distorted 3D-DOG featuresrAnd distorted light field sequence VdThe similarity Sim (x, y, z) is as follows:
the light field similarity Sim(x, y, z) is obtained from the reference 3D-DOG feature Dr(x, y, z) and the distorted 3D-DOG feature Dd(x, y, z):
Sim(x, y, z) = (2 · Dr(x, y, z) · Dd(x, y, z) + c) / (Dr(x, y, z)² + Dd(x, y, z)² + c)
where c is a constant that ensures numerical stability; c = 0.01.
Preferably, the light field image quality Score is calculated by using a 3D-DOG feature pooling strategy based on the similarity Sim (x, y, z), as follows:
ω(x,y,z)=max{Dr(x,y,z),Dd(x,y,z)}
Score = Σ Sim(x, y, z) · ω(x, y, z) / Σ ω(x, y, z), where the sums run over all pixel positions (x, y, z).
As can be seen from the above description of the present invention, compared with the prior art, the present invention has the following advantages:
(1) The invention provides a light field image quality identification method based on 3D-DOG features that focuses on the characteristics of the human visual system and the structural characteristics of the light field image. It constructs the light field image as a pseudo video sequence and uses a 3D-DOG filter to extract the spatial and structural features of that sequence, making full use of the sensitivity of human vision to edge information. Compared with other methods, it achieves better light field image quality evaluation performance, with higher identification accuracy, sensitivity, and robustness.
Drawings
Fig. 1 is a schematic flow chart provided by an embodiment of the present invention.
Fig. 2 shows two light field images with different degrees of distortion provided by the embodiment of the present invention, where (a) is the image of example 1 and (b) is the image of example 2.
The invention is described in further detail below with reference to the figures and specific examples.
Detailed Description
Referring to fig. 1, a light field image quality identification method based on 3D-DOG features includes the following specific steps:
s101: will input a reference light field image LrAnd distorted light field image LdConversion into a reference light field sequence VrAnd distorted light field sequence VdThe method comprises the following steps:
the reference light field image Lr = {Lr,1, Lr,2, ..., Lr,n} and the distorted light field image Ld = {Ld,1, Ld,2, ..., Ld,n} are input, where n denotes the number of sub-aperture images in a group; the sub-aperture images with odd indices are selected one by one, in ascending order of index, to form a light field sequence, yielding the reference light field sequence Vr and the distorted light field sequence Vd respectively.
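For illustration only, a minimal Python sketch of this conversion step is given below; the function name light_field_to_sequence and the assumption that each sub-aperture image is a two-dimensional grayscale array are illustrative and not part of the patent.

    import numpy as np

    def light_field_to_sequence(sub_aperture_images):
        """Stack the odd-indexed sub-aperture images (L_1, L_3, L_5, ...),
        in ascending order of index, into a pseudo video sequence of shape
        (frames, height, width)."""
        # sub_aperture_images: list of 2D arrays [L_1, L_2, ..., L_n] (1-based in the text)
        selected = sub_aperture_images[0::2]  # odd 1-based indices -> even 0-based offsets
        return np.stack(selected, axis=0).astype(np.float64)

    # V_r = light_field_to_sequence(reference_views)
    # V_d = light_field_to_sequence(distorted_views)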
S102: use a 3D-DOG filter to extract the reference 3D-DOG feature Dr(x, y, z) of the reference light field sequence Vr and the distorted 3D-DOG feature Dd(x, y, z) of the distorted light field sequence Vd, as follows:
the 3D-DOG features of the reference light field sequence Vr and the distorted light field sequence Vd are extracted separately as the reference feature Dr(x, y, z) and the distortion feature Dd(x, y, z), as follows:
Dr(x, y, z) = Vr(x, y, z) ⊗ DOG(x, y, z)
Dd(x, y, z) = Vd(x, y, z) ⊗ DOG(x, y, z)
where ⊗ denotes 3D convolution and DOG(x, y, z) is the 3D-DOG filter, defined as:
DOG(x,y,z)=G(x,y,z,σ1)-G(x,y,z,σ2)
where (x, y, z) denotes the 3D coordinates of each pixel in the light field sequence and G(x, y, z, σ) is a 3D Gaussian filter with a 7 × 7 × 7 Gaussian kernel; the standard deviations are σ1 = 1.85 and σ2 = 0.85. The 3D Gaussian filter is given by:
G(x, y, z, σ) = 1 / ((2π)^(3/2) · σ³) · exp(-(x² + y² + z²) / (2σ²))
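A possible implementation of this filtering step is sketched below, assuming SciPy is available; the sampled 7 × 7 × 7 kernel and the use of scipy.ndimage.convolve are implementation choices and not prescribed by the patent.

    import numpy as np
    from scipy.ndimage import convolve

    def gaussian_kernel_3d(size=7, sigma=1.0):
        """Sampled 3D Gaussian G(x, y, z, sigma) on a size x size x size grid."""
        half = size // 2
        z, y, x = np.mgrid[-half:half + 1, -half:half + 1, -half:half + 1]
        g = np.exp(-(x ** 2 + y ** 2 + z ** 2) / (2.0 * sigma ** 2))
        return g / ((2.0 * np.pi) ** 1.5 * sigma ** 3)

    def dog_features(sequence, sigma1=1.85, sigma2=0.85, size=7):
        """Convolve a light field sequence with DOG = G(sigma1) - G(sigma2)
        to obtain the 3D-DOG feature volume D(x, y, z)."""
        dog = gaussian_kernel_3d(size, sigma1) - gaussian_kernel_3d(size, sigma2)
        return convolve(sequence, dog, mode='nearest')

    # D_r = dog_features(V_r); D_d = dog_features(V_d)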
S103: calculate the similarity Sim(x, y, z) between the reference light field sequence Vr and the distorted light field sequence Vd based on the reference and distorted 3D-DOG features, as follows:
the light field similarity Sim(x, y, z) is obtained from the reference 3D-DOG feature Dr(x, y, z) and the distorted 3D-DOG feature Dd(x, y, z):
Sim(x, y, z) = (2 · Dr(x, y, z) · Dd(x, y, z) + c) / (Dr(x, y, z)² + Dd(x, y, z)² + c)
where c is a constant that ensures numerical stability; c = 0.01.
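The similarity map can be computed elementwise on the two feature volumes; the sketch below follows the formula reconstructed above, which is an assumption where the original equation image is not reproduced in the text.

    import numpy as np

    def dog_similarity(d_ref, d_dist, c=0.01):
        """Pointwise similarity Sim(x, y, z) between the reference and
        distorted 3D-DOG feature volumes, with stabilising constant c."""
        return (2.0 * d_ref * d_dist + c) / (d_ref ** 2 + d_dist ** 2 + c)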
S104: calculate the light field image quality Score from the similarity Sim(x, y, z) using a 3D-DOG feature pooling strategy, specifically as follows:
ω(x,y,z)=max{Dr(x,y,z),Dd(x,y,z)}
Score = Σ Sim(x, y, z) · ω(x, y, z) / Σ ω(x, y, z), where the sums run over all pixel positions (x, y, z).
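A minimal sketch of the pooling step follows, using the weight ω = max{Dr, Dd} as stated above; the small eps guarding against a zero denominator is an added safeguard and not part of the patent.

    import numpy as np

    def pooled_score(sim, d_ref, d_dist, eps=1e-12):
        """Weight each location by omega = max(D_r, D_d) and return the
        weighted average of the similarity map as the quality Score."""
        omega = np.maximum(d_ref, d_dist)
        return float(np.sum(sim * omega) / (np.sum(omega) + eps))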
The superiority of the method according to the invention is demonstrated below by means of specific examples and data.
As shown in Table 1, the experimental results of the method of the present invention and other advanced algorithms on the light field database Dense Light Fields are compared. PSNR, SSIM, IWSSIM, FSIM, VIF, VSI, VSNR, ESIM, and GFM are the names of the compared algorithms, and Proposed denotes the method of the present invention. PLCC (Pearson linear correlation coefficient), SROCC (Spearman rank-order correlation coefficient), and RMSE (root mean square error) are three standard criteria used in the field of image quality evaluation to judge the quality of an evaluation method: the closer PLCC and SROCC are to 1 and the smaller the RMSE, the higher the correlation between the objective algorithm results and the subjective evaluation results, and the better the algorithm. From the data in Table 1, the PLCC and SROCC values obtained by the method of the present invention are closer to 1 than those of the other algorithms, and its RMSE is smaller, which shows that the results of the proposed algorithm correlate more closely with subjective evaluation and that the algorithm is superior.
Table 1: Comparison of the experimental results of the algorithm provided by the invention and other advanced algorithms on the light field database Dense Light Fields
[Table 1 is provided as an image in the original publication; its numerical values are not reproduced in the text.]
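The three criteria used in Table 1 (PLCC, SROCC, RMSE) can be computed as sketched below; note that in practice a nonlinear regression between objective scores and subjective ratings is often applied before computing PLCC and RMSE, which this sketch omits.

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    def evaluation_metrics(objective, subjective):
        """PLCC, SROCC and RMSE between objective scores and subjective (MOS) ratings."""
        objective, subjective = np.asarray(objective), np.asarray(subjective)
        plcc, _ = pearsonr(objective, subjective)
        srocc, _ = spearmanr(objective, subjective)
        rmse = float(np.sqrt(np.mean((objective - subjective) ** 2)))
        return plcc, srocc, rmse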
In addition, another comparative example is provided in the embodiment of the present invention. Fig. 2 shows two light field images with different degrees of distortion, where Fig. 2(a) is the image of example 1 and Fig. 2(b) is the image of example 2. It can be seen that the quality of Fig. 2(a) is better than that of Fig. 2(b), and the image quality scores obtained with the proposed algorithm are 0.908 and 0.4682, respectively. Since the result of the invention represents the degree of similarity between the distorted image and the reference image, the quality score of the more heavily distorted image is smaller, which shows that the method of the present invention has high identification accuracy, sensitivity, and robustness.
The above description is only an embodiment of the present invention, but the design concept of the present invention is not limited thereto; any insubstantial modification made using this design concept constitutes an infringement of the protection scope of the present invention.

Claims (6)

1. A light field image quality identification method based on 3D-DOG features is characterized by comprising the following steps:
converting the input reference light field image Lr and distorted light field image Ld into a reference light field sequence Vr and a distorted light field sequence Vd;
extracting, with a 3D-DOG filter, the reference 3D-DOG feature Dr(x, y, z) of the reference light field sequence Vr and the distorted 3D-DOG feature Dd(x, y, z) of the distorted light field sequence Vd;
calculating the similarity Sim(x, y, z) between the reference light field sequence Vr and the distorted light field sequence Vd based on the reference 3D-DOG feature Dr(x, y, z) and the distorted 3D-DOG feature Dd(x, y, z);
and calculating a light field image quality Score by using a 3D-DOG feature pooling strategy based on the similarity Sim (x, y, z).
2. The 3D-DOG feature based light field image quality identification method according to claim 1, wherein the input reference light field image Lr and distorted light field image Ld are converted into the reference light field sequence Vr and the distorted light field sequence Vd as follows:
the reference light field image Lr = {Lr,1, Lr,2, ..., Lr,n} and the distorted light field image Ld = {Ld,1, Ld,2, ..., Ld,n} are input, where n denotes the number of sub-aperture images in a group; the sub-aperture images with odd indices are selected one by one, in ascending order of index, to form a light field sequence, yielding the reference light field sequence Vr and the distorted light field sequence Vd respectively.
3. The 3D-DOG feature based light field image quality identification method according to claim 1, wherein a 3D-DOG filter is used to extract the reference 3D-DOG feature Dr(x, y, z) of the reference light field sequence Vr and the distorted 3D-DOG feature Dd(x, y, z) of the distorted light field sequence Vd, as follows:
the 3D-DOG features of the reference light field sequence Vr and the distorted light field sequence Vd are extracted separately as the reference 3D-DOG feature Dr(x, y, z) and the distorted 3D-DOG feature Dd(x, y, z), as follows:
Dr(x, y, z) = Vr(x, y, z) ⊗ DOG(x, y, z)
Dd(x, y, z) = Vd(x, y, z) ⊗ DOG(x, y, z)
where ⊗ denotes 3D convolution and DOG(x, y, z) is the 3D-DOG filter.
4. The 3D-DOG feature based light field image quality identification method according to claim 3, wherein DOG(x, y, z) is specifically given by:
DOG(x,y,z)=G(x,y,z,σ1)-G(x,y,z,σ2)
where (x, y, z) represents the 3D coordinates of each pixel in the light field sequence, G(x, y, z, σ) is a 3D Gaussian filter, and σ1, σ2 are the standard deviations;
the formula of the 3D Gaussian filter is as follows:
G(x, y, z, σ) = 1 / ((2π)^(3/2) · σ³) · exp(-(x² + y² + z²) / (2σ²))
5. The 3D-DOG feature based light field image quality identification method according to claim 1, wherein the similarity Sim(x, y, z) between the reference light field sequence Vr and the distorted light field sequence Vd is calculated based on the reference 3D-DOG feature Dr(x, y, z) and the distorted 3D-DOG feature Dd(x, y, z) as follows:
the light field similarity Sim(x, y, z) is obtained from the reference 3D-DOG feature Dr(x, y, z) and the distorted 3D-DOG feature Dd(x, y, z):
Sim(x, y, z) = (2 · Dr(x, y, z) · Dd(x, y, z) + c) / (Dr(x, y, z)² + Dd(x, y, z)² + c)
where c is a constant for ensuring numerical stability.
6. The 3D-DOG feature based light field image quality identification method according to claim 1, wherein the light field image quality Score is calculated from the similarity Sim(x, y, z) using a 3D-DOG feature pooling strategy, specifically as follows:
ω(x,y,z)=max{Dr(x,y,z),Dd(x,y,z)}
Score = Σ Sim(x, y, z) · ω(x, y, z) / Σ ω(x, y, z), where the sums run over all pixel positions (x, y, z).
CN202110220509.7A 2021-02-26 2021-02-26 Light field image quality identification method based on 3D-DOG characteristics Withdrawn CN113011281A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110220509.7A CN113011281A (en) 2021-02-26 2021-02-26 Light field image quality identification method based on 3D-DOG characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110220509.7A CN113011281A (en) 2021-02-26 2021-02-26 Light field image quality identification method based on 3D-DOG characteristics

Publications (1)

Publication Number Publication Date
CN113011281A true CN113011281A (en) 2021-06-22

Family

ID=76387503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110220509.7A Withdrawn CN113011281A (en) 2021-02-26 2021-02-26 Light field image quality identification method based on 3D-DOG characteristics

Country Status (1)

Country Link
CN (1) CN113011281A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114612972A (en) * 2022-03-07 2022-06-10 北京拙河科技有限公司 Face recognition method and system of light field camera

Similar Documents

Publication Publication Date Title
CN108648161B (en) Binocular vision obstacle detection system and method of asymmetric kernel convolution neural network
CN101877143B (en) Three-dimensional scene reconstruction method of two-dimensional image group
CN111563418A (en) Asymmetric multi-mode fusion significance detection method based on attention mechanism
CN103248906B (en) Method and system for acquiring depth map of binocular stereo video sequence
CN111626927B (en) Binocular image super-resolution method, system and device adopting parallax constraint
Tang et al. Single image dehazing via lightweight multi-scale networks
CN108257089B (en) A method of the big visual field video panorama splicing based on iteration closest approach
CN115205489A (en) Three-dimensional reconstruction method, system and device in large scene
CN110070574B (en) Binocular vision stereo matching method based on improved PSMAT net
CN111126412A (en) Image key point detection method based on characteristic pyramid network
CN109242834A (en) It is a kind of based on convolutional neural networks without reference stereo image quality evaluation method
CN112085031A (en) Target detection method and system
CN110120013A (en) A kind of cloud method and device
CN114648482A (en) Quality evaluation method and system for three-dimensional panoramic image
CN112149662A (en) Multi-mode fusion significance detection method based on expansion volume block
CN110751271A (en) Image traceability feature characterization method based on deep neural network
CN110889868A (en) Monocular image depth estimation method combining gradient and texture features
CN113011281A (en) Light field image quality identification method based on 3D-DOG characteristics
CN107945119B (en) Method for estimating correlated noise in image based on Bayer pattern
CN112598604A (en) Blind face restoration method and system
CN107330856B (en) Panoramic imaging method based on projective transformation and thin plate spline
CN106682599B (en) Sparse representation-based stereo image visual saliency extraction method
CN110070626B (en) Three-dimensional object retrieval method based on multi-view classification
CN113628125B (en) Method for enhancing multiple infrared images based on space parallax priori network
CN111524104B (en) Full-reference light field image quality evaluation method based on multi-scale profile wave characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (Application publication date: 20210622)