CN112419185B - Accurate high-reflectivity removing method based on light field iteration - Google Patents

Accurate high-reflectivity removing method based on light field iteration Download PDF

Info

Publication number
CN112419185B
CN112419185B (application CN202011308683.9A)
Authority
CN
China
Prior art keywords
image
light field
pixel point
highlight
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011308683.9A
Other languages
Chinese (zh)
Other versions
CN112419185A (en)
Inventor
冯维
李秀花
高俊辉
徐仕楠
曲通
孙星宇
周世奇
王恒辉
程雄昊
赵大兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei University of Technology
Original Assignee
Hubei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei University of Technology filed Critical Hubei University of Technology
Priority to CN202011308683.9A priority Critical patent/CN112419185B/en
Publication of CN112419185A publication Critical patent/CN112419185A/en
Application granted granted Critical
Publication of CN112419185B publication Critical patent/CN112419185B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G06T5/77
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation

Abstract

The invention discloses an accurate highlight-removal method based on light-field iteration, comprising the following steps: (1) extracting the central-view image from the light-field data and exhaustively detecting every specular pixel with an adaptive threshold and a light-field iterative algorithm; refocusing to the angular domain of each specular candidate, computing the angular pixel variance, and thresholding it to separate saturated from unsaturated pixels; converting the unsaturated pixels to HSI space for clustering and processing them with the dichromatic reflection model combined with the intrinsic image. (2) An adaptive-direction processing method for the saturated pixels: an edge-detection operator removes the edge directions that would interfere with reconstructing the saturated pixels; then, based on the multiple angular views of the light field, a Gaussian probability-distribution model is proposed and applied over the light-field angular domain as a weighted sum that replaces the saturated highlight pixels. (3) A method is provided for quantitatively evaluating the highlight-removal result by combining the specular residual ratio with the image information entropy.

Description

Accurate high-reflectivity removing method based on light field iteration
Technical Field
The invention relates to highlight removal, and in particular to an accurate highlight-removal method based on light-field iteration.
Background
Existing highlight-removal methods fall into three main categories: polarizer-based, single-image, and multi-image methods. A polarizer can filter highlights according to the surface material, but the estimated specular component shows large errors at boundary regions because of chromatic aberration and misregistration between the polarization images. Single-image methods mainly use the dichromatic reflection model to remove local highlights, but most rely on prior knowledge, require known light-source parameters, and adapt poorly. Multi-image methods usually assume a fixed light source or a changing viewpoint and remove highlights with feature-point matching, but they must control the moving viewpoint and have high time and space complexity.
Disclosure of Invention
Aiming at the defects of the prior art, the invention discloses an accurate highlight-removal method based on light-field iteration, comprising the following steps: (1) extracting the central-view image from the light-field data and exhaustively detecting every specular pixel with an adaptive threshold and a light-field iterative algorithm; refocusing to the angular domain of each specular candidate, computing the angular pixel variance, and thresholding it to separate saturated from unsaturated pixels; converting the unsaturated pixels to HSI space for clustering and processing them with the dichromatic reflection model combined with the intrinsic image. (2) An adaptive-direction processing method for the saturated pixels: an edge-detection operator removes the edge directions that would interfere with reconstructing the saturated pixels; then, based on the multiple angular views of the light field, a Gaussian probability-distribution model is proposed and applied over the light-field angular domain as a weighted sum that replaces the saturated highlight pixels. (3) A method for quantitatively evaluating the highlight-removal result by combining the specular residual ratio (SR) and the image information entropy (H) is provided.
In order to achieve this purpose, the invention provides the following technical scheme: an accurate highlight-removal method based on light-field iteration, comprising the following steps (see Fig. 1 for the general flow and Fig. 2 for the detailed flow):
step 1, obtaining a 5D light-field image from the raw light-field image, extracting the central-view image I_o from it, and obtaining the accurate light-field depth estimate I_dep and the intrinsic reflection image R_e of I_o;
step 2, converting I_o to the HSI color space, clustering the image brightness values into several classes, and taking the brightness value of the largest-centroid class as the threshold th1; pixels of I_o greater than th1 are taken as the specular candidate pixels I_cd;
step 3, using the depth map I_dep of step 1 to refocus to the angular domain of each specular candidate pixel, converting to HSI brightness space and computing the angular-domain variance; pixels with variance greater than the threshold th2 are unsaturated pixels L_us, otherwise they are saturated pixels L_s;
step 4, for the unsaturated pixels L_us obtained in step 3, preliminarily recovering the unsaturated pixel information with the dichromatic reflection model (1) combined with the intrinsic image, where I_b is the color information at point p, α(p) and β(p) are the diffuse and specular reflection coefficients at p, and R_b and R_s are the diffuse and specular colors at p;

I_b = α(p)R_b + β(p)R_s (1)
step 5, weakening the occlusion problem with a confidence measure on the intrinsic image R_e, defining the confidence w:

[Equation (4) appears as an image in the original]

a larger value indicates lower reliability of the unsaturated pixels recovered by I_b; the final unsaturated pixels are then obtained as:

I_b' = I_b * w + (1 - w) * R_e (5)
step 6, recovering the intrinsic color information of the saturated pixels L_s with an adaptive-direction highlight-removal method;
step 7, to make full use of the multi-view light-field data obtained in step 1, defining a light-field multi-view Gaussian probability-distribution model:

[Equation appears as an image in the original]

where λ and ρ control the amplitude of the probability distribution and (u_c, v_c) is the coordinate of the central view; the contribution of each view to recovering the central-view highlight point is then obtained:

[Equation appears as an image in the original]

where pro_ij is the contribution of the point at the (i, j)-th view of the light-field microimage (the image under each microlens of the raw light-field image) to the recovery of the central-view highlight point, and (x_i, y_j) indexes the (i, j)-th view coordinate of the microimage;
finally, the multiple light-field views are weighted to obtain the final highlight-free saturated-pixel image;
step 8, returning to step 2 to count the specular candidate pixels L_cd; if L_cd < L_th, the highlight-free image is output after highlight removal; otherwise, steps 2 to 7 are executed again.
Further, the method also comprises a step 9: for the highlight-free image output in step 8, it is proposed to quantitatively evaluate the image-processing result with the specular residual ratio SR and the image information entropy H, where SR is defined as:

[Equation for SR appears as an image in the original]

S_i being the specular candidate pixels of the input image and S_o those of the output image;

the image information entropy is used at the same time:

H = -Σ P(a_i) log2 P(a_i)

where P(a_i) is the proportion of pixels whose gray value is a_i, a_i is the pixel value of pixel i, and n is the number of pixels in the image;

finally, combining SR and H, SRH is obtained from the above analysis:

[Equation for SRH appears as an image in the original]
further, in the step 1, a group of original light field images are obtained by shooting through a light field camera Lytro2, 5D light field images are obtained by decoding through an LFTools toolbox, and a central visual angle image I is extracted from the 5D light field images through MATLABoObtaining accurate light field depth estimation as I by using the existing light field-based EPI structure tensor technologydepWhile obtaining I using the existing eigen-image-based decomposition methodoIntrinsic albedo image R ofe
Further, the image brightness values are clustered into four classes by using K-means clustering in the step 2.
Further, step 4 is implemented as follows: first, I_dep is used to refocus the unsaturated pixels to each angle of the light field, giving the angular pixel cluster C_p; K-means clustering in HSI brightness space splits C_p into two classes, the first-class centroid being the diffuse color R_b and the second-class centroid being the specular color (or mixed diffuse-plus-specular) R_s; the first-class pixel set, which contains no R_s term, is used, α(p) is set to 1, and the dichromatic reflection model gives:

I_b = α(p)R_b + β(p) * 0 (2)

I_b = R_b (3)
further, in step 6, a 9 × 9 window with highlight pixel P as the center is used, and the farther the distance P is, the higher the pixel weight is, the more the viewpoint (u) is obtained0,v0) And recovering the obtained saturated pixel points:
Figure BDA0002789085090000032
wherein, for effective omega direction information, | omega | means effectiveThe sum of the number of directions, m being the total number of window layers centered on P, Ib'(xk',yk',u0,v0) Is at (u)0,v0) Unsaturated pixel point of visual angle recovery is in (x)k',yk') light radiation of pixel point, q is 2, (x)k',yk') is the kth window pixel point in the 9 × 9 window;
the definition of the effective direction information is as follows: aiming at a saturated pixel point P, taking surrounding pixel points in 1-8 directions as candidate pixel points for recovering a P point, if the color of the pixel point in a certain direction is close to that of the P point, namely the pixel point is a highlight point, the pixel point in the direction cannot be used as candidate information for recovering the highlight point; meanwhile, in order to eliminate the interference of the edge information on highlight removal, a Canny edge detection operator is used for removing the invalid direction of the edge, and the final remaining valid direction is omega.
Further, the weighting of the multiple light-field views in step 7 to obtain the final highlight-free saturated-pixel image is implemented as:

[Equation appears as an image in the original]

where N is the number of views of the light-field image and (u_i, v_j) denotes a view in the light-field angular-domain coordinate system.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. In factory product inspection, and in particular machine-vision flaw detection of mechanical parts, highlight that is not sufficiently removed in high-precision settings prevents recovery of the workpiece's original color information and creates serious safety risks for downstream production. Light-field imaging, as a novel visual-imaging technique, obtains multiple angular views with a single exposure, avoiding the large cost and the installation and calibration burden of traditional multi-camera arrays and improving the space-time efficiency of product inspection.
2. Compared with traditional highlight-removal methods, the invention discloses a novel combined light-field highlight-removal algorithm that can detect every highlight pixel, with an efficiency as high as 99%.
3. The method uses multi-angle light-field information to divide the specular candidates into unsaturated and saturated pixels, applies the dichromatic reflection model combined with the intrinsic image to recover unsaturated highlight information while preserving image texture features, and adaptively selects valid directions to recover saturated highlight information using light-field angular-domain features.
4. The invention provides a quantitative evaluation algorithm for the image highlight-removal result that can monitor the removal effect in real time, providing quantitative feedback data for machine inspection and ensuring the reliability and safety of product inspection.
Drawings
FIG. 1 is a general flow diagram of the present invention.
FIG. 2 is a detailed flow chart of the present invention.
Fig. 3 is the distribution of saturated pixels: (a) the central-view image; (b) the window distribution of specular pixels.
FIG. 4 shows highlight removal and comparison on Stanford-dataset images in an embodiment of the invention.

Fig. 5 shows highlight removal and comparison on light fields captured with the Lytro2.

FIG. 6 compares the specular residual ratio and the image information entropy.
Detailed Description
The technical solution of the present invention is further explained with reference to the drawings and the embodiments.
As shown in Figs. 1 and 2, the present invention provides an accurate highlight-removal method based on light-field iteration, comprising the following steps:
step 1: shooting by using a light field camera (Lytro 2) to obtain a group of original light field images, decoding by using an LFTools toolbox to obtain 5D light field images, and extracting a central view angle image I from the 5D light field images by using MATLABoObtained by applying the prior art based on the EPI structure tensor technology of the light fieldAccurate depth estimation of the light field as Idep. While obtaining I using existing optimized eigen-image-based decomposition methodsoIs a reverse image R of the intrinsic imagee
Step 2: handle IoConverting the image into HSI color space (color space: chromaticity, saturation and brightness respectively), carrying out K-means clustering on the image by using Matlab, clustering the brightness value of the image into four classes, taking the brightness value of the largest class of centroid pixel points as a threshold th1, and regarding the image IoGreater than th1 is regarded as a mirror candidate pixel point Icd
Step 3: use the depth map I_dep of step 1 to refocus to the angular domain of each specular candidate pixel, convert to HSI brightness space, and compute the angular-domain variance; pixels with variance greater than the threshold th2 are unsaturated pixels L_us, otherwise they are saturated pixels L_s.
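The variance test of step 3 can be sketched as follows; a minimal NumPy version under the assumption that the per-candidate angular samples have already been gathered by refocusing (the array layout and threshold value are illustrative):

```python
import numpy as np

def split_candidates(angular_rgb, th2):
    """Step-3 sketch: angular_rgb has shape (P, N, 3) -- for each of P specular
    candidates, the RGB samples seen from N refocused views.  A saturated
    highlight is clipped identically in every view (low angular variance),
    while an unsaturated one still varies with viewing angle, so thresholding
    the HSI-intensity variance at th2 separates L_us from L_s."""
    intensity = np.asarray(angular_rgb, dtype=np.float64).mean(axis=2)
    variance = intensity.var(axis=1)      # variance over the angular domain
    unsaturated = variance > th2
    return unsaturated, ~unsaturated
```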
Step 4: for the unsaturated pixels L_us obtained in step 3, in order to recover their intrinsic color information, the unsaturated pixel information is recovered with the dichromatic reflection model (1) combined with the intrinsic image, where I_b is the color information at point p, α(p) and β(p) are the diffuse and specular reflection coefficients at p, and R_b and R_s are the diffuse and specular colors at p:

I_b = α(p)R_b + β(p)R_s (1)

The specific implementation is as follows: first, I_dep is used to refocus the unsaturated pixels to each angle of the light field, giving the angular pixel cluster C_p; K-means clustering in HSI (hue, saturation, intensity) brightness space splits C_p into two classes, the first-class centroid being the diffuse color R_b and the second-class centroid being the specular color (or mixed diffuse-plus-specular) R_s; here the first-class pixel set, which contains no R_s term, is used, α(p) is set to 1, and the recovered unsaturated pixels follow from the dichromatic model (1):

I_b = α(p)R_b + β(p) * 0 (2)

I_b = R_b (3)
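The two-class split of step 4 can be sketched for a single pixel; a minimal NumPy version in which the tiny 1-D 2-means stands in for the K-means clustering described above (names and sample layout are illustrative):

```python
import numpy as np

def recover_unsaturated(angular_rgb):
    """Step-4 sketch: cluster one unsaturated pixel's N angular RGB samples
    (shape (N, 3)) into two classes by HSI intensity with a tiny 2-means; the
    darker class is taken as pure diffuse reflection, so with alpha(p) = 1 and
    beta(p) = 0 the dichromatic model I_b = alpha*R_b + beta*R_s collapses to
    I_b = R_b, the mean color of the diffuse class."""
    samples = np.asarray(angular_rgb, dtype=np.float64)
    inten = samples.mean(axis=1)
    c_lo, c_hi = inten.min(), inten.max()
    if c_lo == c_hi:                      # all views agree: already diffuse
        return samples.mean(axis=0)
    for _ in range(50):                   # Lloyd iterations on 1-D intensity
        bright = np.abs(inten - c_hi) < np.abs(inten - c_lo)
        n_lo, n_hi = inten[~bright].mean(), inten[bright].mean()
        if n_lo == c_lo and n_hi == c_hi:
            break
        c_lo, c_hi = n_lo, n_hi
    return samples[~bright].mean(axis=0)  # R_b: mean color of the diffuse class
```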
and 5: in order to solve the problem that the pixels at the edge part of the light field texture have occlusion in the angular domain, an intrinsic image Re confidence coefficient measurement method is used for weakening the occlusion problem, and confidence coefficient is defined as follows:
Figure BDA0002789085090000061
if R is larger, it indicates IbThe lower the reliability of the recovered unsaturated pixel point is, the final unsaturated pixel point is obtained at the moment:
Ib'=Ib*w+(1-w)*Re (5)
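Equation (5) is a per-pixel blend; the short NumPy sketch below shows the broadcasting involved when w is a per-pixel confidence map and the images carry three color channels (the array shapes are an assumption):

```python
import numpy as np

def blend_with_intrinsic(I_b, R_e, w):
    """Eq. (5) as written in the text: I_b' = I_b * w + (1 - w) * R_e, blending
    the dichromatic recovery I_b with the intrinsic image R_e under the
    per-pixel confidence w (shape (H, W); color channels broadcast)."""
    w = np.asarray(w, dtype=np.float64)[..., None]   # (H, W) -> (H, W, 1)
    return np.asarray(I_b, dtype=np.float64) * w + (1.0 - w) * np.asarray(R_e, dtype=np.float64)
```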
step 6: to recover the saturated pixel point LsHere an adaptive directional highlight removal method is used. As shown in fig. 3, a saturated pixel point P, and surrounding pixels in the directions of 1-8 serve as candidate pixel points for recovering the P point, however, it can be noted that: the color of the pixel point in the 1, 2, 3 directions is close to that of the P point, i.e. the pixel point is also a highlight point, and therefore the pixel point cannot be used as candidate information for recovering the highlight point. Meanwhile, in order to eliminate the interference of the edge information on highlight removal, a Canny edge detection operator of Matlab is used here to remove the invalid edge direction, and the final remaining valid direction is Ω at this time. Meanwhile, it is noted that the blue pixel is the intrinsic color information of the object, which is the color attribute that P is to be restored at last, and the intrinsic information amount of the pixel point information near P is small, so that the invention uses a 9 × 9 window with highlight pixel point P as the center, and the pixel point weight farther from P is larger, and the viewpoint (u) is obtained0,v0) And recovering the obtained saturated pixel points:
Figure BDA0002789085090000062
wherein, omega is effective direction information, m is total number of window layers taking p as center, the invention uses m-4, the inventionIn the embodiment, the effective direction information is P4-P7, the sum of the effective direction numbers of the | omega | fingers, Ib'(xk',yk',u0,v0) Is at (u)0,v0) Unsaturated pixel point of visual angle recovery is in (x)k',yk') light radiation of pixel point, q is 2, (x)k',yk') is the kth window pixel in the 9 × 9 window.
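Since Eq. (6) survives only as an image, the sketch below is a hedged NumPy reading of the step-6 description: walk outward from P along each valid direction and weight distant pixels more. The exact weight law (k**q per ring) is an assumption consistent with "the farther from P, the larger the weight" and q = 2:

```python
import numpy as np

def recover_saturated(img, py, px, valid_dirs, m=4, q=2):
    """Step-6 sketch: from the highlight pixel P = (py, px), visit the k-th
    ring (k = 1..m) of a (2m+1) x (2m+1) window along each valid direction,
    weight it by k**q (farther from P counts more), and return the normalised
    weighted mean color.  Directions are indexed 1..8 as in Fig. 3."""
    steps = {1: (-1, 0), 2: (-1, 1), 3: (0, 1), 4: (1, 1),
             5: (1, 0), 6: (1, -1), 7: (0, -1), 8: (-1, -1)}
    acc, wsum = np.zeros(img.shape[2]), 0.0
    for d in valid_dirs:
        dy, dx = steps[d]
        for k in range(1, m + 1):                       # k-th ring along direction d
            y, x = py + k * dy, px + k * dx
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                weight = float(k) ** q                  # farther from P -> larger weight
                acc += weight * img[y, x]
                wsum += weight
    return acc / wsum
```

Passing, say, valid_dirs=[4, 5, 6, 7] mirrors the embodiment where the Canny step left only P4-P7 usable.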
Step 7: to make full use of the multi-view light-field data obtained in step 1, a light-field multi-view Gaussian probability-distribution model is defined:

[Equation appears as an image in the original]

where λ and ρ control the amplitude of the probability distribution (here λ = π, ρ = 1) and (u_c, v_c) is the coordinate of the central view; the contribution of each view to recovering the central-view highlight point is then obtained:

[Equation appears as an image in the original]

where pro_ij is the contribution of the point at the (i, j)-th view of the light-field microimage (the image under each microlens of the raw light-field image) to the recovery of the central-view highlight point, and (x_i, y_j) indexes the (i, j)-th view coordinate of the microimage.
The multiple light-field views are weighted to obtain the final highlight-free saturated-pixel image, finally yielding:

[Equation appears as an image in the original]

where N is the number of views of the light-field image, (u_i, v_j) denotes a view in the light-field angular-domain coordinate system, and m is the total number of window rings centered on p, with m = 4.
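The Gaussian model of step 7 survives only as an image, so the sketch below is a hedged NumPy reading: an isotropic Gaussian centred on the central view, with λ = π and ρ = 1 as in the embodiment. The functional form lam * exp(-((u-u_c)**2 + (v-v_c)**2) / rho**2) is an assumption consistent with the text:

```python
import numpy as np

def multiview_weights(n, lam=np.pi, rho=1.0):
    """Step-7 sketch: weight each view (u, v) of an n x n angular grid by
    lam * exp(-((u-u_c)**2 + (v-v_c)**2) / rho**2), where (u_c, v_c) is the
    central view, then normalise so all contributions sum to 1."""
    c = (n - 1) / 2.0                                   # central view (u_c, v_c)
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    pro = lam * np.exp(-(((u - c) ** 2 + (v - c) ** 2) / rho ** 2))
    return pro / pro.sum()

def replace_saturated_pixel(angular_rgb):
    """Weighted sum over the n x n views replacing one saturated pixel."""
    w = multiview_weights(angular_rgb.shape[0])
    return (w[..., None] * np.asarray(angular_rgb, dtype=np.float64)).sum(axis=(0, 1))
```

Views near the centre dominate the sum, so the replacement color stays close to the central perspective while still averaging away the clipped highlight.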
Step 8: return to step 2 to count the specular candidates L_cd; if L_cd < L_th (here L_th = 30), go to step 9; otherwise steps 2 to 7 are executed again.
Step 9: for the highlight-free image obtained in step 7, the image-processing result is quantitatively evaluated with the specular residual ratio (SR) and the existing image information entropy (H), where SR is defined as:

[Equation for SR appears as an image in the original]

S_in being the specular candidate pixels of the input image and S_out those of the output image;

the image information entropy is used at the same time:

H = -Σ P(a_i) log2 P(a_i)

where P(a_i) is the proportion of pixels whose gray value is a_i, a_i is the pixel value of pixel i, and n is the number of pixels in the image. From the above analysis, combining SR and H yields:

[Equation for SRH appears as an image in the original]
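The two metrics of step 9 can be sketched in NumPy. The SR formula survives only as an image, so the reading below (remaining output candidates over input candidates, SR near 0 meaning the highlight was fully removed) is an assumption; the entropy is the standard Shannon form matching the text's description of P(a_i):

```python
import numpy as np

def specular_residual_ratio(s_in, s_out):
    """SR sketch (exact patent formula not reproduced): fraction of input
    specular-candidate pixels still flagged in the output."""
    return np.count_nonzero(s_out) / np.count_nonzero(s_in)

def image_entropy(gray):
    """Image information entropy H = -sum_i P(a_i) * log2 P(a_i) over the
    gray-level histogram, where P(a_i) is the proportion of pixels whose
    gray value is a_i."""
    hist = np.bincount(np.asarray(gray, dtype=np.uint8).ravel(), minlength=256)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```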
Embodiment:
(1) The method was applied to four sets of light-field pictures: images from the Stanford dataset (Lego Truck, Amethyst) and plants and metal parts shot with the Lytro2 under natural light. These data contain complex textures, fine texture transitions, and strong highlights; even so, comparison with the methods of Shen, Yang, and Akashi shows that a better removal result is obtained, with the processing results and comparisons given in Fig. 4 and Fig. 5.
(2) Using the six groups of pictures processed in (1), the highlight-removal results of this method and of the other methods were quantitatively evaluated with step 9, giving the line graphs of Fig. 6 for comparison with the data in Table 1. As seen in Fig. 6(a), the solid black line stays close to 1, showing that the invention effectively detects and removes highlight regions across different scenes; Fig. 6(c) shows that the image-processing result of the invention exceeds the other methods by at least 20%. Compared with the other methods, the invention removes highlights accurately and with high quality while recovering the original texture information of the highlight regions.
TABLE 1 Comparison of the SRH of the present invention with other methods

[Table 1 appears as an image in the original]
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (6)

1. The accurate high-reflectivity removing method based on light field iteration is characterized by comprising the following steps of:
step 1, obtaining a 5D light-field image from the raw light-field image, extracting the central-view image I_o from it, and obtaining the accurate light-field depth estimate I_dep and the intrinsic reflection image R_e of I_o;
step 2, converting I_o to the HSI color space, clustering the image brightness values into a plurality of classes, and taking the brightness value of the largest-centroid class as the threshold th1; pixels of I_o greater than th1 are taken as the specular candidate pixels I_cd;
step 3, using the depth map I_dep of step 1 to refocus to the angular domain of each specular candidate pixel, converting to HSI brightness space and computing the angular-domain variance; pixels with variance greater than the threshold th2 are unsaturated pixels L_us, otherwise they are saturated pixels L_s;
step 4, for the unsaturated pixels L_us obtained in step 3, preliminarily recovering the unsaturated pixel information with the dichromatic reflection model (1) combined with the intrinsic reflection image, where I_b is the image recovered from the unsaturated information, α(p) and β(p) are the diffuse and specular reflection coefficients at point p, and R_b and R_s are the diffuse and specular colors at p;

I_b = α(p)R_b + β(p)R_s (1)
step 5, weakening the occlusion problem with a confidence measure on the intrinsic reflection image R_e, defining the confidence:

[Equation (4) appears as an image in the original]

a larger value indicates lower reliability of the unsaturated pixels recovered by I_b; the final unsaturated pixels are then obtained as:

I_b' = I_b * w + (1 - w) * R_e (5)
step 6, recovering the intrinsic color information of the saturated pixels L_s with an adaptive-direction highlight-removal method, implemented as follows:

using a 9 × 9 window centered on the highlight pixel p, with pixels farther from p weighted more heavily, the saturated pixel recovered at viewpoint (u_0, v_0) is obtained as:

[Equation (6) appears as an image in the original]

where Ω is the valid-direction information, |Ω| is the number of valid directions, m is the total number of window rings centered on p, I_b'(x_k', y_k', u_0, v_0) is the radiance at pixel (x_k', y_k') of the unsaturated image recovered at view (u_0, v_0), q = 2, and (x_k', y_k') is the k-th window pixel in the 9 × 9 window;

the valid directions are defined as follows: for a saturated pixel p, the surrounding pixels in directions 1-8 serve as candidates for recovering p; if the pixels along some direction have a color close to that of p, i.e. they are themselves highlight points, that direction cannot supply candidate information for recovering the highlight point; meanwhile, to eliminate interference from edge information, a Canny edge-detection operator removes the directions invalidated by edges, and the finally obtained valid-direction set is Ω;
step 7, to make full use of the multi-view light-field data obtained in step 1, defining a light-field multi-view Gaussian probability-distribution model:

[Equation appears as an image in the original]

where λ and ρ control the amplitude of the probability distribution and (u_c, v_c) is the coordinate of the central view; the contribution of each view to recovering the central-view highlight point is then obtained:

[Equation appears as an image in the original]

where pro_ij is the contribution of the point at the (i, j)-th view of the light-field microimage (the image under each microlens of the raw light-field image) to the recovery of the central-view highlight point, and (x_i, y_j) indexes the (i, j)-th view coordinate of the microimage;

finally, the multiple light-field views are weighted to obtain the final highlight-free saturated-pixel image;
step 8, returning to step 2 to count the specular candidate pixels L_cd; if L_cd < L_th, the highlight-free image is output after highlight removal; otherwise, steps 2 to 7 are executed again.
2. The accurate hyperreflection removal method based on light field iteration of claim 1, wherein: the method further comprises a step 9: for the highlight-free image output in step 8, the image-processing result is quantitatively evaluated with the specular residual ratio SR and the image information entropy H, where SR is defined as:

[Equation for SR appears as an image in the original]

S_in being the specular candidate pixels of the input image and S_out those of the output image;

the image information entropy is used at the same time:

H = -Σ P(a_i) log2 P(a_i)

where P(a_i) is the proportion of pixels whose gray value is a_i, a_i is the pixel value of pixel i, and n is the number of pixels in the image; combining SR and H, SRH is then obtained from the above analysis:

[Equation for SRH appears as an image in the original]
3. The accurate hyperreflection removal method based on light field iteration of claim 1, wherein: in step 1, a set of raw light-field images is captured with a Lytro Illum light-field camera and decoded into 5D light-field images with the LFToolbox; the central-view image I_o is extracted from the 5D light-field images in MATLAB; the accurate light-field depth estimate I_dep is obtained with the existing light-field EPI structure-tensor technique, while the intrinsic reflection image R_e of I_o is obtained with an existing intrinsic-image decomposition method.
4. The accurate hyperreflection removal method based on light field iteration of claim 1, wherein: in step 2, the image brightness values are clustered into four classes using K-means clustering.
5. The accurate hyperreflection removal method based on light field iteration of claim 1, wherein: step 4 is specifically implemented as follows,
first, Idep is used to refocus the unsaturated pixel points to each angle of the light field, giving the angular pixel cluster Cp, and K-means clustering is used in the HSI luminance space to cluster Cp into two classes, where the centroid of the first class is the diffuse reflection color Rb and the centroid of the second class is the specular reflection color Rs; using the set of pixel points of the first class, Rs is taken as 0 and α(p) is set to 1, which gives, from the dichromatic reflection model:
Ib=α(p)Rb+β(p)·0 (2)
Ib=Rb (3).
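Equations (2)-(3) come from the dichromatic reflection model I = α(p)Rb + β(p)Rs. As a hypothetical numerical illustration (function and variable names are assumptions, not the patent's), once the two cluster centroids Rb and Rs are known, each pixel's α(p) and β(p) can be recovered by least squares and only the diffuse part α(p)Rb kept:

```python
import numpy as np

def diffuse_component(pixels, r_b, r_s):
    """Split each RGB pixel I into alpha*R_b + beta*R_s (dichromatic
    reflection model) via least squares and keep the diffuse part
    alpha*R_b; with alpha = 1 and beta = 0 this reduces to Ib = Rb."""
    A = np.stack([r_b, r_s], axis=1)            # 3x2 basis of cluster centroids
    coef, *_ = np.linalg.lstsq(A, pixels.T, rcond=None)
    alpha = coef[0]                             # per-pixel diffuse coefficient
    return alpha[:, None] * np.asarray(r_b)[None, :]
```

A pixel built as 1.0*Rb + 0.5*Rs is thereby restored to its pure diffuse color Rb, which is exactly the effect equations (2)-(3) describe.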
6. The accurate hyperreflection removal method based on light field iteration of claim 1, wherein: in step 7, the multiple viewing angles of the light field are weighted to obtain the final highlight-free image of the saturated pixel points;
Figure FDA0003061639490000031
where N is the number of views of the light field image, and ui, vj represent the viewing angle in the light field angular-domain coordinate system.
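The weighted multi-view fusion of claim 6 can be sketched as follows. The patent's weighting formula is referenced only as a figure, so a normalized (by default uniform, 1/N) weighting over the views is assumed here:

```python
import numpy as np

def fuse_views(views, weights=None):
    """Weighted fusion over the N light-field views (u_i, v_j) of step 7.
    `views` has shape (N, H, W) or (N, H, W, 3); the actual per-view
    weights are given by the patent's (unreproduced) formula, so a
    normalized uniform weighting is assumed when none is supplied."""
    views = np.asarray(views, dtype=float)
    if weights is None:
        weights = np.full(len(views), 1.0 / len(views))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()           # normalize so weights sum to 1
    # Contract the view axis: sum_n w_n * view_n.
    return np.tensordot(weights, views, axes=1)
```

Averaging the de-highlighted estimates from all views suppresses per-view residual errors in the saturated regions.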
CN202011308683.9A 2020-11-20 2020-11-20 Accurate high-reflectivity removing method based on light field iteration Active CN112419185B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011308683.9A CN112419185B (en) 2020-11-20 2020-11-20 Accurate high-reflectivity removing method based on light field iteration


Publications (2)

Publication Number Publication Date
CN112419185A CN112419185A (en) 2021-02-26
CN112419185B true CN112419185B (en) 2021-07-06

Family

ID=74774425


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436206B (en) * 2021-06-17 2022-03-15 易普森智慧健康科技(深圳)有限公司 Pathological tissue section scanning area positioning method based on cluster segmentation
CN115082477B (en) * 2022-08-23 2022-10-28 山东鲁芯之光半导体制造有限公司 Semiconductor wafer processing quality detection method based on light reflection removing effect

Citations (2)

Publication number Priority date Publication date Assignee Title
CN106803892A (en) * 2017-03-13 2017-06-06 中国科学院光电技术研究所 A kind of light field high-resolution imaging method based on Optical field measurement
CN110599400A (en) * 2019-08-19 2019-12-20 西安理工大学 EPI-based light field image super-resolution method

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US10496876B2 (en) * 2016-06-30 2019-12-03 Intel Corporation Specular light shadow removal for image de-noising
CN107103589B (en) * 2017-03-21 2019-09-06 深圳市未来媒体技术研究院 A kind of highlight area restorative procedure based on light field image
CN107330866B (en) * 2017-06-16 2021-03-05 Oppo广东移动通信有限公司 Method and device for eliminating highlight area and terminal
CN107481201A (en) * 2017-08-07 2017-12-15 桂林电子科技大学 A kind of high-intensity region method based on multi-view image characteristic matching
CN110390648A (en) * 2019-06-24 2019-10-29 浙江大学 A kind of image high-intensity region method distinguished based on unsaturation and saturation bloom
CN111080686B (en) * 2019-12-16 2022-09-02 中国科学技术大学 Method for highlight removal of image in natural scene
CN111369455B (en) * 2020-02-27 2022-03-18 复旦大学 Highlight object measuring method based on polarization image and machine learning
CN111652823A (en) * 2020-06-12 2020-09-11 华东交通大学 Method for measuring high-reflectivity object by structured light based on color information




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant