CN110322496A - Saliency measure based on depth information - Google Patents

Saliency measure based on depth information

Info

Publication number
CN110322496A
CN110322496A
Authority
CN
China
Prior art keywords
image
gbvs
activation
pixel
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910484632.2A
Other languages
Chinese (zh)
Inventor
徐湘忆
吴天逸
苏磊
胡正勇
田昊洋
季怡萍
廖巍
崔律
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI JUDIAN ELECTRIC EQUIPMENT Co Ltd
State Grid Shanghai Electric Power Co Ltd
East China Power Test and Research Institute Co Ltd
Original Assignee
SHANGHAI JUDIAN ELECTRIC EQUIPMENT Co Ltd
State Grid Shanghai Electric Power Co Ltd
East China Power Test and Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI JUDIAN ELECTRIC EQUIPMENT Co Ltd, State Grid Shanghai Electric Power Co Ltd, East China Power Test and Research Institute Co Ltd filed Critical SHANGHAI JUDIAN ELECTRIC EQUIPMENT Co Ltd
Priority to CN201910484632.2A
Publication of CN110322496A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image saliency measure based on depth information. Building on the GBVS saliency metric, it introduces the depth information of image pixels and thereby constructs a more balanced and more comprehensive image saliency measure. The disclosed measure fully accounts for the fact that well-focused pixels are relatively clear and salient while poorly focused pixels are blurred: a pixel with high brightness or contrast but severe defocus receives a smaller saliency value. The technical solution of the invention is simple, highly robust and practical; it characterizes the saliency and clarity of image pixels well, and its range of application is wide.

Description

Saliency measure based on depth information
Technical field
The present invention relates to image saliency measures, and specifically to an image saliency measure based on depth information.
Background technique
Saliency measurement is a core problem in image processing: the key question it answers is which regions and pixels of an image should receive more attention during processing, and which should receive less. Saliency measurement is modeled on the human visual attention mechanism, one of the important characteristics of visual perception, which has attracted the research and attention of many image-processing scholars since the last century. Even when facing a complex, cluttered, or entirely irregular image, humans can quickly locate its important parts through actively selective mental activity, usually giving other, unimportant regions only a rough glance or ignoring them completely. This mechanism reduces the amount of visual information the human brain receives and thereby improves its processing efficiency. Image saliency measurement exploits exactly this attention mechanism of the human eye. Through saliency assessment, key information areas of an image can be labeled as salient regions or regions of interest, and processing can then concentrate on these regions. Saliency measurement plays an irreplaceable role in fields such as image compression, image coding, and image enhancement.
Currently, significance measure model can be largely divided into three classes: significance measure from bottom to top, such as ltti algorithm Deng;Top-down significance measure, such as AC algorithm, SR algorithm;In conjunction with from bottom to top with top-down algorithm, such as GBVS Algorithm.Although these methods can capture the profile of object in more complicated background, to the front and back scape information of image It is insensitive, the higher partial error of contrast it can will be judged as significant region in defocus region.Therefore, the reality in image Important information may be blanked, and be unable to get sufficient concern.
Summary of the invention
The aim of this method is to overcome the deficiencies of the above saliency measurement methods by proposing a more stable image saliency measure based on depth information.
The principle of the invention is as follows:
(1) GBVS significant assessment
Jonathan Harel proposed a bottom-up saliency assessment algorithm based on graph computation: Graph-Based Visual Saliency (GBVS). The algorithm comprises three main steps: feature vector extraction, activation map generation, and activation map normalization.
In feature vector extraction, GBVS uses biologically inspired filters, similar to those of the Itti algorithm, to simulate the visual system of living organisms.
Activation map generation is realized by differencing feature maps across scales and introducing a Markov chain: dissimilarity and saliency are computed between the nodes of each graph to define the weights of the graph's edges for the Markov chain, and the equilibrium distribution over the positions of the graph is taken as the value of the activation map. GBVS itself does not attend to the connections or similarity between feature vectors.
Traditional activation map normalization mainly falls into a few classes of methods: normalization based on local maxima; convolution iteration with a difference-of-Gaussians filter; and a non-linear procedure that divides local feature values by a weighted average of neighboring activation values. GBVS takes a different route: it again defines a Markov chain over the activation map, so that activation mass flows towards and concentrates at highly activated locations.
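As a rough illustration of the Markov-chain step described above (not the patent's or Harel's reference implementation), the sketch below builds a transition matrix whose edge weights are a log-ratio dissimilarity times a Gaussian distance falloff, then finds the equilibrium distribution by damped power iteration. The function name `gbvs_activation`, the parameter `sigma`, and the choice of dissimilarity are illustrative assumptions:

```python
import numpy as np

def gbvs_activation(feature_map, sigma=2.0, iters=100):
    """Sketch of the GBVS activation step: treat each location as a graph
    node, weight edges by feature dissimilarity times a distance falloff,
    and take the Markov-chain equilibrium distribution as activation."""
    h, w = feature_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    f = feature_map.ravel().astype(float)

    # dissimilarity d(i,j) = |log(M(i)/M(j))| between node feature values
    eps = 1e-9
    logf = np.log(np.abs(f) + eps)
    dissim = np.abs(logf[:, None] - logf[None, :])

    # distance falloff F = exp(-dist^2 / (2 sigma^2)) between node positions
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    w_mat = dissim * np.exp(-d2 / (2 * sigma ** 2))

    # row-normalize into a Markov transition matrix
    P = w_mat / (w_mat.sum(axis=1, keepdims=True) + eps)

    # damped power iteration for the stationary (equilibrium) distribution
    v = np.full(h * w, 1.0 / (h * w))
    for _ in range(iters):
        v = 0.5 * v + 0.5 * (v @ P)
    return v.reshape(h, w)
```

On a uniform map with a single outlier value, mass from every ordinary node flows to the outlier, so the outlier receives the largest activation, which is the behavior the activation map is meant to express.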
(2) depth of focus is estimated
For the depth of focus of an image, the present method uses a blur estimation method based on Gaussian gradients. This method is robust even in complex scenes with heavy noise, blurred edges, and edge crossings. Fig. 1 shows focused and defocused imaging through a thin lens. Let d(r) denote the distance from the object corresponding to image pixel r to the lens. When the object lies in the focal plane (at distance d_F), all radiation from the object converges onto a single sensor point, and the image pixel appears sharp and prominent. Radiation emitted from an object at distance d(r) = d + d_F spreads over multiple pixels of the image and produces a blurred region. The diameter s(r) of the blur circle can be computed by the following expression:
s(r) = (|d(r) - d_F| / d(r)) · F_0^2 / (N (d_F - F_0))   (1)
where F_0 and N are the focal length and the f-number of the lens, respectively.
In this method, the main flow of estimating the depth of focus is shown in Fig. 2. First, the edges of the input image are re-blurred with a Gaussian kernel; then the gradient-magnitude ratio between the input image and the re-blurred image is computed at step edges; from the maximum gradient magnitude at each edge location, the depth d(r) of the edge pixel r is estimated; finally, the estimated depths at edge locations are extended to the whole image by interpolation.
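The edge-level core of this flow can be sketched in 1-D (a stand-in for the 2-D image case; function names are illustrative, not from the patent). The standard result behind Gaussian-gradient blur estimation is that a step edge blurred by an unknown sigma and re-blurred with a known sigma0 has gradient-magnitude ratio R = sqrt(sigma^2 + sigma0^2) / sigma at the edge center, so sigma = sigma0 / sqrt(R^2 - 1); sigma then maps to depth via the thin-lens relation (1):

```python
import numpy as np

def estimate_edge_blur(signal, sigma0=1.0):
    """Estimate the unknown blur sigma of a 1-D step edge from the
    gradient-magnitude ratio between the input and a re-blurred copy:
    R = |grad I| / |grad I_reblur| = sqrt((s^2 + s0^2) / s^2) at the edge,
    hence s = s0 / sqrt(R^2 - 1)."""
    # Gaussian re-blur with known sigma0 (direct convolution, numpy only)
    radius = int(4 * sigma0) + 1
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma0**2))
    kernel /= kernel.sum()
    reblurred = np.convolve(signal, kernel, mode="same")

    g1 = np.abs(np.gradient(signal))
    g2 = np.abs(np.gradient(reblurred))
    i = np.argmax(g1)                  # edge location = maximum gradient
    R = g1[i] / g2[i]                  # ratio > 1 at a blurred edge
    return sigma0 / np.sqrt(R**2 - 1.0)

def blurred_step(n=201, sigma=2.0):
    """Synthetic step edge blurred with a known sigma (cumulative Gaussian)."""
    x = np.arange(n) - n // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    return np.cumsum(g / g.sum())
```

Feeding in a step edge blurred with sigma = 2 and re-blurring with sigma0 = 1 recovers an estimate close to 2, up to discretization error.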
Technical solution of the invention is as follows:
A kind of saliency measure based on depth information, it is characterized in that, it the described method comprises the following steps:
Step S1: compute the GBVS saliency value of the image; denote the saliency value at pixel location (i, j) by sm(i, j).
Step S2: compute the depth of focus of the image with the Gaussian-gradient blur estimation method; denote the depth at pixel location (i, j) by d(i, j).
Step S3: compute the saliency measure of pixel location (i, j) of the image as:
sdm(i, j) = sm(i, j) · d(i, j)^(-2)   (2)
In step S1, the GBVS saliency value sm(i, j) of each image pixel is computed with the GBVS method, comprising the three stages of feature vector extraction, activation map generation, and activation map normalization. In the feature vector extraction stage, GBVS uses biologically inspired filters to simulate the visual system of living organisms. The activation map generation stage differences feature maps across scales and introduces a Markov chain: dissimilarities between graph nodes define the edge weights of the Markov chain, and the equilibrium distribution over the positions of the graph is taken as the value of the activation map. In the normalization stage, GBVS again defines a Markov chain over the activation map, so that activation mass flows towards and concentrates at highly activated locations.
In step S2, the depth estimate of each pixel is computed with the Gaussian-gradient blur estimation method, specifically: re-blur the input image with a Gaussian; extract the edge-location gradients of the input image and of the re-blurred image with the Canny operator; compute the gradient-magnitude ratio at the edge locations, and from it estimate the depth values at the edges; finally, obtain depth values for the whole image by interpolation.
In the present invention, it is assumed that the saliency measure sought is inversely proportional to the square of the depth value. This is an empirically motivated assumption.
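A minimal sketch of the combination rule of formula (2); the function name `depth_weighted_saliency` is illustrative, and the epsilon guard against zero depth is an added safety measure not stated in the description:

```python
import numpy as np

def depth_weighted_saliency(sm, d, eps=1e-6):
    """Combine a GBVS saliency map sm with a per-pixel depth map d using
    sdm(i,j) = sm(i,j) * d(i,j)^(-2): well-focused (small-depth) pixels
    are boosted, defocused (large-depth) pixels are damped."""
    return sm / np.maximum(d, eps) ** 2
```

For two pixels with the same GBVS value but depths 1 and 2, the shallower pixel ends up with a four times larger measure, reflecting the inverse-square assumption.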
Compared with the prior art, the beneficial effects of the present invention are as follows. Building on GBVS saliency assessment, the method further considers the depth information of the image: on the one hand it retains GBVS's strength in graph-based saliency segmentation, and on the other hand, by estimating the depth of each pixel, it fully accounts for pixel clarity. In general, the smaller the depth, the better the focus state, and the clearer and more salient the pixel. Overall, the depth-based saliency measure overcomes the problem of GBVS mistaking some highlighted but defocused pixels for salient points, and the introduction of depth information makes the saliency measure more balanced and comprehensive. Image saliency measurement is important and widely used; the saliency measurement technique of the present invention can be applied in fields such as image compression, image coding, image edge or region enhancement, target segmentation and extraction, and image fusion, and the idea of introducing depth information is also worth extending to other image features and processing details.
Detailed description of the invention
Fig. 1 is a schematic diagram of focus and defocus in a thin lens
Fig. 2 is the flow chart of depth information estimation
Fig. 3 is an original image focused on the right
Fig. 4 is the saliency measure result for the original image of Fig. 3
Specific embodiment
The technical problem to be solved by the present invention is to provide a method that can accurately measure the saliency of image pixels.
The image saliency method based on depth information disclosed by this invention comprises the following steps:
Step S1: compute the GBVS saliency value of the image, denoting the saliency value at pixel location (i, j) by sm(i, j). This comprises the three stages of feature vector extraction, activation map generation, and activation map normalization. In the feature vector extraction stage, GBVS uses biologically inspired filters to simulate the visual system of living organisms. The activation map generation stage differences feature maps across scales and introduces a Markov chain: dissimilarities between graph nodes define the edge weights of the Markov chain, and the equilibrium distribution over the positions of the graph is taken as the value of the activation map. In the normalization stage, GBVS again defines a Markov chain over the activation map, so that activation mass flows towards and concentrates at highly activated locations.
Step S2: compute the depth of focus of the image with the Gaussian-gradient blur estimation method, denoting the depth at pixel location (i, j) by d(i, j). Specifically: re-blur the input image with a Gaussian; extract the edge-location gradients of the input image and of the re-blurred image with the Canny operator; compute the gradient-magnitude ratio at the edge locations, and from it estimate the depth values at the edges; obtain depth values for the whole image by interpolation.
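The final interpolation step, spreading depth values known only at edge pixels to the whole image, is only named in the description; as one possible stand-in, the sketch below uses a crude nearest-edge fill (real implementations typically use smoother matting- or guided-filter-style interpolation; the function name is illustrative):

```python
import numpy as np

def propagate_edge_depths(depth_sparse, mask):
    """Spread depth values known only at edge pixels (mask == True) to
    every pixel by assigning each pixel the depth of its nearest edge
    pixel -- a simple placeholder for the interpolation step."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1)
    vals = depth_sparse[mask]
    h, w = depth_sparse.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            d2 = (pts[:, 0] - y) ** 2 + (pts[:, 1] - x) ** 2
            out[y, x] = vals[np.argmin(d2)]
    return out
```

With two known edge depths in opposite corners, each half of the image inherits the depth of its nearer edge, giving a dense (if blocky) depth map d(i, j).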
Step S3: construct the saliency measure value of each image pixel according to formula (2):
sdm(i, j) = sm(i, j) · d(i, j)^(-2)   (2)
Taking the right-focused image shown in Fig. 3 as an example, the saliency measure distribution obtained by the present method is shown in Fig. 4. GBVS attends to pixels of relatively high contrast, while the depth information gives the shallower-depth foreground more attention; composing the two improves the sensitivity of the saliency measure to the focus state. As Fig. 4 shows, the present method is less responsive to blurred-edge, insufficiently salient regions and concentrates on the key information areas of small depth and high contrast, which is of great help for subsequent targeted image processing.

Claims (3)

1. An image saliency measure based on depth information, characterized in that the method comprises the following steps:
Step S1: compute the GBVS saliency value sm(i, j) of each pixel (i, j) of the image using GBVS;
Step S2: compute the depth estimate d(i, j) of each pixel (i, j) of the image using the Gaussian-gradient blur estimation method;
Step S3: compute the saliency measure sdm(i, j) of the image by the formula:
sdm(i, j) = sm(i, j) · d(i, j)^(-2)
2. The image saliency measure based on depth information according to claim 1, characterized in that step S1 computes the GBVS saliency value sm(i, j) of each image pixel with the GBVS method, comprising the three stages of feature vector extraction, activation map generation, and activation map normalization; in the feature vector extraction stage, GBVS uses biologically inspired filters to simulate the visual system of living organisms; the activation map generation stage differences feature maps across scales and introduces a Markov chain, with dissimilarities between graph nodes defining the edge weights of the Markov chain and the equilibrium distribution over the graph taken as the value of the activation map; in the normalization stage, GBVS again defines a Markov chain over the activation map, so that activation mass flows towards and concentrates at highly activated locations.
3. The image saliency measure based on depth information according to claim 1, characterized in that step S2 computes the depth estimate of each pixel with the Gaussian-gradient blur estimation method, specifically: re-blur the input image with a Gaussian; extract the edge-location gradients of the input image and of the re-blurred image with the Canny operator; compute the gradient-magnitude ratio at the edge locations, and from it estimate the depth values at the edges; obtain the depth values of the whole image by interpolation.
CN201910484632.2A 2019-06-05 2019-06-05 Saliency measure based on depth information Pending CN110322496A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910484632.2A CN110322496A (en) 2019-06-05 2019-06-05 Saliency measure based on depth information


Publications (1)

Publication Number Publication Date
CN110322496A true CN110322496A (en) 2019-10-11

Family

ID=68120777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910484632.2A Pending CN110322496A (en) 2019-06-05 2019-06-05 Saliency measure based on depth information

Country Status (1)

Country Link
CN (1) CN110322496A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103996195A (en) * 2014-05-26 2014-08-20 清华大学深圳研究生院 Image saliency detection method


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
JONATHAN HAREL et al.: "Graph-Based Visual Saliency", IEEE *
RUNMIN CONG et al.: "Saliency Detection for Stereoscopic Images Based on Depth Confidence Analysis and Multiple Cues Fusion", arXiv *
YUN ZHANG et al.: "Stereoscopic Visual Attention Model for 3D Video", Springer-Verlag Berlin Heidelberg 2010 *
周洋 et al.: "Stereoscopic video saliency detection fusing binocular multi-dimensional perception features", Journal of Image and Graphics *
张海龙: "Research on stereoscopic visual saliency and its application in disparity control of stereoscopic images", Wanfang Data Knowledge Service Platform *
许义臣 et al.: "Depth recovery of defocused images based on edge gradient", Journal of Guizhou University (Natural Science Edition) *
陈梦婷 et al.: "An improved Object Bank scene classification method based on GBVS", Computer and Modernization *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191011