CN106203428B - Image significance detection method based on blur estimation fusion - Google Patents


Info

Publication number
CN106203428B
Authority
CN
China
Prior art keywords
image
blur estimation
fusion
vision
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610526947.5A
Other languages
Chinese (zh)
Other versions
CN106203428A (en)
Inventor
陈震中
丁晓颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201610526947.5A priority Critical patent/CN106203428B/en
Publication of CN106203428A publication Critical patent/CN106203428A/en
Application granted granted Critical
Publication of CN106203428B publication Critical patent/CN106203428B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The present invention provides an image saliency detection method based on blur estimation fusion, comprising a low-level visual feature acquisition stage and a blur feature application stage. In the low-level visual feature acquisition stage, an image to be detected is input and processed with a classical bottom-up saliency detection algorithm, and the resulting saliency feature map is taken as the low-level visual feature of the image to be detected. In the blur feature application stage, multiple groups of training image blocks are first input to train a sparse dictionary; the photographer's state of mind at shooting time is then simulated and quantified by means of blur estimation, and the quantified result is used to guide the fusion of the saliency information obtained under the different mechanisms, yielding the final image saliency detection result and improving detection accuracy. The image saliency detection result obtained by the present invention better matches the human visual saliency detection pattern, while also offering better robustness and higher detection accuracy.

Description

Image significance detection method based on blur estimation fusion
Technical field
The present invention relates to the field of image saliency detection, and in particular to an image saliency detection method based on blur estimation fusion.
Background technique
In recent years, with the continuous development of digital technology, photographs have become ever larger and their resolution ever higher, so the information they contain has become increasingly rich. Processing all of the information in an image in a timely manner is no small challenge for a computer image processing system. A selection mechanism is therefore needed to help process the information in an image selectively and thereby lighten the burden on the image processing system. Human visual saliency helps the human visual system single out the regions of an image that contain important information while suppressing interference from background regions. By applying a visual saliency model to computer image processing, the information in an image can be extracted more quickly and processing accuracy improved. How to simulate the human visual observation process and obtain a more efficient and accurate image saliency detection method has therefore become a pressing problem in computer vision.
At present, image saliency detection approaches fall into two categories. The first is bottom-up saliency detection based on low-level visual features such as color, orientation and contrast. Because it relies on the data characteristics of the image itself, this approach has several advantages, including high detection speed and low computational complexity, but it is prone to detection errors and is therefore seldom used alone. The second is top-down saliency detection based on high-level visual features, in which prior knowledge assists the saliency detection of the image. This approach achieves high detection accuracy and can be adjusted flexibly for different detection tasks, but its computational complexity is higher and it takes longer. The most active current research in saliency detection combines the bottom-up and top-down modes, exploiting both the low-level and the high-level visual features of the image and drawing on the advantages of both modes to obtain more efficient and accurate detection results. However, the currently popular fused bottom-up and top-down methods do not use the photographer's state of mind at shooting time to assist saliency detection, even though the photographer's shooting intent is of great significance for understanding an image. To address the deficiencies of existing saliency detection methods, the present invention therefore proposes, for the first time, to quantify the photographer's intent at shooting time and to use it as prior knowledge to assist image saliency detection.
Summary of the invention
To address the deficiencies of traditional image saliency detection methods, the present invention proposes a technical solution for an image saliency detection method based on blur estimation fusion.
The technical solution of the present invention provides an image saliency detection method based on blur estimation fusion, comprising a low-level visual feature acquisition stage and a blur feature application stage;
the low-level visual feature acquisition stage comprises the following steps:
Step 1.1, input the test image to be detected;
Step 1.2, according to the test image input in step 1.1, obtain the low-level visual features of the image using a bottom-up image saliency detection algorithm and generate the low-level visual saliency feature map of the image to be detected, denoted S_B.
The blur feature application stage comprises the following steps:
Step 2.1, input multiple groups of training image blocks, each group containing a focused image block P_f and a defocused image block P_d;
Step 2.2, vectorize each input training image group Y = {y_1, ..., y_n}, where y_1, ..., y_n denote the individual image blocks in one training image group and n denotes the number of image blocks in the group;
and train a sparse dictionary D so that the input data can be expressed by the following formula,
where x_i denotes the weight parameters of the different dictionary atoms blended to fit image block y_i, and k denotes the sparsity level of x_i.
Step 2.3, using the sparse dictionary D obtained in step 2.2, read the image data of the image to be detected input in step 1.1, perform the calculation, and obtain the blur estimation map S_D.
Step 2.4, draw the gray-level histogram H_D corresponding to the blur estimation map S_D obtained in step 2.3, and from the histogram analysis obtain the maximum value I_max and the minimum value I_min of the blur estimation map.
Step 2.5, using the maximum value I_max and the minimum value I_min of the blur estimation map S_D obtained in step 2.3, calculate the Michelson contrast C of the image as C = (I_max - I_min) / (I_max + I_min).
Step 2.6, map the Michelson contrast C calculated in step 2.5 into a preset range using the following formula,
where λ is the photographer's shooting-intent parameter, and the values of parameter a and parameter b correspond to the preset range.
Step 2.7, according to the blur estimation map S_D obtained in step 2.3, obtain the focal region of the image to be detected and take the local maximum points of the focal region as visual fixation points F_i; perform Gaussian smoothing on each visual fixation point to obtain the visual density map D_i based on fixation point F_i; superpose the visual density maps D_i of all fixation points F_i to obtain the saliency feature map based on blur estimation fusion, denoted the high-level visual saliency feature map S_T.
Step 2.8, using the photographer's shooting-intent parameter λ obtained in step 2.6 as the fusion weight, guide the fusion of the saliency feature maps obtained under the different mechanisms, the fusion formula being
S = (1 - λ) S_B + λ S_T
where S denotes the image saliency detection result, S_B is the low-level visual saliency feature map, and S_T is the high-level visual saliency feature map.
Moreover, in step 2.6 the preset range is [0.2, 0.8].
Moreover, the value of parameter a is -1.38 and the value of parameter b is 2.76.
Compared with the prior art, the present invention has the following advantages:
1. The present invention fully considers the influence of the photographer's state of mind on the shooting result and introduces the psychology of photography to assist image saliency detection, which better matches the perception of the human visual system.
2. The present invention innovatively introduces the photographer's shooting-intent parameter and uses it to assist the fusion of different saliency information, giving the method a better theoretical basis and more accurate results.
3. The saliency detection method proposed by the present invention can markedly improve the saliency detection accuracy of an image, has strong robustness, and is readily extensible.
Specific embodiment
The method proposed by the present invention blends the bottom-up and top-down saliency detection modes. It proposes a framework that simulates and quantifies the photographer's state of mind at shooting time and uses the result of that simulation and quantification to guide the fusion of the saliency information obtained under the different mechanisms, performing detection by jointly exploiting the low-level visual features and the blur features of the image.
The present invention first calculates the low-level visual feature map of the input image to be detected. It then simulates the photographer's state of mind at shooting time by performing blur estimation on the image, calculates the viewer's visual fixation points, and takes the corresponding visual density map as the high-level visual feature map of the image. Next, the shooting-intent parameter is quantified from the blur estimation result, strengthening the object the photographer intended to emphasize (that is, the object the viewer is most likely to attend to), and the quantified result is used to guide the fusion of the high-level visual feature map with the low-level visual feature map, producing the final image saliency detection result. By simulating the photographer's state of mind at shooting time, the present invention captures the intrinsic relation between shooting intent and viewer perception, so the resulting image saliency detection result better matches the human visual saliency detection pattern. At the same time, by jointly exploiting the low-level and high-level visual features of the image, it offers better robustness and higher detection accuracy.
The technical solution of the present invention can be implemented as computer software that runs automatically. The technical solution is described in detail below with reference to an embodiment.
The embodiment comprises a low-level visual feature acquisition stage and a blur feature application stage.
The low-level visual feature acquisition stage comprises the following steps:
Step 1.1, input the test image to be detected;
Step 1.2, according to the test image input in step 1.1, obtain the low-level visual features of the image using a classical bottom-up image saliency detection algorithm and generate the low-level visual saliency feature map of the image to be detected, denoted S_B.
Through the above low-level visual feature acquisition stage, every test image to be detected in this embodiment is processed to obtain its corresponding low-level visual saliency feature map, which is used in the subsequent fusion of saliency features obtained under the different mechanisms.
Classical bottom-up image saliency detection algorithms are described in the following documents and are not detailed further here:
[1] X. Hou and L. Zhang, "Saliency detection: a spectral residual approach," in Proc. CVPR, 2007.
[2] R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk, "Frequency-tuned salient region detection," in Proc. CVPR, 2009.
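As an illustration of step 1.2, the following is a minimal sketch (in Python) of the spectral residual approach of document [1], one way to compute the low-level visual saliency feature map S_B; the working size, smoothing parameters and function name are illustrative choices rather than values taken from the patent.

import cv2
import numpy as np

def spectral_residual_saliency(image_bgr, work_size=64):
    """Sketch of the spectral residual method [1] for the low-level map S_B."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    small = cv2.resize(gray, (work_size, work_size))
    spectrum = np.fft.fft2(small)
    log_amp = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    residual = log_amp - cv2.blur(log_amp, (3, 3))      # spectral residual
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = cv2.GaussianBlur(saliency.astype(np.float32), (0, 0), sigmaX=2.5)
    saliency = cv2.resize(saliency, (image_bgr.shape[1], image_bgr.shape[0]))
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

If the opencv-contrib package is available, cv2.saliency.StaticSaliencySpectralResidual_create() provides a ready-made implementation that can be used in place of the sketch above.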
The blur feature application stage comprises the following steps:
Step 2.1: During shooting, a photographer often emphasizes the object to be photographed by adjusting the focal length, so the object in sharper focus is more likely to be the content the photographer wants to emphasize. This method therefore uses blur estimation of the image to assess which content in the image the photographer intended to emphasize. Input multiple groups of different training image blocks, each group containing a focused image block P_f and a defocused image block P_d; in this embodiment, image blocks of size 8 x 8 are used for training.
The training image blocks are obtained by partitioning thousands of randomly captured images, with a certain degree of Gaussian blur applied to image blocks. In a specific implementation, the focused and defocused image blocks can be pre-selected by those skilled in the art.
Step 2.2: Vectorize each input training image group Y = {y_1, ..., y_n}, where y_1, ..., y_n denote the individual image blocks in one training image group and n denotes the number of image blocks in the group; each training image group contains both focused and defocused image blocks.
Then train a sparse dictionary D so that the input data can be expressed by the following formula.
Here x_i denotes the weight parameters of the different dictionary atoms blended to fit image block y_i, and k denotes the sparsity level of x_i. This formulation helps the fitting result approximate the real image data more closely, making the image blur estimation result more accurate.
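The training formula itself is not reproduced in the text above. The sketch below therefore assumes the standard sparse-coding objective, approximating each vectorized block y_i as D·x_i with at most k non-zero coefficients, and uses scikit-learn's MiniBatchDictionaryLearning; the patch size, atom count and function name are illustrative assumptions.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def train_blur_dictionary(focused_blocks, defocused_blocks, n_atoms=256, k=4):
    """Sketch of step 2.2: learn a sparse dictionary D over vectorized 8x8 image
    blocks, assuming the objective  min ||y_i - D x_i||^2  s.t.  ||x_i||_0 <= k."""
    Y = np.stack([b.reshape(-1).astype(np.float64)
                  for b in list(focused_blocks) + list(defocused_blocks)])
    Y -= Y.mean(axis=1, keepdims=True)            # remove each block's DC component
    learner = MiniBatchDictionaryLearning(
        n_components=n_atoms,                     # number of dictionary atoms
        transform_algorithm="omp",                # sparse coding by orthogonal matching pursuit
        transform_n_nonzero_coefs=k,              # sparsity level k
        random_state=0,
    )
    learner.fit(Y)                                # learner.components_ holds the atoms (rows)
    return learner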
Step 2.3: Using the focus/defocus sparse dictionary D obtained above, read the image data of the image to be detected input in step 1.1, perform the calculation, and obtain the blur estimation map S_D.
In the blur estimation map S_D, the gray value of each pixel is expressed by the number of dictionary atoms used to compose that pixel. The more atoms used, the sharper the object at that pixel; the fewer atoms used, the blurrier the corresponding object.
The calculation of the blur estimation map is described in the following document and is not detailed further here:
[3] J. Shi, L. Xu, and J. Jia, "Just noticeable defocus blur detection and estimation," in Proc. CVPR, 2015.
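Document [3] gives the full blur-estimation procedure; the sketch below only illustrates the idea stated above, scoring each patch by the number of dictionary atoms an orthogonal-matching-pursuit reconstruction needs, so that a larger count marks sharper content. The per-pixel error tolerance, patch stride and names are assumptions, and the result is coarser than the method of [3].

import numpy as np
from sklearn.linear_model import orthogonal_mp

def blur_estimation_map(gray, atoms, patch=8, err_per_pixel=5.0):
    """Rough sketch of step 2.3: per-patch score from the number of dictionary
    atoms needed to reconstruct each patch of an 8-bit grayscale image."""
    D = atoms.T.astype(np.float64)                       # columns are dictionary atoms
    tol = (err_per_pixel ** 2) * patch * patch           # allowed squared reconstruction error
    H, W = gray.shape
    s_d = np.zeros((H, W), dtype=np.float32)
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            y = gray[i:i + patch, j:j + patch].reshape(-1).astype(np.float64)
            y -= y.mean()
            coef = orthogonal_mp(D, y, tol=tol)
            s_d[i:i + patch, j:j + patch] = np.count_nonzero(coef)   # more atoms => sharper
    return s_d / (s_d.max() + 1e-8)                      # normalized blur estimation map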
Step 2.4: Draw the gray-level histogram H_D corresponding to the blur estimation map S_D, and from the histogram analysis obtain the maximum value I_max and the minimum value I_min of the blur estimation map.
In the gray-level histogram H_D, smaller gray values indicate blurrier image content and larger gray values indicate sharper content. A single object usually consists of a group of pixels with similar degrees of blur, while different objects, being at different distances from the lens, usually have different degrees of blur. The histogram therefore exhibits distinct crests and troughs. A crest represents a set of pixels with similar blur and thus corresponds to objects of the same kind, while a trough represents the difference in blur between different objects and can be used to separate them. In this method, the peak value of the first crest nearest the origin of the gray-level histogram is taken to represent the average blur of the background pixels and is recorded as I_min, while the peak value of the crest farthest from the origin is taken to represent the average pixel blur of the most sharply focused object in the image and is recorded as I_max.
Step 2.5: Using the maximum value I_max and the minimum value I_min of the blur estimation map S_D, calculate the Michelson contrast C (visibility) of the image as C = (I_max - I_min) / (I_max + I_min).
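A sketch of steps 2.4 and 2.5, assuming the blur estimation map has been rescaled to 8-bit gray values. The peak picking follows the description above (first crest near the origin as I_min, farthest crest as I_max), the contrast is the standard Michelson definition, and the histogram-smoothing window is an illustrative choice.

import numpy as np
from scipy.signal import find_peaks

def michelson_from_blur_map(s_d_8bit):
    """Sketch of steps 2.4-2.5: histogram H_D, I_min / I_max from its first and
    last crests, and Michelson contrast C = (I_max - I_min) / (I_max + I_min)."""
    hist, _ = np.histogram(s_d_8bit, bins=256, range=(0, 256))
    smoothed = np.convolve(hist, np.ones(5) / 5.0, mode="same")   # damp spurious crests
    peaks, _ = find_peaks(smoothed)
    if len(peaks) < 2:                                            # degenerate histogram
        i_min, i_max = float(s_d_8bit.min()), float(s_d_8bit.max())
    else:
        i_min, i_max = float(peaks[0]), float(peaks[-1])          # background vs. most-focused object
    c = (i_max - i_min) / (i_max + i_min + 1e-8)
    return c, i_min, i_max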
Step 2.6: The Michelson contrast C calculated in step 2.5 is mapped, using the following formula, into the range [0.2, 0.8] in this embodiment. In a specific implementation, the mapping range can be adjusted according to the characteristics of the image data and preset by those skilled in the art, but it should be kept within [0, 1].
The resulting value λ is defined as the photographer's shooting-intent parameter and describes how the photographer's intent at shooting time is expressed in the image. In this embodiment, in order to keep the mapped Michelson contrast value within [0.2, 0.8], the value of parameter a is -1.38 and the value of parameter b is 2.76; in a specific implementation, those skilled in the art can preset the values of parameters a and b themselves.
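The mapping formula itself is not reproduced in the text above. Purely as an illustrative assumption, the sketch below uses a logistic mapping; it is chosen because, with the embodiment's parameter values a = -1.38 and b = 2.76, it sends C = 0 and C = 1 to approximately 0.2 and 0.8, which matches the preset range stated above.

import numpy as np

def shooting_intent_parameter(contrast, a=-1.38, b=2.76):
    """Step 2.6 (assumed form): map Michelson contrast C to the shooting-intent
    parameter lambda.  A logistic mapping stands in for the patent's formula;
    with a = -1.38 and b = 2.76 it maps [0, 1] to roughly [0.2, 0.8]."""
    return 1.0 / (1.0 + np.exp(-(a + b * contrast)))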
Step 2.7: According to the blur estimation map S_D calculated in step 2.3, obtain the focal region of the image to be detected; in a specific implementation this can be done with a threshold. In this embodiment the threshold is 35, and regions with gray values greater than 35 are identified as the focal region of the image to be detected. The local maximum points of the focal region in the blur estimation map S_D mark the places that are most sharply in focus (the visual fixation points most likely to attract the viewer's visual system) and are recorded as F_i. Gaussian smoothing is applied to each visual fixation point to obtain the visual density map based on fixation point F_i, recorded as D_i. The visual density maps D_i of all fixation points F_i are superposed to obtain the saliency feature map based on blur estimation fusion, recorded as S_T.
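A sketch of step 2.7 on an 8-bit blur estimation map, using the embodiment's focal-region threshold of 35; the local-maximum neighborhood size and the Gaussian sigma are illustrative choices that the text above does not specify.

import numpy as np
from scipy.ndimage import maximum_filter, gaussian_filter

def high_level_saliency_map(s_d_8bit, focus_threshold=35, neighborhood=15, sigma=20.0):
    """Sketch of step 2.7: focal region by thresholding, local maxima as visual
    fixation points F_i, a Gaussian density map D_i per point, and the
    superposition of all D_i as the high-level saliency feature map S_T."""
    s = s_d_8bit.astype(np.float64)
    focal = s > focus_threshold
    fixation = (s == maximum_filter(s, size=neighborhood)) & focal   # points F_i
    impulses = np.zeros_like(s)
    impulses[fixation] = 1.0
    s_t = gaussian_filter(impulses, sigma=sigma)   # sum of Gaussians centered on the F_i
    return s_t / (s_t.max() + 1e-8)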
Step 2.8: Using the photographer's shooting-intent parameter λ obtained in step 2.6 as the fusion weight, guide the fusion of the saliency feature maps obtained under the different mechanisms (low-level visual features and blur features), the fusion formula being:
S = (1 - λ) S_B + λ S_T
where S denotes the image saliency detection result, S_B is the low-level visual saliency feature map, S_T is the high-level visual saliency feature map, and λ is the photographer's shooting-intent parameter.
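The fusion itself is a single weighted sum; the commented pipeline shows how the sketches above (whose names and interfaces are assumptions) would be chained end to end.

def fuse_saliency(s_b, s_t, lam):
    """Step 2.8: S = (1 - lambda) * S_B + lambda * S_T."""
    return (1.0 - lam) * s_b + lam * s_t

# Illustrative pipeline built from the sketches above:
#   s_b     = spectral_residual_saliency(image_bgr)
#   s_d     = blur_estimation_map(gray, learner.components_)
#   s_d8    = (s_d * 255).astype(np.uint8)
#   c, _, _ = michelson_from_blur_map(s_d8)
#   lam     = shooting_intent_parameter(c)
#   s       = fuse_saliency(s_b, high_level_saliency_map(s_d8), lam)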
The specific embodiment described herein merely illustrates the spirit of the present invention. Those skilled in the art to which the present invention belongs can make various modifications or additions to the described embodiment, or substitute it in a similar manner, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (3)

1. An image saliency detection method based on blur estimation fusion, characterized in that it comprises a low-level visual feature acquisition stage and a blur feature application stage;
the low-level visual feature acquisition stage comprises the following steps:
step 1.1, inputting the test image to be detected;
step 1.2, according to the test image input in step 1.1, obtaining the low-level visual features of the image using a bottom-up image saliency detection algorithm and generating the low-level visual saliency feature map of the image to be detected, denoted S_B;
the blur feature application stage comprises the following steps:
step 2.1, inputting multiple groups of training image blocks, each group containing a focused image block P_f and a defocused image block P_d;
step 2.2, vectorizing each input training image group Y = {y_1, ..., y_n}, where y_1, ..., y_n denote the individual image blocks in one training image group and n denotes the number of image blocks in the group;
and training a sparse dictionary D so that the input data can be expressed by the following formula,
where x_i denotes the weight parameters of the different dictionary atoms blended to fit image block y_i, and k denotes the sparsity level of x_i;
step 2.3, using the sparse dictionary D obtained in step 2.2, reading the image data of the image to be detected input in step 1.1, performing the calculation, and obtaining the blur estimation map S_D;
step 2.4, drawing the gray-level histogram H_D corresponding to the blur estimation map S_D obtained in step 2.3, and obtaining from the histogram analysis the maximum value I_max and the minimum value I_min of the blur estimation map;
step 2.5, using the maximum value I_max and the minimum value I_min obtained in step 2.4 for the blur estimation map S_D, calculating the Michelson contrast C of the image as C = (I_max - I_min) / (I_max + I_min);
step 2.6, mapping the Michelson contrast C calculated in step 2.5 into a preset range using the following formula,
where λ is the photographer's shooting-intent parameter, and the values of parameter a and parameter b correspond to the preset range;
step 2.7, according to the blur estimation map S_D obtained in step 2.3, obtaining the focal region of the image to be detected, taking the local maximum points of the focal region as visual fixation points F_i, performing Gaussian smoothing on each visual fixation point to obtain the visual density map D_i based on fixation point F_i, and superposing the visual density maps D_i of all fixation points F_i to obtain the saliency feature map based on blur estimation fusion, denoted the high-level visual saliency feature map S_T;
step 2.8, using the photographer's shooting-intent parameter λ obtained in step 2.6 as the fusion weight, guiding the fusion of the saliency feature maps obtained under the different mechanisms, the fusion formula being
S = (1 - λ) S_B + λ S_T
where S denotes the image saliency detection result, S_B is the low-level visual saliency feature map, and S_T is the high-level visual saliency feature map.
2. The image saliency detection method based on blur estimation fusion according to claim 1, characterized in that: in step 2.6, the preset range is [0.2, 0.8].
3. The image saliency detection method based on blur estimation fusion according to claim 2, characterized in that: the value of parameter a is -1.38 and the value of parameter b is 2.76.
CN201610526947.5A 2016-07-05 2016-07-05 Image significance detection method based on blur estimation fusion Active CN106203428B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610526947.5A CN106203428B (en) 2016-07-05 2016-07-05 Image significance detection method based on blur estimation fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610526947.5A CN106203428B (en) 2016-07-05 2016-07-05 Image significance detection method based on blur estimation fusion

Publications (2)

Publication Number Publication Date
CN106203428A CN106203428A (en) 2016-12-07
CN106203428B true CN106203428B (en) 2019-04-26

Family

ID=57466428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610526947.5A Active CN106203428B (en) 2016-07-05 2016-07-05 Image significance detection method based on blur estimation fusion

Country Status (1)

Country Link
CN (1) CN106203428B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110099207B (en) * 2018-01-31 2020-12-01 成都极米科技股份有限公司 Effective image calculation method for overcoming camera instability
CN108805850A (en) * 2018-06-05 2018-11-13 天津师范大学 A kind of frame image interfusion method merging trap based on atom
CN113269808B (en) * 2021-04-30 2022-04-15 武汉大学 Video small target tracking method and device
CN114155208B (en) * 2021-11-15 2022-07-08 中国科学院深圳先进技术研究院 Atrial fibrillation assessment method and device based on deep learning
CN115965844B (en) * 2023-01-04 2023-08-18 哈尔滨工业大学 Multi-focus image fusion method based on visual saliency priori knowledge

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916379A (en) * 2010-09-03 2010-12-15 华中科技大学 Target search and recognition method based on object accumulation visual attention mechanism
CN102063623A (en) * 2010-12-28 2011-05-18 中南大学 Method for extracting image region of interest by combining bottom-up and top-down ways
CN102693426A (en) * 2012-05-21 2012-09-26 清华大学深圳研究生院 Method for detecting image salient regions
CN103996198A (en) * 2014-06-04 2014-08-20 天津工业大学 Method for detecting region of interest in complicated natural environment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120328161A1 (en) * 2011-06-22 2012-12-27 Palenychka Roman Method and multi-scale attention system for spatiotemporal change determination and object detection


Also Published As

Publication number Publication date
CN106203428A (en) 2016-12-07

Similar Documents

Publication Publication Date Title
CN106203428B (en) Image significance detection method based on blur estimation fusion
US10872420B2 (en) Electronic device and method for automatic human segmentation in image
Fischer et al. Rt-gene: Real-time eye gaze estimation in natural environments
CN108537749B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
EP4293574A2 (en) Adjusting a digital representation of a head region
CN108537782B (en) Building image matching and fusing method based on contour extraction
WO2019221013A4 (en) Video stabilization method and apparatus and non-transitory computer-readable medium
CN103914699A (en) Automatic lip gloss image enhancement method based on color space
CN108416754A (en) A kind of more exposure image fusion methods automatically removing ghost
Jiang et al. Photohelper: portrait photographing guidance via deep feature retrieval and fusion
WO2020248395A1 (en) Follow shot method, apparatus and device, and storage medium
CN111127476A (en) Image processing method, device, equipment and storage medium
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN113343878A (en) High-fidelity face privacy protection method and system based on generation countermeasure network
CN114549557A (en) Portrait segmentation network training method, device, equipment and medium
CN108629301A (en) A kind of human motion recognition method based on moving boundaries dense sampling and movement gradient histogram
CN114372932A (en) Image processing method and computer program product
CN111915735B (en) Depth optimization method for three-dimensional structure outline in video
CN112070181A (en) Image stream-based cooperative detection method and device and storage medium
CN117058183A (en) Image processing method and device based on double cameras, electronic equipment and storage medium
CN111738964A (en) Image data enhancement method based on modeling
CN116797504A (en) Image fusion method, electronic device and storage medium
CN113239867B (en) Mask area self-adaptive enhancement-based illumination change face recognition method
CN114372931A (en) Target object blurring method and device, storage medium and electronic equipment
CN112634331A (en) Optical flow prediction method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant