CN109215031A - Weighted guided filtering depth-of-field rendering method based on image saliency extraction - Google Patents

Weighted guided filtering depth-of-field rendering method based on image saliency extraction

Info

Publication number
CN109215031A
Authority
CN
China
Prior art keywords: image, region, guided filtering, foreground, weighting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710531901.7A
Other languages
Chinese (zh)
Inventor
葛水英 (Ge Shuiying)
杨真 (Yang Zhen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Science Library Chinese Academy Of Sciences
Original Assignee
National Science Library Chinese Academy Of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2017-07-03
Publication date: 2019-01-15
Application filed by National Science Library, Chinese Academy of Sciences
Priority to CN201710531901.7A
Publication of CN109215031A

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/11 Region-based segmentation (under G06T 7/00 Image analysis › G06T 7/10 Segmentation; Edge detection)
    • G06T 5/00 Image enhancement or restoration
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation (under G06T 7/00 Image analysis › G06T 7/10 Segmentation; Edge detection)
    • G06T 2207/10004 Still image; Photographic image (under G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/10 Image acquisition modality)
    • G06T 2207/20192 Edge enhancement; Edge preservation (under G06T 2207/00 Indexing scheme › G06T 2207/20 Special algorithmic details › G06T 2207/20172 Image enhancement details)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a weighted guided filtering depth-of-field rendering method based on image saliency extraction, the method comprising step 1: inputting an original image, computing saliency values from global contrast using the RC algorithm, and obtaining the foreground part of the image; step 2: refining the foreground edges with guided filtering; step 3: blurring the rest of the image with a weighted guided filtering algorithm.

Description

Weighted guided filtering depth-of-field rendering method based on image saliency extraction
Technical field
Embodiments of the present invention relate to the field of computer image processing technology, and in particular to image saliency extraction and image filtering methods.
Background art
Depth of field is an important optical characteristic of the human eye and of camera lenses. Objects within the depth-of-field region are imaged sharply, while everything outside it is blurred. Depth of field plays a significant role in emphasizing the subject of a photograph. Given a single image with no depth information, estimating and extracting the depth-of-field region is therefore essential. Observation of a large number of pictures shows that the most prominent object in an image is usually the one with the smallest depth value, and is generally the subject of the image. The image foreground can therefore be extracted as the in-focus part, and the rest of the image blurred by filtering as the background. At the same time, if the foreground edges carry much detail, the extracted foreground must also preserve that detail. In view of these problems, the invention proposes a weighted guided filtering depth-of-field rendering method based on saliency extraction: the image foreground is extracted with the RC algorithm, the foreground edges are refined with guided filtering, and the rest of the image is blurred with weighted guided filtering.
Summary of the invention
The main purpose of embodiments of the present invention is to provide a weighted guided filtering depth-of-field rendering method based on image saliency extraction.
To achieve the above goal, according to one aspect of the invention, the following technical scheme is provided:
A weighted guided filtering depth-of-field rendering method based on image saliency extraction. The method at least includes:
Step 1: input an original image, compute saliency values from global contrast using the RC algorithm, and obtain the foreground part of the image;
Step 2: refine the foreground edges by guided-filter feathering;
Step 3: blur the rest of the image using the weighted guided filtering algorithm.
Further, step 1 specifically includes:
The input image is segmented into multiple regions using a graph-based image segmentation method, and a color histogram is then built for each region. A spatial weighting term is introduced to incorporate spatial information, increasing the influence of nearby regions and reducing the influence of distant ones.
For a region r_k, its saliency value is computed by measuring its color contrast against all other regions in the image. The saliency based on spatially weighted region contrast is defined as:

S(r_k) = w_s(r_k) \sum_{r_i \neq r_k} \exp\left(-\frac{D_s(r_k, r_i)}{\sigma_s^2}\right) w(r_i) D_r(r_k, r_i)

where D_s(r_k, r_i) is the spatial distance between regions r_k and r_i, and σ_s controls the strength of the spatial weighting. w(r_i) is the weight of region r_i, determined by the number of pixels in the region. w_s(r_k) = exp(−9 d_k²) is a spatial prior weighting term analogous to a center prior, where d_k is the average distance between region r_k and the image center, with pixel coordinates normalized to [0, 1]. Thus, if region r_k lies close to the image center, w_s(r_k) assigns it a high weight; if it lies far from the center, the weight is low. A larger σ_s reduces the effect of the spatial weighting, so that contrast with more distant regions contributes more to the saliency of the current region. The spatial distance between two regions is defined as the Euclidean distance between their centroids. In the experiments, σ_s² = 0.4 is used, with pixel coordinates normalized to [0, 1].
D_r(r_k, r_i) is the color distance metric between the two regions. The weight w(r_i) emphasizes color contrast with larger regions. The color distance between regions r_1 and r_2 is defined as:

D_r(r_1, r_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} f(c_{1,i}) f(c_{2,j}) D(c_{1,i}, c_{2,j})

where f(c_{k,i}) is the frequency of the i-th color c_{k,i} among all n_k colors of the k-th region r_k, and D(c_{1,i}, c_{2,j}) is the distance between the two colors in color space. Using the frequency with which a color occurs in its region as that color's weight better reflects the difference between that color and the dominant colors.
Foreground objects in the original image tend to have larger saliency values, so they are highlighted in the saliency map. The above process yields the rough outline of the image's depth-of-field region, i.e., the part that is to remain sharp.
Further, step 2 specifically includes:
Let the input image be p, the output image q, and the guide image I. Assume that q and I satisfy a local linear relationship in a window ω_k centered at pixel k:

q_i = a_k I_i + b_k, \quad \forall i \in \omega_k

where a_k and b_k are linear coefficients, constant in the local window ω_k of radius r centered at k. Assuming the image noise is n_i, the output image q and the input image p satisfy

q_i = p_i - n_i

To determine the linear coefficients a_k and b_k while keeping the difference between q and p as small as possible, the filter output is obtained by minimizing, over each window ω_k, the cost function

E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \epsilon a_k^2 \right]

where ε is a regularization parameter that prevents a_k from becoming too large.
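Minimizing this cost is an ordinary linear least-squares problem; for reference, its closed-form solution (standard in He et al.'s guided filter, on which this step builds) is:

```latex
a_k = \frac{\frac{1}{|\omega_k|} \sum_{i \in \omega_k} I_i p_i \;-\; \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon},
\qquad
b_k = \bar{p}_k - a_k \mu_k
```

where μ_k and σ_k² are the mean and variance of I in ω_k, and p̄_k is the mean of p in ω_k; the filter output averages a_k and b_k over all windows covering each pixel.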
To obtain the refined foreground edges, the foreground image from step 1 is used as the input image of the guided filter and the original image as the guide image; performing the guided filtering operation yields an image foreground with refined edges. In the actual edge-refinement operation, r is set to 60 and ε to 10^-6.
Further, step 3 specifically includes:
The non-foreground area of the image is filtered to obtain the blur effect. To reduce the halo artifact that guided filtering produces at texture edges, the regularization parameter of the guided filtering cost function is weighted:

E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \frac{\epsilon}{W_G(k)} a_k^2 \right]

where W_G(i) is the filter edge weight, defined as:

W_G(i) = \frac{1}{N} \sum_{j=1}^{N} \frac{\sigma_I^2(i) + \gamma_\sigma}{\sigma_I^2(j) + \gamma_\sigma}

where I is the guide image, σ_I²(i) is the variance in the 3×3 neighborhood around pixel i, γ_σ is the constant 0.06553, and N is the total number of pixels of the guide image. At an edge of the guide image, a pixel p′ with large neighborhood variance receives a larger edge weight W_G(p′) and hence a smaller effective regularization term, so image edges are better preserved.
Brief description of the drawings
The accompanying drawings form a part of the invention and provide a further understanding of it; the schematic embodiments of the invention and their descriptions serve to explain the invention and do not unduly limit it. Clearly, the drawings described below show only some embodiments, and those skilled in the art can derive other drawings from them without creative effort. In the drawings:
Fig. 1 shows the image depth-of-field rendering process based on weighted guided filtering.
Fig. 2 shows the foreground part of the image obtained with the RC algorithm.
Fig. 3 shows the result of refining the foreground edges by guided-filter feathering.
Fig. 4 shows the depth-of-field effect obtained by the invention.
These drawings and the accompanying text are not intended to limit the scope of the inventive concept in any way, but to illustrate it for those skilled in the art by reference to specific embodiments.
Specific embodiments
The technical problems solved by the embodiments of the invention, the technical schemes adopted, and the technical effects achieved are described clearly and completely below with reference to the drawings and specific embodiments. Clearly, the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments in this application, all other equivalent or obviously modified embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the invention. The embodiments of the invention can be realized in many different ways as defined and covered by the claims.
As shown in Fig. 1, the weighted guided filtering depth-of-field rendering method based on saliency extraction includes three steps.
Step 1: input the original image, compute saliency values from global contrast using the RC algorithm, and obtain the foreground part of the image.
In this step, the input image is segmented into multiple regions using a graph-based image segmentation method, and a color histogram is then built for each region. A spatial weighting term is introduced to incorporate spatial information, increasing the influence of nearby regions and reducing the influence of distant ones.
For a region r_k, its saliency value is computed by measuring its color contrast against all other regions in the image. The saliency based on spatially weighted region contrast is defined as:

S(r_k) = w_s(r_k) \sum_{r_i \neq r_k} \exp\left(-\frac{D_s(r_k, r_i)}{\sigma_s^2}\right) w(r_i) D_r(r_k, r_i)

where D_s(r_k, r_i) is the spatial distance between regions r_k and r_i, and σ_s controls the strength of the spatial weighting. w(r_i) is the weight of region r_i, determined by the number of pixels in the region. w_s(r_k) = exp(−9 d_k²) is a spatial prior weighting term analogous to a center prior, where d_k is the average distance between region r_k and the image center, with pixel coordinates normalized to [0, 1]. Thus, if region r_k lies close to the image center, w_s(r_k) assigns it a high weight; if it lies far from the center, the weight is low. A larger σ_s reduces the effect of the spatial weighting, so that contrast with more distant regions contributes more to the saliency of the current region. The spatial distance between two regions is defined as the Euclidean distance between their centroids. In the experiments, σ_s² = 0.4 is used, with pixel coordinates normalized to [0, 1].
D_r(r_k, r_i) is the color distance metric between the two regions. The weight w(r_i) emphasizes color contrast with larger regions. The color distance between regions r_1 and r_2 is defined as:

D_r(r_1, r_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} f(c_{1,i}) f(c_{2,j}) D(c_{1,i}, c_{2,j})

where f(c_{k,i}) is the frequency of the i-th color c_{k,i} among all n_k colors of the k-th region r_k, and D(c_{1,i}, c_{2,j}) is the distance between the two colors in color space. Using the frequency with which a color occurs in its region as that color's weight better reflects the difference between that color and the dominant colors.
Foreground objects in the original image tend to have larger saliency values, so they are highlighted in the saliency map. The above process yields the rough outline of the image's depth-of-field region, i.e., the part that is to remain sharp.
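For illustration only, a minimal Python sketch of this spatially weighted region contrast. It assumes the graph-based segmentation and per-region color histograms have already been computed; the function and variable names (region_colors, region_freqs, centroids, sizes) are ours rather than the patent's, and the center prior exp(−9 d_k²) follows the published RC algorithm.

```python
import numpy as np

def region_color_distance(colors_a, freqs_a, colors_b, freqs_b):
    """D_r(r_a, r_b): frequency-weighted sum of pairwise color distances."""
    # pairwise Euclidean distances between the two regions' histogram colors
    d = np.linalg.norm(colors_a[:, None, :] - colors_b[None, :, :], axis=2)
    return float(np.sum(freqs_a[:, None] * freqs_b[None, :] * d))

def region_saliency(k, region_colors, region_freqs, centroids, sizes,
                    sigma_s2=0.4):
    """S(r_k): spatially weighted contrast of region k against all others.

    centroids hold region centers with coordinates normalized to [0, 1];
    sizes hold region pixel counts, used as the weights w(r_i).
    """
    d_k = np.linalg.norm(centroids[k] - 0.5)   # distance to the image center
    w_s = np.exp(-9.0 * d_k ** 2)              # center prior (RC algorithm)
    s = 0.0
    for i in range(len(centroids)):
        if i == k:
            continue
        d_s = np.linalg.norm(centroids[k] - centroids[i])  # D_s: centroid distance
        d_r = region_color_distance(region_colors[k], region_freqs[k],
                                    region_colors[i], region_freqs[i])
        s += np.exp(-d_s / sigma_s2) * sizes[i] * d_r
    return w_s * s
```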
The result is shown in Fig. 2. It can be seen that step 1 extracts the subject roughly, but details such as hair are not captured well. To obtain a more accurate result, the foreground edges therefore need to be refined.
Step 2: refine the foreground edges by guided-filter feathering.
In this step, let the input image be p, the output image q, and the guide image I. Assume that q and I satisfy a local linear relationship in a window ω_k centered at pixel k:

q_i = a_k I_i + b_k, \quad \forall i \in \omega_k

where a_k and b_k are linear coefficients, constant in the local window ω_k of radius r centered at k. Assuming the image noise is n_i, the output image q and the input image p satisfy

q_i = p_i - n_i

To determine the linear coefficients a_k and b_k while keeping the difference between q and p as small as possible, the filter output is obtained by minimizing, over each window ω_k, the cost function

E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \epsilon a_k^2 \right]

where ε is a regularization parameter that prevents a_k from becoming too large.
To obtain the refined foreground edges, the foreground image from step 1 is used as the input image of the guided filter and the original image as the guide image; performing the guided filtering operation yields an image foreground with refined edges. In the actual edge-refinement operation, r is set to 60 and ε to 10^-8.
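A minimal single-channel sketch of the guided filter used in this step, with box filters implementing the window means; this is the standard He et al. formulation rather than code from the patent. The input p is the step-1 foreground mask and the guide I the original (grayscale, [0, 1]) image; r = 60 follows the text, while eps is left as a parameter because the two passages give 10^-6 and 10^-8.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=60, eps=1e-8):
    """Edge-preserving filter: q = mean(a) * I + mean(b), a and b fit per window."""
    size = 2 * r + 1                           # window of radius r
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    a = cov_Ip / (var_I + eps)                 # closed-form linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)
```

Feathering the binary mask this way yields a soft matte whose values follow the guide's fine structure (hair strands and similar detail).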
Fig. 3 shows the edge effect after guided-filter feathering: the image edges are refined, details such as hair are retained, and the foreground is accurately extracted.
Step 3: blur the rest of the image using the weighted guided filtering algorithm.
The non-foreground area of the image is filtered to obtain the blur effect. To reduce the halo artifact that guided filtering produces at texture edges, the regularization parameter of the guided filtering cost function is weighted:

E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \frac{\epsilon}{W_G(k)} a_k^2 \right]

where W_G(i) is the filter edge weight, defined as:

W_G(i) = \frac{1}{N} \sum_{j=1}^{N} \frac{\sigma_I^2(i) + \gamma_\sigma}{\sigma_I^2(j) + \gamma_\sigma}

where I is the guide image, σ_I²(i) is the variance in the 3×3 neighborhood around pixel i, γ_σ is the constant 0.06553, and N is the total number of pixels of the guide image. At an edge of the guide image, a pixel p′ with large neighborhood variance receives a larger edge weight W_G(p′) and hence a smaller effective regularization term, so image edges are better preserved.
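A sketch of this step under the same assumptions as above: edge_weight implements W_G with the 3×3 local variance of the guide and gamma_sigma = 0.06553 from the text, and the only change from the plain guided filter is the regularizer eps / W_G, which is weaker at high-variance (edge) pixels. The eps default here is illustrative, as the patent does not state one for this step.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def edge_weight(I, gamma_sigma=0.06553):
    """W_G(i) = (1/N) * sum_j (var3(i) + g) / (var3(j) + g)."""
    var3 = uniform_filter(I * I, 3) - uniform_filter(I, 3) ** 2
    var3 = np.maximum(var3, 0.0)               # guard against float round-off
    return (var3 + gamma_sigma) * np.mean(1.0 / (var3 + gamma_sigma))

def weighted_guided_filter(I, p, r=60, eps=1e-2):
    """Guided filter with the regularizer divided by the edge weight W_G."""
    size = 2 * r + 1
    W = edge_weight(I)
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    a = cov_Ip / (var_I + eps / W)             # weighted regularization
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)
```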
The original image is used as the input image and the foreground image obtained in step 2 (Fig. 3b) as the guide image; the foreground part is kept unchanged, and weighted guided filtering is applied to the rest of the image. The resulting depth-of-field rendering is shown in Fig. 4b. It can be seen that foreground details such as hair are retained while the background is appropriately blurred; depth-of-field rendering is realized and the effect of the invention is achieved.
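Putting the three steps together, a sketch of the whole rendering pipeline, reusing the guided_filter and weighted_guided_filter sketches above; the saliency threshold and the background blur radius are illustrative choices, not values from the patent.

```python
import numpy as np

def render_depth_of_field(image, saliency, threshold=0.5):
    """image, saliency: 2-D float arrays in [0, 1] (single channel for brevity)."""
    mask = (saliency > threshold).astype(np.float64)     # step 1: coarse foreground
    alpha = np.clip(guided_filter(image, mask, r=60, eps=1e-8), 0.0, 1.0)  # step 2
    fg = alpha * image                                   # feathered foreground
    # Step 3: per the text, the step-2 foreground image serves as the guide
    # and the original image as the filter input when blurring the remainder.
    blurred = weighted_guided_filter(fg, image, r=16, eps=1e-2)
    return alpha * image + (1.0 - alpha) * blurred       # sharp fg over blurred bg
```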
Each step of the invention can be implemented on a general-purpose computing device. For example, the steps may be concentrated on a single computing device, such as a personal computer, server computer, handheld or portable device, tablet device, or multiprocessor device, or distributed over a network of multiple computing devices. They may be executed in an order different from that shown or described here, or fabricated as individual integrated-circuit modules, or multiple modules or steps among them may be fabricated as a single integrated-circuit module. The invention is therefore not limited to any specific combination of hardware and software.
The method provided by the invention can be implemented using programmable logic devices, or as computer program software or program modules (including routines, programs, objects, components, or data structures that perform particular tasks or implement particular abstract data types). For example, an embodiment of the invention can be a computer program product which, when run, causes a computer to execute the demonstrated method. The computer program product includes a computer-readable storage medium containing computer program logic or code portions for realizing the method. The computer-readable storage medium can be a built-in medium installed in the computer or a removable medium detachable from the computer (for example, a hot-pluggable storage device). The built-in medium includes but is not limited to rewritable non-volatile memory, such as RAM, ROM, flash memory, and hard disks. The removable medium includes but is not limited to optical storage media (such as CD-ROM and DVD), magneto-optical storage media (such as MO), magnetic storage media (such as tape or removable hard disks), media with built-in rewritable non-volatile memory (such as memory cards), and media with built-in ROM (such as ROM cartridges).
The invention is not limited to the above embodiments; without departing from its substance, any variation, improvement, or replacement conceivable by a person of ordinary skill in the art falls within the scope of the invention.
Although the above detailed description has shown, described, and pointed out the basic novel features of the invention as applied to various embodiments, it will be understood that those skilled in the art may make various omissions, substitutions, and changes in the form and details of the system without departing from the intent of the invention.

Claims (4)

1. A weighted guided filtering depth-of-field rendering method based on image saliency extraction, characterized in that the method at least includes:
Step 1: inputting an original image, computing saliency values from global contrast using the RC algorithm, and obtaining the foreground part of the image;
Step 2: refining the foreground edges by guided-filter feathering;
Step 3: blurring the rest of the image using the weighted guided filtering algorithm.
2. The method according to claim 1, characterized in that step 1 specifically includes:
inputting the original image and segmenting the input image into multiple regions, then building a color histogram for each region; a spatial weighting term is introduced to incorporate spatial information; for a region r_k, the saliency based on spatially weighted region contrast is defined as:

S(r_k) = w_s(r_k) \sum_{r_i \neq r_k} \exp\left(-\frac{D_s(r_k, r_i)}{\sigma_s^2}\right) w(r_i) D_r(r_k, r_i)

where D_s(r_k, r_i) is the spatial distance between regions r_k and r_i, σ_s controls the strength of the spatial weighting, w(r_i) is the weight of region r_i, w_s(r_k) is a spatial prior weighting term analogous to a center prior, and D_r(r_k, r_i) is the color distance metric between the two regions; the color distance between regions r_1 and r_2 is defined as:

D_r(r_1, r_2) = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} f(c_{1,i}) f(c_{2,j}) D(c_{1,i}, c_{2,j})

where f(c_{k,i}) is the frequency of the i-th color c_{k,i} among all n_k colors of the k-th region r_k; foreground objects in the original image tend to have larger saliency values and are therefore highlighted in the saliency map; the above process yields the rough outline of the image foreground region that is to remain sharp.
3. The method according to claim 1, characterized in that step 2 specifically includes:
to obtain the refined foreground edges, using the foreground image obtained in step 1 as the input image of the guided filter and the original image as the guide image, and performing the guided filtering operation to obtain the depth-of-field part of the image with refined edges; letting the input image be p, the output image q, and the guide image I, the filter output is obtained by minimizing, over each window ω_k, the cost function

E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \epsilon a_k^2 \right]

where a_k and b_k are linear coefficients, constant in the local window ω_k of radius r centered at k, and ε is a regularization parameter that prevents a_k from becoming too large.
4. The method according to claim 1, characterized in that step 3 specifically includes:
applying weighted guided filtering to the non-foreground area of the image to obtain the blur effect; the weighted guided filtering cost function becomes

E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \frac{\epsilon}{W_G(k)} a_k^2 \right]

where W_G(i) is the filter edge weight, defined as:

W_G(i) = \frac{1}{N} \sum_{j=1}^{N} \frac{\sigma_I^2(i) + \gamma_\sigma}{\sigma_I^2(j) + \gamma_\sigma}

where I is the guide image, σ_I²(i) is the variance in the 3×3 neighborhood around pixel i, γ_σ is the constant 0.06553, and N is the total number of pixels of the guide image.
CN201710531901.7A 2017-07-03 2017-07-03 Weighted guided filtering depth-of-field rendering method based on image saliency extraction Pending CN109215031A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710531901.7A 2017-07-03 2017-07-03 CN109215031A (en) Weighted guided filtering depth-of-field rendering method based on image saliency extraction

Publications (1)

Publication Number Publication Date
CN109215031A 2019-01-15

Family

ID=64992682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710531901.7A Pending CN109215031A (en) Weighted guided filtering depth-of-field rendering method based on image saliency extraction

Country Status (1)

Country Link
CN (1) CN109215031A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109961437A (en) * 2019-04-04 2019-07-02 江南大学 A kind of conspicuousness fabric defect detection method under the mode based on machine teaching
CN110163852A (en) * 2019-05-13 2019-08-23 北京科技大学 The real-time sideslip detection method of conveyer belt based on lightweight convolutional neural networks
CN110163852B (en) * 2019-05-13 2021-10-15 北京科技大学 Conveying belt real-time deviation detection method based on lightweight convolutional neural network
CN111275642A (en) * 2020-01-16 2020-06-12 西安交通大学 Low-illumination image enhancement method based on significant foreground content
CN111275642B (en) * 2020-01-16 2022-05-20 西安交通大学 Low-illumination image enhancement method based on significant foreground content

Similar Documents

Publication Publication Date Title
CN109348089B (en) Night scene image processing method and device, electronic equipment and storage medium
CN110163640B (en) Method for implanting advertisement in video and computer equipment
US20210104086A1 (en) 3d facial capture and modification using image and temporal tracking neural networks
US8692830B2 (en) Automatic avatar creation
US11004179B2 (en) Image blurring methods and apparatuses, storage media, and electronic devices
US10970821B2 (en) Image blurring methods and apparatuses, storage media, and electronic devices
CN107925755A (en) The method and system of plane surface detection is carried out for image procossing
CN109215031A (en) The weighting guiding filtering depth of field rendering method extracted based on saliency
CN108109161B (en) Video data real-time processing method and device based on self-adaptive threshold segmentation
US20210390770A1 (en) Object reconstruction with texture parsing
CN108111911B (en) Video data real-time processing method and device based on self-adaptive tracking frame segmentation
US11978216B2 (en) Patch-based image matting using deep learning
CN113688907B (en) A model training and video processing method, which comprises the following steps, apparatus, device, and storage medium
CN110047122A (en) Render method, apparatus, electronic equipment and the computer readable storage medium of image
CN113313730B (en) Method and device for acquiring image foreground area in live scene
Lee et al. Correction of the overexposed region in digital color image
CN112016576A (en) Method for training neural network, image processing method, apparatus, device, and medium
CN109903265A (en) A kind of image change area detecting threshold value setting method, system and its electronic device
CN107766803B (en) Video character decorating method and device based on scene segmentation and computing equipment
CN115210773A (en) Method for detecting object in real time by using object real-time detection model and optimization method
CN115967823A (en) Video cover generation method and device, electronic equipment and readable medium
CN106604057A (en) Video processing method and apparatus thereof
CN106651796A (en) Image or video processing method and system
CN112087661A (en) Video collection generation method, device, equipment and storage medium
CN105528772B (en) A kind of image interfusion method based on directiveness filtering

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190115