CN104463853A - Shadow detection and removal algorithm based on image segmentation - Google Patents


Info

Publication number
CN104463853A
CN104463853A · CN201410675195.XA
Authority
CN
China
Prior art keywords
shadow
region
algorithm
value
label
Prior art date
Legal status
Pending
Application number
CN201410675195.XA
Other languages
Chinese (zh)
Inventor
刘颜丽 (Liu Yanli)
陈卓 (Chen Zhuo)
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University
Priority to CN201410675195.XA
Publication of CN104463853A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/007: Dynamic range modification
    • G06T5/008: Local, e.g. shadow enhancement
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation

Abstract

The invention discloses a shadow detection and removal algorithm based on image segmentation, and relates to the technical field of image processing. The algorithm mainly addresses how to judge whether a region contains shadow, whether an edge is a shadow edge, and how to remove the corresponding shadows. The algorithm comprises the following steps: first, using texture and brightness features and combining local and global information, the probability that each pixel is a shadow edge is estimated; the image is segmented with the watershed algorithm applied to the contour signal gPb; the shadow and non-shadow regions of the image are separated with an edge-based region merging algorithm, and at the same time each is divided into several sub-regions; SVM classifiers are then trained to recognize shadows; next, the shadow detection energy equation is solved with a graph cut algorithm to obtain the final shadow detection result; finally, according to the shadow detection result, shadow labels are computed with an image matting algorithm, the shadow region is lit up using these labels, and the illumination of the shadow region is restored.

Description

Shadow detection and removal algorithm based on image segmentation
Technical field
The present invention relates to a shadow detection and removal algorithm based on image segmentation, used for image shadow detection and penumbra removal, and belongs to the technical field of image processing.
Background technology
Shadow detection has long been one of the research hotspots in the field of image processing. The presence of shadows increases the difficulty of object recognition, video segmentation and similar algorithms, so detecting shadows and removing them can significantly improve the performance of many image processing algorithms.
Many shadow detection methods are based on illumination models or colour models, such as the shadow detection method in the HSI colour space, which uses the ratio of H to I to detect shadows. That method suits aerial images, drawings and images with obvious shadows, but its detection performance under complex scenes is barely satisfactory. Because it is hard to judge well whether a pixel belongs to a shadow or merely to a darker non-shadow surface, automatic shadow detection remains a major challenge.
To date, a large number of shadow detection algorithms exist in the field of image processing. According to the detection means, the existing algorithms can be divided into edge-based shadow detection and learning-based shadow detection.
Edge-based shadow detection first requires a colour image of the scene and an illumination-invariant gray-scale map, the latter obtained with a calibrated camera; shadows are detected by comparing the edges of the gray-scale map with the edges of the original image (G. D. Finlayson, 2006). This method handles high-quality images very well but only moderately well on ordinary images. Learning-based shadow detection takes the complexity of shadow edges into account and introduces data-driven methods from an empirical angle, for example using a conditional random field (CRF) over illumination intensity, gradients and similar information to judge whether a region, or an edge, is shadow. Although CRF-based methods can detect shadows well under certain conditions, their training process is tedious and highly dependent on the training set. To this day, under the influence of factors such as illumination, object reflectance and shadow geometry, shadow detection remains a very challenging problem.
Summary of the invention
In view of the above prior art, the object of the present invention is to provide a shadow detection and removal algorithm based on image segmentation that can effectively separate shadow and non-shadow regions and can better detect and remove both self shadows and cast shadows in a scene.
To solve the above technical problems, the present invention adopts the following technical scheme:
A shadow detection and removal algorithm based on image segmentation, characterized by comprising the following steps:
S100: using texture and brightness features, combined with local and global information, estimate the probability that each pixel is a shadow edge;
S200: segment the image with the watershed algorithm applied to the contour signal gPb;
S300: separate the shadow and non-shadow regions of the image with an edge-based region merging algorithm, dividing the shadow and non-shadow regions each into several sub-regions; then, using single-region information and matched-region information, train an SVM classifier for each to recognize shadows; subsequently, solve the shadow detection energy equation with a graph cut algorithm to obtain the final shadow detection result;
S400: according to the shadow detection result, compute the shadow labels with an image matting algorithm, use the labels to light up the shadow region, and recover its illumination so that it matches the surrounding non-shadow regions.
Described step S100 mainly consists of the following steps:
S101: build the shadow edge detector Pb from the oriented gradient information G(x, y, θ); the detector computes G(x, y, θ) for two channels, brightness and textons, and is constructed by drawing a circle of radius r centred at each pixel (x, y) in the image and splitting the circle into two half-discs with the diameter oriented at angle θ;
S102: obtain the oriented gradient G from the χ² distance between the histograms of the two half-discs:

$$\chi^2(g,h)=\frac{1}{2}\sum_i\frac{(g(i)-h(i))^2}{g(i)+h(i)} \qquad (1)$$

where g and h denote the histograms of the two half-discs and i indexes the image value range;
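As a minimal sketch of Eq. (1), assuming NumPy; the function name and the small epsilon guard against empty bins are illustrative, not part of the patent:

```python
import numpy as np

def chi2_distance(g, h, eps=1e-10):
    """Half chi-squared distance between two histograms (Eq. 1).

    g, h: 1-D arrays of histogram counts over the image value range i.
    eps guards the division when both bins are empty (an assumption).
    """
    g = np.asarray(g, dtype=float)
    h = np.asarray(h, dtype=float)
    return 0.5 * np.sum((g - h) ** 2 / (g + h + eps))
```

Identical histograms give a distance of 0; disjoint histograms give the largest values.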
S103: combine the computed detectors Pb into the local cues signal mPb, aggregated over scales and channels:

$$mPb(x,y,\theta)=\sum_s\sum_i \alpha_{i,s}\,G_{i,\sigma(i,s)}(x,y,\theta) \qquad (2)$$

where s indexes the circle radius (scale), i indexes the feature channel (brightness, textons), $G_{i,\sigma(i,s)}(x,y,\theta)$ compares the two half-discs of radius σ(i, s) centred at (x, y) at orientation θ, and $\alpha_{i,s}$ is the weight of each gradient signal;
At each point, the maximum of the gradient signal over orientations is taken as the mPb value of that point; this value represents the final local cue:

$$mPb(x,y)=\max_\theta\{mPb(x,y,\theta)\} \qquad (3)$$
Step S104: build a sparse matrix from mPb and obtain the required global information from its eigenvalues and eigenvectors; the sparse matrix links every pair of pixels within a radius of r = 5 pixels:

$$W_{ij}=\exp\!\left(-\frac{\max_{p\in \overline{ij}}\{mPb(p)\}}{\rho}\right) \qquad (4)$$

where $W_{ij}$ represents the affinity between pixels i and j along the segment $\overline{ij}$, and ρ = 0.1; then define $D_{ii}=\sum_j W_{ij}$ and solve

$$(D-W)v=\lambda Dv \qquad (5)$$

for the eigenvectors $\{v_0,v_1,\ldots,v_n\}$ and eigenvalues $0=\lambda_0\le\lambda_1\le\cdots\le\lambda_n$.
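The generalized eigenproblem of Eq. (5) can be sketched with SciPy's symmetric solver; a small dense matrix stands in for the patent's sparse W over an r = 5 pixel radius, and the function name is illustrative:

```python
import numpy as np
from scipy.linalg import eigh

def spectral_vectors(W, n_vec=4):
    """Solve (D - W) v = lambda * D v for the smallest eigenvalues (Eq. 5).

    W: symmetric affinity matrix built from mPb (Eq. 4); dense here only
    for illustration. Returns eigenvalues in ascending order.
    """
    W = np.asarray(W, dtype=float)
    D = np.diag(W.sum(axis=1))   # D_ii = sum_j W_ij
    L = D - W                    # graph Laplacian
    vals, vecs = eigh(L, D)      # generalized symmetric eigenproblem
    return vals[:n_vec], vecs[:, :n_vec]
```

The smallest eigenvalue is 0 with a constant eigenvector, matching $0=\lambda_0$ in the text.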
Step S105: treat each eigenvector from step S104 as an image; obtain oriented information by Gaussian filtering along different directions, then linearly combine the oriented signals of the different eigenvectors into the global cue sPb:

$$sPb(x,y,\theta)=\sum_{k=1}^{n}\frac{1}{\sqrt{\lambda_k}}\cdot\nabla_\theta v_k(x,y) \qquad (6)$$
Step S106: organically combine the local cue mPb and the global cue sPb into the image contour signal gPb:

$$gPb(x,y,\theta)=\sum_s\sum_i \beta_{i,s}\,G_{i,\sigma(i,s)}(x,y,\theta)+\gamma\cdot sPb(x,y,\theta) \qquad (7)$$

where $\beta_{i,s}$ and γ weight the mPb and sPb terms respectively.
Described step S200 mainly consists of the following steps:
Step S201: estimate the probability that each point (x, y) in the image is a contour point at orientation θ, and take the maximum of the contour response at that point:

$$E(x,y)=\max_\theta E(x,y,\theta) \qquad (8)$$
Step S202: using mathematical morphology, grow a "catchment basin" from each regional minimum of E(x, y); each catchment basin corresponds to a region, denoted P₀; the boundary where two catchment basins meet is a "watershed line", denoted K₀;
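Steps S201 and S202 can be sketched as follows, assuming SciPy. The (n_theta, H, W) layout and the extraction of basin seeds via a minimum filter are assumptions; a full watershed flood would then grow one region from each seed:

```python
import numpy as np
from scipy import ndimage

def basin_markers(E_theta):
    """Collapse an oriented contour signal E(x, y, theta) to E(x, y)
    (Eq. 8) and mark regional minima as 'catchment basin' seeds.

    E_theta: array of shape (n_theta, H, W) -- a hypothetical layout.
    Returns the collapsed map, a labelled seed image, and the seed count.
    """
    E = E_theta.max(axis=0)                          # Eq. 8: max over theta
    local_min = ndimage.minimum_filter(E, size=3) == E
    markers, n_basins = ndimage.label(local_min)     # one label per basin seed
    return E, markers, n_basins
```

On a ridge-shaped E with two valleys, two basin seeds are found, matching the two regions a watershed flood would produce.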
Step S203: the watershed algorithm over-segments, marking as watershed lines places that are not true edges; a region merging algorithm is used to resolve this over-segmentation;
The region merging algorithm is as follows: define an undirected graph G = (P₀, K₀, W(K₀), E(P₀)), where W(K₀) is the weight of each watershed line, obtained by dividing the total energy of its points by the number of its points, and E(P₀) is the energy of each catchment basin, initially zero for every basin; W(K₀) measures the dissimilarity between the two adjacent regions. The watershed lines are pushed into a queue in ascending order of weight.
The region merging algorithm comprises the following steps:
First, find the arc with minimum weight, $C^*=\arg\min W(C)$.
Suppose that R₁ and R₂ are separated by the arc C*, and let R = R₁ ∪ R₂. If min{E(R₁), E(R₂)} ≠ 0, judge whether to merge; the merge condition is

$$W(C^*)\le\tau\cdot\min\{E(R_1),E(R_2)\} \qquad (9)$$

or

$$\min\{E(R_1),E(R_2)\}=0 \qquad (10)$$

where τ is a constant;
Second, if the two regions merge, update E(R), P₀ and K₀; the update method for E(R), P₀ and K₀ is:

$$E(R)=\max\{E(R_1),E(R_2),W(C^*)\} \qquad (11)$$

$$P_0\leftarrow P_0\setminus\{R_1,R_2\}\cup R \qquad (12)$$

$$K_0\leftarrow K_0\setminus\{C^*\} \qquad (13)$$
Further, the merge condition is adjusted through τ, which controls the size of the final regions: the larger τ is, the larger the merged regions become.
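The merge loop of Eqs. (9)-(13) might look like this toy sketch, using a union-find over abstract region ids; the data layout and names are assumptions, not the patent's implementation:

```python
import heapq

def merge_regions(edges, energy, tau=1.2):
    """Greedy watershed-region merging per Eqs. (9)-(13).

    edges:  list of (weight W, region_a, region_b) watershed arcs.
    energy: dict region id -> E(R); initial basins start at 0 per the patent.
    tau:    merge-threshold constant; larger tau -> larger merged regions.
    """
    parent = {r: r for r in energy}

    def find(r):                       # union-find root with path halving
        while parent[r] != r:
            parent[r] = parent[parent[r]]
            r = parent[r]
        return r

    heap = list(edges)
    heapq.heapify(heap)                # ascending arc weights
    while heap:
        w, a, b = heapq.heappop(heap)  # cheapest arc: C* = argmin W(C)
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        lo = min(energy[ra], energy[rb])
        if lo == 0 or w <= tau * lo:   # Eq. (10) or Eq. (9)
            parent[rb] = ra            # P0 <- P0 \ {R1, R2} U R  (Eq. 12)
            energy[ra] = max(energy[ra], energy[rb], w)  # Eq. (11)
    return {find(r) for r in energy}
```

With three toy regions, a cheap arc merges while an expensive one is rejected by Eq. (9).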
In described step S300, the final shadow detection result is obtained by solving the following energy equation, which is solved with a graph cut algorithm:
$$\hat{y}=\arg\min_y \sum_k \mathrm{cost}_k^{unary}(y_k)+\alpha_2\sum_{\{i,j\}\in E_{same}} c_{ij}^{same}\,\mathbf{1}(y_i\ne y_j) \qquad (15)$$

where

$$\mathrm{cost}_k^{unary}(y_k)=-c_k^{shadow}y_k-\alpha_1\sum_{\{i=k,j\}\in E_{diff}}c_{ij}^{diff}y_k+\alpha_1\sum_{\{i,j=k\}\in E_{diff}}c_{ij}^{diff}y_k \qquad (16)$$
Here $c_{ij}^{same}$ is the region-matching classifier's estimate that two regions share the same illumination, $c_{ij}^{diff}$ is its estimate that their illumination differs, and $c_k^{shadow}$ is the single-region classifier's estimate of whether a single region is shadow; {i, j} ∈ E_same denotes two regions under the same illumination and {i, j} ∈ E_diff two regions under different illumination; y ∈ {−1, 1}ⁿ, and a value of 1 indicates that the region is a shadow region.
Described step S400 is further explained as follows: the shadow labels are computed with an image matting algorithm, which assumes that an image pixel I_i can be mixed from a foreground F_i and a background B_i as follows:
$$I_i=k_iF_i+(1-k_i)B_i \qquad (18)$$

$$\phantom{I_i}=k_i(L_dR_i+L_eR_i)+(1-k_i)L_eR_i \qquad (19)$$

where L_d is the direct light, L_e is the ambient light, k_i is the shadow label, and R_i is the reflectance of point i.
The foreground is labelled non-shadow and the background is labelled shadow. The values k_i are obtained by minimising the energy equation (20) below; k_i is the label of point i, and k is the vector formed by the k_i, so obtaining k yields every k_i:

$$E(k)=k^TLk+\lambda(k-\hat{k})^TD(k-\hat{k}) \qquad (20)$$

Here $k^T$ is the transpose of k and λ is a very large value, determined in practice. $\hat{k}$ is the vector formed from the labels y_k obtained from the energy equation of step S300, with every nonzero element set to 1. Note that this formula is solved in order to compute k, a vector in which each element represents the label of one pixel; however, the value range of each element becomes [0, 1], i.e. labels that were exactly 0 or 1 may become values in the interval. A label that is still 0 or 1 marks the shadow or non-shadow region, while a label strictly between 0 and 1 marks the penumbra.
where L is the matting Laplacian and D(i, i) is a diagonal matrix, with D(i, i) = 1 indicating that pixel i lies on the edge of the shadow region and D(i, i) = 0 covering all other points;
The labels thus obtained are used to light up the shadow region and recover its illumination:

According to the shadow model, when a pixel is lit,

$$I_i^{shadow\text{-}free}=(L_d+L_e)R_i=\frac{L_d+L_e}{k_iL_d+L_e}\,(k_iL_d+L_e)R_i=\frac{r+1}{k_ir+1}\,I_i \qquad (21\text{--}23)$$

where r = L_d / L_e is the ratio of the direct light L_d to the ambient light L_e, and I_i is the original value of pixel i; hence computing r suffices to remove the shadow.
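Eq. (23) amounts to a one-line per-pixel rescaling; a minimal sketch (the function name is illustrative):

```python
def relight(I, k, r):
    """Light up a shadowed pixel per Eq. (23): I_free = (r+1)/(k*r+1) * I.

    I: observed pixel value; k: shadow label in [0, 1] (0 = full shadow,
    1 = lit); r = L_d / L_e, the direct-to-ambient light ratio.
    """
    return (r + 1.0) / (k * r + 1.0) * I
```

A lit pixel (k = 1) is unchanged; a fully shadowed pixel (k = 0) is scaled by r + 1, consistent with the model $I_i=(k_iL_d+L_e)R_i$.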
It is known that

$$I_i=(k_iL_d+L_e)R_i \qquad (24)$$

$$I_j=(k_jL_d+L_e)R_j \qquad (25)$$

If the two pixels have the same reflectance, i.e. R_i = R_j, then

$$r=\frac{I_j-I_i}{k_jI_i-k_iI_j}$$
Formulas (24) and (25) serve to compute r: the idea is to find two points i and j of identical material (equal reflectance, R_i = R_j) under different illumination (L_d and L_e themselves are the same, but the shadow labels k differ), and obtain r from the relation between them; I_j denotes the pixel value of such a point j.
Compared with the prior art, the present invention has the following beneficial effects:
First, segmenting the image and detecting shadows with the same shadow features are organically combined, so the shadow and non-shadow regions of the image are separated well, the segmentation becomes more accurate, and shadow regions are detected more precisely;
Second, the shadow contours are detected more completely, and the proposed detection algorithm resists interference from complex backgrounds better; the shadow-free image not only preserves the texture well inside the former shadow region but is also smooth at the shadow edge, showing that the algorithm achieves a good illumination transition in the penumbra;
Third, a region merging algorithm driven by contour information is proposed, whose merging parameter conveniently controls the size of the merged regions; these advantages jointly improve the shadow removal result.
Description of the drawings
Fig. 1 is the flow chart of the algorithm of the present invention;
(a) is the original image; (b) is the segmented image; (c) shows the shadow and non-shadow regions detected by the single-region classifier, with white marking the shadow region; (d) shows the shadow and non-shadow regions detected by the region-matching classifier; (e) is the gray-scale map of the shadow labels k_i computed by the matting algorithm; (f) is the image after shadow removal.
Fig. 2 is the flow chart of the region merging algorithm;
Fig. 3 is the region matching graph; in the figure, red lines link matched regions under the same illumination and blue lines link matched regions under different illumination.
Embodiment
The invention is further described below with reference to the drawings and specific embodiments.
According to the characteristics of shadows, the present invention fuses shadow edge detection with shadow region detection to segment the shadow and non-shadow regions. The algorithm has two stages: shadow edge detection and shadow segmentation.
Embodiment: shadow edge detection
First, the shadow edge detector Pb is built from the oriented gradient information G(x, y, θ). It is constructed by drawing a circle of radius r centred at each pixel (x, y) in the image and splitting the circle into two half-discs with the diameter oriented at angle θ. The oriented gradient G is then obtained from the χ² distance between the histograms of the two half-discs:

$$\chi^2(g,h)=\frac{1}{2}\sum_i\frac{(g(i)-h(i))^2}{g(i)+h(i)} \qquad (1)$$

where g and h denote the histograms of the two half-discs and i indexes the image value range.
The detector computes the gradient information G(x, y, θ) for two channels, brightness and textons. The textons are computed by filtering with even and odd Gaussian filters in 16 orientations and clustering the responses into 32 groups with the k-means algorithm.
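A rough texton sketch under these settings, assuming SciPy. Building the oriented filters by rotating the image and applying first/second Gaussian derivatives along a fixed axis is an assumption, not the patent's exact filter bank:

```python
import numpy as np
from scipy import ndimage
from scipy.cluster.vq import kmeans2

def texton_map(gray, n_orient=16, n_textons=32, sigma=2.0):
    """Texton computation sketch: even/odd oriented filter responses at
    n_orient angles, clustered into n_textons groups with k-means.
    """
    H, W = gray.shape
    responses = []
    for t in range(n_orient):
        deg = 180.0 * t / n_orient
        # rotate the image instead of the kernel, filter along one axis,
        # then rotate the response back
        rot = ndimage.rotate(gray, deg, reshape=False, mode='nearest')
        odd = ndimage.gaussian_filter(rot, sigma, order=(0, 1))   # odd (edge)
        even = ndimage.gaussian_filter(rot, sigma, order=(0, 2))  # even (bar)
        for resp in (odd, even):
            back = ndimage.rotate(resp, -deg, reshape=False, mode='nearest')
            responses.append(back)
    feats = np.stack(responses, axis=-1).reshape(H * W, -1)
    _, labels = kmeans2(feats, n_textons, minit='++')  # cluster responses
    return labels.reshape(H, W)
```

Each pixel receives a texton id in [0, n_textons); texton histograms then feed the χ² comparison of Eq. (1).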
Secondly, the computed detectors Pb are combined into the local cues signal mPb, aggregated over scales and channels:

$$mPb(x,y,\theta)=\sum_s\sum_i \alpha_{i,s}\,G_{i,\sigma(i,s)}(x,y,\theta) \qquad (2)$$

where s indexes the circle radius (scale), i indexes the feature channel (brightness, textons), $G_{i,\sigma(i,s)}(x,y,\theta)$ compares the two half-discs of radius σ(i, s) centred at (x, y) at orientation θ, and $\alpha_{i,s}$ is the weight of each gradient signal.
At each point, the maximum of the gradient signal over orientations is taken as the mPb value of that point; this value represents the final local cue:

$$mPb(x,y)=\max_\theta\{mPb(x,y,\theta)\} \qquad (3)$$
Then, to extract contour information better, global cues are obtained with the standard normalized-cuts method. A sparse matrix is built from mPb, and the required global information is obtained from its eigenvalues and eigenvectors. The sparse matrix links every pair of pixels within a radius of r = 5 pixels:

$$W_{ij}=\exp\!\left(-\frac{\max_{p\in \overline{ij}}\{mPb(p)\}}{\rho}\right) \qquad (4)$$

where $W_{ij}$ represents the affinity between pixels i and j along the segment $\overline{ij}$, and ρ = 0.1; then define $D_{ii}=\sum_j W_{ij}$ and solve

$$(D-W)v=\lambda Dv \qquad (5)$$

for the eigenvectors $\{v_0,v_1,\ldots,v_n\}$ and eigenvalues $0=\lambda_0\le\lambda_1\le\cdots\le\lambda_n$.
Although normalized cuts alone cannot segment the image well, it reflects the image contours well. Each eigenvector is therefore treated as an image: oriented information is obtained by Gaussian filtering along different directions, and the oriented signals of the different eigenvectors are linearly combined:

$$sPb(x,y,\theta)=\sum_{k=1}^{n}\frac{1}{\sqrt{\lambda_k}}\cdot\nabla_\theta v_k(x,y) \qquad (6)$$
mPb and sPb represent different edge information. mPb covers all edge information and is the local cue; sPb highlights only the most salient edges and is the global cue. Precisely for this reason, organically combining the two yields an effective contour signal for the image, which the present invention defines as gPb:

$$gPb(x,y,\theta)=\sum_s\sum_i \beta_{i,s}\,G_{i,\sigma(i,s)}(x,y,\theta)+\gamma\cdot sPb(x,y,\theta) \qquad (7)$$

where $\beta_{i,s}$ and γ weight the mPb and sPb terms respectively.
Embodiment: image segmentation
gPb represents contours effectively, but these contours are not completely closed and therefore cannot segment the image by themselves. To segment the image better, the watershed algorithm is applied here to the contour signal gPb.
To describe the watershed algorithm, first consider any contour detector E(x, y, θ), which estimates the probability that a point (x, y) in the image is a contour point at orientation θ; the larger the value, the more likely the point lies on a contour. For each point, take the maximum of the contour response over orientations:

$$E(x,y)=\max_\theta E(x,y,\theta) \qquad (8)$$
Then, using mathematical morphology, a "catchment basin" is grown from each regional minimum of E(x, y). Each catchment basin corresponds to a region, denoted P₀; the boundary where two catchment basins meet is a "watershed line", denoted K₀. However, the watershed algorithm over-segments, marking as watershed lines places that are not true edges. To address this problem, a new region merging algorithm is used, premised on the observation that the watershed algorithm inevitably over-segments, i.e. the initial catchment basins need to be merged.
The region merging algorithm is described as follows: define an undirected graph G = (P₀, K₀, W(K₀), E(P₀)), where W(K₀) is the weight of each watershed line, obtained by dividing the total energy of its points by the number of its points, and E(P₀) is the energy of each catchment basin, initially zero for every basin. Note that in the graph every arc separates exactly two regions, so W(K₀) measures the dissimilarity between the two adjacent regions. The watershed lines are pushed into a queue in ascending order of weight; the flow of the algorithm is shown in Fig. 2.
Suppose that R₁ and R₂ are separated by the arc C*, and let R = R₁ ∪ R₂. The merge condition is: if min{E(R₁), E(R₂)} ≠ 0, merge when

$$W(C^*)\le\tau\cdot\min\{E(R_1),E(R_2)\} \qquad (9)$$

or when

$$\min\{E(R_1),E(R_2)\}=0 \qquad (10)$$

where τ is a constant. The merge condition is adjusted through τ, which controls the size of the final regions: the larger τ is, the larger the merged regions become.
The update method for E(R), P₀ and K₀ is:

$$E(R)=\max\{E(R_1),E(R_2),W(C^*)\} \qquad (11)$$

$$P_0\leftarrow P_0\setminus\{R_1,R_2\}\cup R \qquad (12)$$

$$K_0\leftarrow K_0\setminus\{C^*\} \qquad (13)$$
Embodiment: shadow detection
Single-region recognition: an SVM classifier is trained to judge the probability that a single region is shadow. The shadow areas of the training set are marked by hand; the classifier uses the brightness and texton features of shadows, and its output is the probability that a region is shadow.
Region-matching recognition: to decide whether a region is a shadow region, it is compared with regions of similar texture. If their brightness is also similar, the two are under the same illumination intensity; if their brightness differs, the darker region is judged to be shadow.
The present invention trains the classifier on four features to judge shadow regions:

1. the χ² distance of the brightness and texton histograms;

2. the average RGB ratios

When the two matched regions are the same material, the three channel values of the non-shadow region are higher. The ratios are computed as:

$$\rho_R=\frac{R_{avg1}}{R_{avg2}},\qquad \rho_G=\frac{G_{avg1}}{G_{avg2}},\qquad \rho_B=\frac{B_{avg1}}{B_{avg2}} \qquad (14)$$

where $R_{avg1}$ denotes the mean of the R channel of the first region, and likewise for the other symbols.
3. colour alignment

The shadow/non-shadow pair of an identical material stays aligned in colour in RGB space. This feature is obtained by computing ρ_R/ρ_G and ρ_G/ρ_B.

4. normalised region distance

Since whether the matched regions are of the same material is not necessarily independent of whether they are adjacent, this distance is also included as a training feature; it is the normalised Euclidean distance between the matched regions.
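The four matching features might be computed as follows; a sketch assuming NumPy, in which the texton χ² term is replaced by a luminance-histogram χ² so the snippet stays self-contained (region arrays, bin count and names are assumptions):

```python
import numpy as np

def match_features(reg1, reg2, center1, center2, diag):
    """Four region-matching features for the pairing SVM (sketch).

    reg1, reg2: (N, 3) arrays of RGB pixels in [0, 1] from two regions;
    center1/2: region centroids; diag: image diagonal for normalisation.
    """
    # 1. chi^2 distance between luminance histograms (texton term omitted)
    lum1, lum2 = reg1.mean(axis=1), reg2.mean(axis=1)
    g, _ = np.histogram(lum1, bins=16, range=(0, 1), density=True)
    h, _ = np.histogram(lum2, bins=16, range=(0, 1), density=True)
    chi2 = 0.5 * np.sum((g - h) ** 2 / (g + h + 1e-10))
    # 2. average channel ratios (Eq. 14): higher when region 1 is the lit one
    rho = reg1.mean(axis=0) / (reg2.mean(axis=0) + 1e-10)  # rho_R, rho_G, rho_B
    # 3. colour alignment: same-material pairs stay aligned in RGB
    align = np.array([rho[0] / (rho[1] + 1e-10), rho[1] / (rho[2] + 1e-10)])
    # 4. normalised centroid distance
    dist = np.linalg.norm(np.asarray(center1, float) - np.asarray(center2, float)) / diag
    return chi2, rho, align, dist
```

For a uniformly darker copy of the same material, rho is constant across channels and align stays near 1, which is the alignment cue the text describes.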
The following energy equation is built and solved with a graph cut algorithm to obtain the final shadow detection result:
$$\hat{y}=\arg\min_y \sum_k \mathrm{cost}_k^{unary}(y_k)+\alpha_2\sum_{\{i,j\}\in E_{same}} c_{ij}^{same}\,\mathbf{1}(y_i\ne y_j) \qquad (15)$$

where

$$\mathrm{cost}_k^{unary}(y_k)=-c_k^{shadow}y_k-\alpha_1\sum_{\{i=k,j\}\in E_{diff}}c_{ij}^{diff}y_k+\alpha_1\sum_{\{i,j=k\}\in E_{diff}}c_{ij}^{diff}y_k \qquad (16)$$

Here $c_{ij}^{same}$ is the region-matching classifier's estimate that two regions share the same illumination, $c_{ij}^{diff}$ is its estimate that their illumination differs, and $c_k^{shadow}$ is the single-region classifier's estimate of whether a single region is shadow; {i, j} ∈ E_same denotes two regions under the same illumination and {i, j} ∈ E_diff two regions under different illumination; y ∈ {−1, 1}ⁿ, and a value of 1 indicates that the region is a shadow region.
Embodiment: shadow removal
To remove shadows effectively, a suitable shadow model must be established. In the shadow model, the illumination is determined jointly by direct light and ambient light:

$$I_i=(k_i\cdot L_d+L_e)R_i \qquad (17)$$

For pixel i, k_i denotes the shadow label. When k_i = 0, the pixel lies in the shadow region; when k_i = 1, it lies in the non-shadow region; when 0 < k_i < 1, it lies in the penumbra.
Although shadow detection assigns each pixel a binary label (0 or 1), in real scenes the shadow edge transitions gradually from non-shadow to shadow. To remove shadow edges better, the matting algorithm is used here to compute the penumbra:

$$I_i=k_iF_i+(1-k_i)B_i \qquad (18)$$

$$\phantom{I_i}=k_i(L_dR_i+L_eR_i)+(1-k_i)L_eR_i \qquad (19)$$
The foreground is labelled non-shadow and the background shadow. The values k_i are obtained by minimising the following energy equation:

$$E(k)=k^TLk+\lambda(k-\hat{k})^TD(k-\hat{k}) \qquad (20)$$

where L is the matting Laplacian and D(i, i) is a diagonal matrix; in the experiments, D(i, i) = 1 indicates that pixel i lies on the edge of the shadow region and D(i, i) = 0 covers all other points.
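Setting the gradient of Eq. (20) to zero gives the linear system $(L+\lambda D)k=\lambda D\hat{k}$; a dense sketch (a real matting Laplacian is large and sparse, so this is illustrative only):

```python
import numpy as np

def solve_shadow_labels(L, D_diag, k_hat, lam=100.0):
    """Minimise Eq. (20): E(k) = k^T L k + lam (k - k_hat)^T D (k - k_hat).

    L: (n, n) matting-style Laplacian; D_diag: diagonal of D (1 on
    constrained pixels, 0 elsewhere); k_hat: binary labels from step S300.
    """
    D_diag = np.asarray(D_diag, dtype=float)
    A = L + lam * np.diag(D_diag)          # normal equations matrix
    b = lam * D_diag * np.asarray(k_hat, dtype=float)
    return np.linalg.solve(A, b)           # fractional labels in ~[0, 1]
```

On a 3-pixel chain with the two end pixels constrained to 0 and 1, the middle pixel lands near 0.5, which is exactly the penumbra behaviour the text describes.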
According to the shadow model, when a pixel is lit,

$$I_i^{shadow\text{-}free}=(L_d+L_e)R_i=\frac{L_d+L_e}{k_iL_d+L_e}\,(k_iL_d+L_e)R_i=\frac{r+1}{k_ir+1}\,I_i \qquad (21\text{--}23)$$

where r = L_d / L_e is the ratio of the direct light L_d to the ambient light L_e, and I_i is the original value of pixel i. Hence computing r suffices to remove the shadow.
It is known that

$$I_i=(k_iL_d+L_e)R_i \qquad (24)$$

$$I_j=(k_jL_d+L_e)R_j \qquad (25)$$

If the two pixels have the same reflectance, i.e. R_i = R_j, then

$$r=\frac{I_j-I_i}{k_jI_i-k_iI_j}$$
The most similar pixels on the two sides of an adjacent shadow/non-shadow boundary are chosen, and the r value is then computed from formulas (24) and (25).
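The r estimate from Eqs. (24)-(25) reduces to a single expression; a minimal sketch (the function name is illustrative):

```python
def estimate_r(I_i, I_j, k_i, k_j):
    """Estimate r = L_d / L_e from two same-material pixels i, j with
    different shadow labels, per Eqs. (24)-(25):
    r = (I_j - I_i) / (k_j * I_i - k_i * I_j).
    """
    return (I_j - I_i) / (k_j * I_i - k_i * I_j)
```

Sanity check against the model: with L_d = 3, L_e = 1 and R = 2, a shadowed pixel (k = 0) reads 2 and a lit pixel (k = 1) reads 8, and the estimate recovers r = 3.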
The above is only a detailed description of the present invention in connection with specific schemes, and the concrete implementation of the invention cannot be considered limited to these descriptions. For those of ordinary skill in the technical field of the invention, simple deductions and substitutions made without departing from the concept of the present invention shall all be regarded as falling within the protection scope of the present invention.

Claims (7)

1. A shadow detection and removal algorithm based on image segmentation, characterized by comprising the following steps:

S100: using texture and brightness features, combined with local and global information, estimate the probability that each pixel is a shadow edge;

S200: segment the image with the watershed algorithm applied to the contour signal gPb;

S300: separate the shadow and non-shadow regions of the image with an edge-based region merging algorithm, dividing the shadow and non-shadow regions each into several sub-regions; then, using single-region information and matched-region information, train an SVM classifier for each to recognize shadows; subsequently, solve the shadow detection energy equation with a graph cut algorithm to obtain the final shadow detection result;

S400: according to the shadow detection result, compute the shadow labels with an image matting algorithm, use the labels to light up the shadow region, and recover its illumination so that it matches the surrounding non-shadow regions.
2. The shadow detection and removal algorithm based on image segmentation according to claim 1, characterized in that described step S100 mainly consists of the following steps:

S101: build the shadow edge detector Pb from the oriented gradient information G(x, y, θ); the detector computes G(x, y, θ) for two channels, brightness and textons, and is constructed by drawing a circle of radius r centred at each pixel (x, y) in the image and splitting the circle into two half-discs with the diameter oriented at angle θ;

S102: obtain the oriented gradient G from the χ² distance between the histograms of the two half-discs:

$$\chi^2(g,h)=\frac{1}{2}\sum_i\frac{(g(i)-h(i))^2}{g(i)+h(i)} \qquad (1)$$

where g and h denote the histograms of the two half-discs and i indexes the image value range;

S103: combine the computed detectors Pb into the local cues signal mPb, aggregated over scales and channels:

$$mPb(x,y,\theta)=\sum_s\sum_i \alpha_{i,s}\,G_{i,\sigma(i,s)}(x,y,\theta) \qquad (2)$$

where s indexes the circle radius (scale), i indexes the feature channel (brightness, textons), $G_{i,\sigma(i,s)}(x,y,\theta)$ compares the two half-discs of radius σ(i, s) centred at (x, y) at orientation θ, and $\alpha_{i,s}$ is the weight of each gradient signal;

at each point, the maximum of the gradient signal over orientations is taken as the mPb value of that point; this value represents the final local cue:

$$mPb(x,y)=\max_\theta\{mPb(x,y,\theta)\} \qquad (3)$$

step S104: build a sparse matrix from mPb and obtain the required global information from its eigenvalues and eigenvectors; the sparse matrix links every pair of pixels within a radius of r = 5 pixels:

$$W_{ij}=\exp\!\left(-\frac{\max_{p\in \overline{ij}}\{mPb(p)\}}{\rho}\right) \qquad (4)$$

where $W_{ij}$ represents the affinity between pixels i and j, and ρ = 0.1; then define $D_{ii}=\sum_j W_{ij}$ and solve

$$(D-W)v=\lambda Dv \qquad (5)$$

for the eigenvectors $\{v_0,v_1,\ldots,v_n\}$ and eigenvalues $0=\lambda_0\le\lambda_1\le\cdots\le\lambda_n$;

step S105: treat each eigenvector from step S104 as an image, obtain oriented information by Gaussian filtering along different directions, then linearly combine the oriented signals of the different eigenvectors into the global cue sPb:

$$sPb(x,y,\theta)=\sum_{k=1}^{n}\frac{1}{\sqrt{\lambda_k}}\cdot\nabla_\theta v_k(x,y) \qquad (6)$$

step S106: organically combine the local cue mPb and the global cue sPb into the image contour signal gPb:

$$gPb(x,y,\theta)=\sum_s\sum_i \beta_{i,s}\,G_{i,\sigma(i,s)}(x,y,\theta)+\gamma\cdot sPb(x,y,\theta) \qquad (7)$$

where $\beta_{i,s}$ and γ weight the mPb and sPb terms respectively.
3. The shadow detection and removal algorithm based on image segmentation according to claim 1, characterized in that said step S200 mainly comprises the following steps:
Step S201: estimate the probability that any point (x, y) of the image lies on a contour at orientation θ, and take the maximum contour response at that point:

E(x, y) = max_θ {gPb(x, y, θ)}   (8);
Step S202: using mathematical morphology, compute for each regional minimum of E(x, y) its "catchment basin"; each catchment basin corresponds to a region, and the set of basins is denoted P_0; the boundary where two catchment basins meet is a "watershed arc", and the set of arcs is denoted K_0;
Step S203: the watershed algorithm produces an over-segmentation problem, marking as watershed arcs places that should not be boundaries; a region merging algorithm is used to resolve the over-segmentation;

The region merging algorithm is as follows: define an undirected graph G = (P_0, K_0, W(K_0), E(P_0)), where W(K_0) gives the weight of each watershed arc, obtained as the total energy of the points on the arc divided by the number of points on the arc, and E(P_0) gives the energy value of each catchment basin, initialised to zero for every basin; W(K_0) describes the dissimilarity between two adjacent regions; the arcs are stored in a queue in ascending order of weight.
4. The shadow detection and removal algorithm based on image segmentation according to claim 3, characterized in that the region merging algorithm comprises the following steps:

One, find the arc of minimum weight, C* = argmin_C W(C);

Suppose R_1 and R_2 are the regions separated by the arc C*, and let R = R_1 ∪ R_2; if min{E(R_1), E(R_2)} ≠ 0, decide whether to merge; the merging condition is:

W(C*) ≤ τ·min{E(R_1), E(R_2)}   (9)

or min{E(R_1), E(R_2)} = 0   (10),

where τ is a constant;

Two, if the two regions merge, update E(R), P_0 and K_0; the update rules for E(R), P_0 and K_0 are:

E(R) = max{E(R_1), E(R_2), W(C*)}   (11)

P_0 ← P_0\{R_1, R_2} ∪ {R}   (12)

K_0 ← K_0\{C*}   (13).
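The merging loop of claim 4 can be sketched with a weight-ordered queue and a union-find over basins. The data layout (dicts and tuples) is an assumption of this sketch; the merging conditions and updates follow (9)–(13).

```python
import heapq

def merge_regions(basin_energy, arcs, tau=2.0):
    # Greedy merging per claim 4: pop arcs in ascending weight; merge the
    # basins R1, R2 separated by an arc when condition (9) or (10) holds;
    # on a merge apply the updates (11)-(13).
    # basin_energy: dict basin -> E (zero for a fresh catchment basin)
    # arcs: list of (weight, basin_a, basin_b)
    parent = {r: r for r in basin_energy}
    energy = dict(basin_energy)

    def find(r):  # union-find representative, with path halving
        while parent[r] != r:
            parent[r] = parent[parent[r]]
            r = parent[r]
        return r

    heap = list(arcs)
    heapq.heapify(heap)  # ascending weight, as in the queue of step S203
    while heap:
        w, a, b = heapq.heappop(heap)
        ra, rb = find(a), find(b)
        if ra == rb:
            continue  # arc became interior after earlier merges
        m = min(energy[ra], energy[rb])
        if m == 0 or w <= tau * m:                    # conditions (10) / (9)
            parent[rb] = ra                           # (12): R = R1 ∪ R2
            energy[ra] = max(energy[ra], energy[rb], w)  # (11)
        # a rejected arc remains a boundary; discarding merged arcs
        # from the heap realises (13)
    return {find(r): energy[find(r)] for r in basin_energy}
```

Because every fresh basin has zero energy, condition (10) guarantees each basin merges at least once; only composite regions with nonzero energy can resist a strong arc.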
5. The shadow detection and removal algorithm based on image segmentation according to claim 1, characterized in that in said step S300 the final shadow detection result is obtained by solving the following energy equation, which is minimised by a graph cut algorithm:

min_y  Σ_i −c_single(i)·y_i + Σ_{{i,j}∈E_same} c_same(i, j)·1[y_i ≠ y_j] + Σ_{{i,j}∈E_diff} c_diff(i, j)·1[y_i = y_j]   (14)

where c_same(i, j) is the pairwise region classifier's estimate that the two regions share the same illumination, c_diff(i, j) is its estimate that the two regions are under different illumination, and c_single(i) is the single-region classifier's estimate of whether a single region is in shadow; {i, j} ∈ E_same denotes a pair of regions under the same illumination and {i, j} ∈ E_diff a pair under different illumination; y ∈ {−1, 1}^n, and y_i = 1 indicates that region i is a shadow region.
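The claim solves this energy with a graph cut; for a handful of regions the same minimum can be found by exhaustive search, which makes the role of each term visible. The function and coefficient names below are illustrative stand-ins, not the patent's API; the classifiers of step S300 would supply the coefficients.

```python
import itertools

def best_labeling(n, c_single, same_pairs, diff_pairs):
    # Brute-force stand-in for the graph cut of step S300:
    #   E(y) = sum_i -c_single[i]*y_i
    #        + sum_{(i,j,c) in same_pairs} c * [y_i != y_j]
    #        + sum_{(i,j,c) in diff_pairs} c * [y_i == y_j]
    # with y_i in {-1, +1}; y_i = +1 marks region i as shadow.
    best, best_e = None, float("inf")
    for y in itertools.product((-1, 1), repeat=n):
        e = -sum(c_single[i] * y[i] for i in range(n))
        e += sum(c for (i, j, c) in same_pairs if y[i] != y[j])
        e += sum(c for (i, j, c) in diff_pairs if y[i] == y[j])
        if e < best_e:
            best, best_e = y, e
    return best, best_e
```

In the usage below, region 0 has strong shadow evidence, region 2 strong non-shadow evidence, a same-illumination pair ties regions 0 and 1 together, and a different-illumination pair pushes regions 1 and 2 apart, so region 1 is pulled into the shadow label despite weak unary evidence against it.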
6. The shadow detection and removal algorithm based on image segmentation according to claim 1, characterized in that in said step S400 the shadow label is computed with an image matting algorithm; this algorithm regards each pixel I_i as a mix of foreground F_i and background B_i, with the formula:

I_i = k_i·F_i + (1 − k_i)·B_i   (18)

    = k_i·(L_d·R_i + L_e·R_i) + (1 − k_i)·L_e·R_i   (19)

where L_d is the direct light, L_e the ambient light, k_i the shadow label, and R_i the reflectance at point i.
The foreground is labelled non-shadow and the background is labelled shadow; the value of k_i is obtained by minimising the following energy equation, where k_i denotes the label of point i and k the vector formed by the k_i, so that solving (20) for k yields every k_i:

k = argmin_k  kᵀ·L·k + λ·(k − ŷ)ᵀ·D·(k − ŷ)   (20)

where kᵀ is the transpose of k and λ is a very large value, fixed in practice; ŷ is the vector formed by the labels y_k obtained from the energy equation of step S300, with every nonzero element set to 1. Note that this formula is solved for the vector k, each element of which represents the label of one pixel; however, the range of each element becomes [0, 1], that is, the labels of some pixels change from the hard values 0 or 1 to a value inside that interval. A label that is still 0 or 1 marks a non-shadow or shadow region, while a label strictly between 0 and 1 marks the penumbra region. L is the matting Laplacian, and D is a diagonal matrix in which D(i, i) = 1 indicates that pixel i lies on the boundary of the shadow region and D(i, i) = 0 covers all other points.
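Minimising the energy of (20) amounts to a linear solve: setting the gradient of kᵀLk + λ(k − ŷ)ᵀD(k − ŷ) to zero gives (L + λD)k = λDŷ. A sketch follows; in the usage a simple chain graph Laplacian stands in for the true matting Laplacian, which is an assumption made for illustration.

```python
import numpy as np

def solve_shadow_matte(L, d_diag, y_hat, lam=100.0):
    # Minimising  k^T L k + lam*(k - y_hat)^T D (k - y_hat)  from (20):
    # the gradient vanishes where (L + lam*D) k = lam * D y_hat.
    # L: matting Laplacian (n x n); d_diag: diagonal of D (1 on the
    # constrained pixels, 0 elsewhere); y_hat: hard labels from step S300.
    D = np.diag(np.asarray(d_diag, dtype=float))
    k = np.linalg.solve(L + lam * D, lam * D @ np.asarray(y_hat, dtype=float))
    return np.clip(k, 0.0, 1.0)  # soft labels; penumbra falls inside (0, 1)
```

With the endpoints of a 5-pixel chain pinned to labels 1 and 0, the recovered k decreases smoothly between them — exactly the relaxation of hard labels into penumbra values that the claim describes.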
7. The shadow detection and removal algorithm based on image segmentation according to claim 6, characterized in that the obtained labels are used to light up the shadow region and recover its illumination:

According to the shadow model, if a pixel were fully lit, its value would be

I_i^lit = ((r + 1)/(k_i·r + 1))·I_i   (21)

where r = L_d/L_e is the ratio of the direct light L_d to the ambient light L_e and I_i denotes the original value of pixel i, so once r is computed the shadow can be removed;

It is known that

I_i = (k_i·L_d + L_e)·R_i   (24)

I_j = (k_j·L_d + L_e)·R_j   (25)

If the reflectances of the two pixels are identical, i.e. R_i = R_j, then

I_i/I_j = (k_i·r + 1)/(k_j·r + 1),

from which r is solved.
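The recovery step above reduces to two one-line formulas: solve the equal-reflectance ratio I_i/I_j = (k_i·r + 1)/(k_j·r + 1) for r, then rescale each shadowed pixel. Following equations (24)–(25), k_i = 1 corresponds to a fully lit pixel and k_i = 0 to full shadow; the function names are illustrative.

```python
def estimate_ratio(I_i, k_i, I_j, k_j):
    # From I_i / I_j = (k_i*r + 1) / (k_j*r + 1) for two pixels of equal
    # reflectance, solve for r = L_d / L_e.
    return (I_j - I_i) / (I_i * k_j - I_j * k_i)

def relight(I, k, r):
    # A pixel with shadow label k sees (k*L_d + L_e)*R; fully lit it would
    # see (L_d + L_e)*R, hence the scale factor (r + 1) / (k*r + 1).
    return (r + 1.0) / (k * r + 1.0) * I
```

For a synthetic scene with L_d = 4, L_e = 1 and reflectance 0.5, a fully shadowed pixel (k = 0) has value 0.5 and a fully lit one (k = 1) has value 2.5; the recovered r is 4 and relighting the shadowed pixel reproduces the lit value.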
CN201410675195.XA 2014-11-22 2014-11-22 Shadow detection and removal algorithm based on image segmentation Pending CN104463853A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410675195.XA CN104463853A (en) 2014-11-22 2014-11-22 Shadow detection and removal algorithm based on image segmentation


Publications (1)

Publication Number Publication Date
CN104463853A true CN104463853A (en) 2015-03-25

Family

ID=52909835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410675195.XA Pending CN104463853A (en) 2014-11-22 2014-11-22 Shadow detection and removal algorithm based on image segmentation

Country Status (1)

Country Link
CN (1) CN104463853A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447501A (en) * 2015-11-02 2016-03-30 北京旷视科技有限公司 Clustering-based license image shadow detection method and apparatus
CN106023113A (en) * 2016-05-27 2016-10-12 哈尔滨工业大学 Satellite high-score image shadow region recovery method based on non-local sparse
CN106295570A (en) * 2016-08-11 2017-01-04 北京暴风魔镜科技有限公司 Block filtration system and method alternately
CN106408648A (en) * 2015-08-03 2017-02-15 青岛海信医疗设备股份有限公司 Medical-tissue slice-image three-dimensional reconstruction method and equipment thereof
CN106488180A (en) * 2015-08-31 2017-03-08 上海悠络客电子科技有限公司 Video shadow detection method
CN107507146A (en) * 2017-08-28 2017-12-22 武汉大学 A kind of natural image soft shadowses removing method
CN109493406A (en) * 2018-11-02 2019-03-19 四川大学 Quick percentage is close to soft shadows method for drafting
CN110427950A (en) * 2019-08-01 2019-11-08 重庆师范大学 Purple soil soil image shadow detection method
US10504282B2 (en) 2018-03-21 2019-12-10 Zoox, Inc. Generating maps without shadows using geometry
CN110765875A (en) * 2019-09-20 2020-02-07 浙江大华技术股份有限公司 Method, equipment and device for detecting boundary of traffic target
WO2020119618A1 (en) * 2018-12-12 2020-06-18 中国科学院深圳先进技术研究院 Image inpainting test method employing texture feature fusion
US10699477B2 (en) * 2018-03-21 2020-06-30 Zoox, Inc. Generating maps without shadows
CN111526263A (en) * 2019-02-01 2020-08-11 光宝电子(广州)有限公司 Image processing method, device and computer system
WO2021147408A1 (en) * 2020-01-22 2021-07-29 腾讯科技(深圳)有限公司 Pixel point identification method and apparatus, illumination rendering method and apparatus, electronic device and storage medium
CN113256666A (en) * 2021-07-19 2021-08-13 广州中望龙腾软件股份有限公司 Contour line generation method, system, equipment and storage medium based on model shadow
CN114742836A (en) * 2022-06-13 2022-07-12 浙江太美医疗科技股份有限公司 Medical image processing method and device and computer equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295013A (en) * 2013-05-13 2013-09-11 天津大学 Pared area based single-image shadow detection method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pablo Arbelaez et al.: "Contour Detection and Hierarchical Image Segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence *
Ruiqi Guo et al.: "Single-Image Shadow Detection and Removal using Paired Regions", Computer Vision and Pattern Recognition (CVPR) *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C53 Correction of patent for invention or patent application
CB03 Change of inventor or designer information

Inventor after: Liu Yanli; Chen Zhuo

Inventor before: Liu Yanli; Chen Zhuo

RJ01 Rejection of invention patent application after publication

Application publication date: 20150325