CN104463853A  Shadow detection and removal algorithm based on image segmentation  Google Patents
 Publication number: CN104463853A (application CN201410675195.XA)
 Authority: CN (China)
 Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T5/00—Image enhancement or restoration
 G06T5/007—Dynamic range modification
 G06T5/008—Local, e.g. shadow enhancement

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T7/00—Image analysis
 G06T7/10—Segmentation; Edge detection
 G06T7/11—Region-based segmentation
Abstract
The invention discloses a shadow detection and removal algorithm based on image segmentation, in the technical field of image processing. The algorithm addresses two problems: judging whether a region or an edge belongs to a shadow, and removing the detected shadows. The steps are as follows: first, using texture and brightness features and combining local and global information, the probability that each pixel lies on a shadow edge is estimated; the image is segmented with a watershed algorithm driven by the contour information gPb; an edge-based region merging algorithm separates the shadow regions from the non-shadow regions and divides each into several sub-regions; SVM classifiers are then trained to recognize shadows; next, a shadow-detection energy equation is solved with a graph-cut algorithm to obtain the final detection result; finally, from the detection result, shadow labels are computed with an image matting algorithm, and the labels are used to relight the shadow regions and restore their illumination.
Description
Technical field
The present invention relates to a shadow detection and removal algorithm based on image segmentation, for detecting image shadows and removing penumbra regions, and belongs to the technical field of image processing.
Background technology
Shadow detection has long been a research focus in image processing. Shadows increase the difficulty of object recognition, video segmentation and related algorithms, so detecting shadows and removing them can markedly improve the performance of many image-processing methods.
Many shadow detection methods are built on illumination models or color models. An example is detection in the HSI color space, which uses the ratio of the H and I channels to detect shadows; that method suits aerial photographs and images with pronounced shadows, but is barely satisfactory in complex scenes. Because it is hard to decide whether a pixel belongs to a shadow or merely to a darker non-shadow surface, automatic shadow detection remains a major challenge.
To date, the image-processing field has produced a large number of shadow detection algorithms. By detection means, existing algorithms divide into edge-based shadow detection and learning-based shadow detection.
Edge-based algorithms first require a color image of the scene and an illumination-invariant grayscale image, the latter obtained from a calibrated camera; shadows are detected by comparing the edges of the grayscale image with those of the original (G. D. Finlayson, 2006). This works very well on high-quality images but only moderately on ordinary ones. Learning-based algorithms account for the complexity of shadow edges and, from an empirical angle, bring data-driven methods to the problem: a conditional random field (CRF) over cues such as illumination intensity and gradients can judge whether a region, or an edge, is a shadow. Although CRF methods detect shadows well under suitable conditions, their training is tedious and strongly dependent on the training set. Even today, under the influence of illumination, surface reflectance and shadow geometry, shadow detection remains a very challenging problem.
Summary of the invention
In view of the above prior art, the object of the invention is to provide a shadow detection and removal algorithm based on image segmentation that can effectively separate shadow regions from non-shadow regions, and can detect and remove both self shadows and cast shadows in a scene.
To solve the above technical problems, the present invention adopts the following technical scheme:
A shadow detection and removal algorithm based on image segmentation, characterized by comprising the following steps:
S100: using texture and brightness features, and combining local and global information, estimate the probability that each pixel lies on a shadow edge;
S200: segment the image with a watershed algorithm driven by the contour information gPb;
S300: use an edge-based region merging algorithm to separate the shadow regions from the non-shadow regions, dividing each into several sub-regions; then, using single-region information and matched-region information, train an SVM classifier for each to recognize shadows; finally, solve the shadow-detection energy equation with a graph-cut algorithm to obtain the final detection result;
S400: from the detection result, compute shadow labels with an image matting algorithm, use the labels to relight the shadow regions, and restore their illumination so that it matches the surrounding non-shadow regions.
Said step S100 mainly consists of the following steps:
S101: build the shadow edge detector Pb from the oriented gradient G(x, y, θ). The detector computes G(x, y, θ) for two channels, brightness and textons; it is constructed by drawing a circle of radius r centered at each pixel (x, y) and splitting the circle into two half-discs by the diameter at orientation θ;
S102: obtain the oriented gradient G as the χ² distance between the histograms of the two half-discs:
χ²(g, h) = (1/2)·Σ_i (g(i) − h(i))² / (g(i) + h(i)) (1)
wherein g and h are the two half-disc histograms and i ranges over the image value domain;
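The half-disc χ² gradient of steps S101 and S102 can be sketched as follows. This is a minimal NumPy version under stated assumptions: image values normalized to [0, 1] and a bin count of 16, neither of which the patent fixes; the function names are illustrative.

```python
import numpy as np

def chi2_distance(g, h):
    """Formula (1): 0.5 * sum_i (g(i) - h(i))^2 / (g(i) + h(i))."""
    d = g + h
    m = d > 0
    return 0.5 * np.sum((g[m] - h[m]) ** 2 / d[m])

def oriented_gradient(channel, x, y, r, theta, nbins=16):
    """G(x, y, theta): chi^2 distance between the value histograms of the
    two half-discs of radius r centred at (x, y), split by the diameter
    at orientation theta."""
    h, w = channel.shape
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    inside = xs ** 2 + ys ** 2 <= r ** 2
    side = xs * np.cos(theta) + ys * np.sin(theta)   # sign picks the half-disc
    rows, cols = ys + y, xs + x
    valid = inside & (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    vals, sides = channel[rows[valid], cols[valid]], side[valid]
    bins = np.linspace(0.0, 1.0, nbins + 1)
    g, _ = np.histogram(vals[sides > 0], bins=bins)
    hh, _ = np.histogram(vals[sides <= 0], bins=bins)
    return chi2_distance(g / max(g.sum(), 1), hh / max(hh.sum(), 1))
```

On a vertical step edge, the gradient is near 1 when the diameter separates the two intensities and near 0 when it runs across them.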
S103: combine the computed detectors Pb over different scales and channels into the local cue mPb:
mPb(x, y, θ) = Σ_s Σ_i α_{i,s}·G_{i,σ(i,s)}(x, y, θ) (2)
wherein s indexes the circle radius and i the feature channel (brightness, textons); G_{i,σ(i,s)}(x, y, θ) compares the two half-discs of radius σ(i, s) centered at (x, y), split at orientation θ; α_{i,s} is the weight of each gradient signal.
At each point, take the maximum over orientations as the mPb value of that point; this value represents the final local cue:
mPb(x, y) = max_θ { mPb(x, y, θ) } (3)
Step S104: build a sparse affinity matrix from mPb, and obtain the required global information from its eigenvalues and eigenvectors. The matrix links every pair of pixels within a radius of r = 5 pixels:
W_ij = exp( − max_{p ∈ ij} { mPb(p) } / ρ ) (4)
wherein W_ij is the affinity between pixels i and j, the maximum is taken over points p on the segment joining them, and ρ = 0.1. Defining D_ii = Σ_j W_ij, solve
(D − W)v = λDv (5)
for the eigenvectors {v_0, v_1, …, v_n} and eigenvalues 0 = λ_0 ≤ λ_1 ≤ … ≤ λ_n.
Step S105: treat each eigenvector from step S104 as an image; compute Gaussian directional derivatives over different orientations to obtain its directional information, then linearly superpose the directional information of the different eigenvectors to obtain the global cue sPb:
sPb(x, y, θ) = Σ_{k=1}^{n} (1/√λ_k)·sPb_{v_k}(x, y, θ) (6)
Step S106: organically combine the local cue mPb and the global cue sPb into the contour information gPb used for analysis:
gPb(x, y, θ) = Σ_s Σ_i β_{i,s}·G_{i,σ(i,s)}(x, y, θ) + γ·sPb(x, y, θ) (7)
wherein β_{i,s} and γ are the coefficients of the mPb terms and of sPb respectively.
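Steps S104 to S106 can be sketched end to end on a toy image. The sketch below makes stated simplifications: the affinity uses the maximum of the two endpoint mPb values instead of the maximum along the connecting segment, a dense generalized eigensolver replaces sparse machinery, and the per-scale weights collapse to a plain mPb + γ·sPb; `spectral_gpb` and its parameters are illustrative names, not the patent's implementation.

```python
import numpy as np
from scipy.linalg import eigh

def spectral_gpb(mpb, n_vec=4, radius=2, rho=0.1, gamma=0.5):
    """Build the affinity W from mPb, solve (D - W) v = lambda D v, turn the
    leading eigenvectors' gradients into sPb, and return mPb + gamma * sPb."""
    h, w = mpb.shape
    n = h * w
    W = np.zeros((n, n))
    def idx(y, x):
        return y * w + x
    for y in range(h):
        for x in range(w):
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and dy * dy + dx * dx <= radius * radius:
                        # simplification: max of the two endpoint mPb values stands in
                        # for the max along the segment of formula (4)
                        W[idx(y, x), idx(yy, xx)] = np.exp(-max(mpb[y, x], mpb[yy, xx]) / rho)
    D = np.diag(W.sum(axis=1))
    vals, vecs = eigh(D - W, D)           # generalized problem (D - W) v = lambda D v
    spb = np.zeros_like(mpb)
    for k in range(1, n_vec + 1):         # skip the constant eigenvector v_0
        v = vecs[:, k].reshape(h, w)
        gy, gx = np.gradient(v)
        spb += np.hypot(gx, gy) / np.sqrt(max(vals[k], 1e-12))
    return mpb + gamma * spb
```

With a two-column "boundary" of high mPb, the combined signal peaks on or next to that boundary.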
Said step S200 mainly consists of the following steps:
Step S201: estimate the probability that any point (x, y) in the image is a contour point at orientation θ, and take the maximum of the contour response at that point:
E(x, y) = max_θ { E(x, y, θ) } (8)
Step S202: using mathematical morphology, grow each region from the local minima of E(x, y); each "catchment basin" corresponds to one region, denoted P_0, and the meeting lines between two basins are the "watershed lines", denoted K_0;
Step S203: the watershed algorithm over-segments, marking as watershed lines places that are not true edges; a region merging algorithm is used to resolve the over-segmentation.
Said region merging algorithm is as follows: define an undirected graph G = (P_0, K_0, W(K_0), E(P_0)), wherein W(K_0) is the weight of each watershed line, obtained as the total energy of the points on the line divided by their number, and E(P_0) is the energy value of each catchment basin, initially zero for every basin; W(K_0) thus describes the dissimilarity between two adjacent regions. The watershed lines are stored in a queue in ascending order of weight.
The region merging algorithm comprises the following steps:
One: find the edge of minimum weight, C* = argmin W(C);
suppose R_1 and R_2 are the regions separated by edge C*, and R = R_1 ∪ R_2; if min{E(R_1), E(R_2)} ≠ 0, judge whether to merge, the merging condition being
W(K_0) ≤ τ·min{E(R_1), E(R_2)} (9)
or min{E(R_1), E(R_2)} = 0 (10),
wherein τ is a constant;
Two: if the two regions merge, update E(R), P_0 and K_0; the update method is:
E(R) = max{E(R_1), E(R_2), W(C*)} (11)
P_0 ← P_0 \ {R_1, R_2} ∪ R (12)
K_0 ← K_0 \ {C*} (13)
Further, the merging condition is adjusted through τ, which controls the size of the final regions: the larger τ is, the larger the merged regions become.
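The merging loop of steps one and two, with the ascending-weight queue, can be sketched as follows. The dict-based graph representation and the name `merge_regions` are illustrative, and a union-find structure stands in for the explicit updates of P_0 and K_0.

```python
import heapq

def merge_regions(regions, edges, tau=1.5):
    """regions: {region_id: energy E}; edges: {(r1, r2): weight W}.
    Pop edges in ascending weight; merge when W <= tau * min(E1, E2)
    or when one energy is still zero; on a merge, E(R) = max(E1, E2, W)."""
    parent = {r: r for r in regions}
    energy = dict(regions)

    def find(r):                      # union-find with path halving
        while parent[r] != r:
            parent[r] = parent[parent[r]]
            r = parent[r]
        return r

    heap = [(w, a, b) for (a, b), w in edges.items()]
    heapq.heapify(heap)               # ascending order of weight
    while heap:
        w, a, b = heapq.heappop(heap)
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        m = min(energy[ra], energy[rb])
        if m == 0 or w <= tau * m:    # merging conditions (9) / (10)
            parent[rb] = ra           # P0 <- P0 \ {R1, R2} U R
            energy[ra] = max(energy[ra], energy[rb], w)   # formula (11)
    return {r: find(r) for r in regions}
```

Two zero-energy basins joined by a weak edge merge; a strong edge to a third basin survives.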
In said step S300 the final shadow detection result is obtained by solving the following energy equation with a graph-cut algorithm:
min_y Σ_k −c_k^single(y_k) + Σ_{{i,j}∈E_same} c_{ij}^same·1{y_i ≠ y_j} + Σ_{{i,j}∈E_diff} c_{ij}^diff·1{y_i = y_j} (14)
wherein c_{ij}^same is the matched-region classifier's estimate that regions i and j share the same illumination, c_{ij}^diff its estimate that they differ in illumination, and c_k^single the single-region classifier's estimate of whether region k is a shadow; {i, j} ∈ E_same are region pairs under the same illumination and {i, j} ∈ E_diff pairs under different illumination; y ∈ {−1, 1}^n, where y_k = 1 denotes that region k is a shadow region.
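For intuition, this energy can be minimised exactly by enumeration when only a few regions are involved; the patent solves it with a graph-cut algorithm instead. The unary term below, built from the single-region classifier's shadow probability, is one illustrative choice, and y_i = 1 marks a shadow region as in the text above.

```python
from itertools import product

def detect_shadows(shadow_prob, same_pairs, diff_pairs):
    """shadow_prob[i]: single-region classifier's P(region i is shadow).
    same_pairs / diff_pairs: {(i, j): confidence} for pairs judged to share
    or to differ in illumination. Returns the labelling y in {-1, 1}^n
    (1 = shadow) of minimum energy, by exhaustive search."""
    n = len(shadow_prob)
    best, best_e = None, float("inf")
    for y in product((-1, 1), repeat=n):
        # unary term: reward labelling each region according to its probability
        e = -sum(shadow_prob[i] if y[i] == 1 else 1 - shadow_prob[i] for i in range(n))
        # same-illumination pairs are penalised for disagreeing labels
        e += sum(c for (i, j), c in same_pairs.items() if y[i] != y[j])
        # different-illumination pairs are penalised for agreeing labels
        e += sum(c for (i, j), c in diff_pairs.items() if y[i] == y[j])
        if e < best_e:
            best, best_e = list(y), e
    return best
```

A dark region paired as "same illumination" with a shadow is pulled into the shadow label; a "different illumination" pair is pushed apart.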
Said step S400 is further described as follows: the shadow labels are computed with an image matting algorithm, which models an image I_i as a mixture of foreground F_i and background B_i:
I_i = k_i·F_i + (1 − k_i)·B_i (18)
= k_i·(L_d·R_i + L_e·R_i) + (1 − k_i)·L_e·R_i (19)
wherein L_d is the direct light, L_e the ambient light, k_i the shadow label, and R_i the reflectance at point i.
The foreground is marked as non-shadow and the background as shadow; k_i, the label of point i, is obtained through (20) by minimising the following energy equation, whose solution is the vector k composed of the k_i:
E(k) = k^T·L·k + λ·(k − y_k)^T·D·(k − y_k) (20)
wherein k^T is the transpose of k, λ is a very large value determined in practice, and y_k is the vector composed of the labels y obtained from the energy equation of step S300, with every non-zero element set to 1. Note that this formula computes k as a vector whose elements represent the labels of the individual pixels, but the range of each element becomes [0, 1]; that is, the labels of some pixels change from 0 or 1 to a value inside that interval. A label that remains 0 or 1 still denotes the shadow or non-shadow region respectively, while a label strictly between 0 and 1 denotes the penumbra region.
Wherein, L is the matting Laplacian matrix, and D(i, i) is a diagonal matrix with D(i, i) = 1 for pixels i on the edge of the shadow region and D(i, i) = 0 for all other points;
The obtained labels are used to relight the shadow region and recover its illumination. According to the shadow model, when a pixel is relit,
I_i^lit = ((r + 1) / (k_i·r + 1))·I_i (22)
wherein r = L_d / L_e is the ratio of the direct light L_d to the ambient light L_e, and I_i is the original value of pixel i; computing r therefore suffices to remove the shadow.
It is known that
I_i = (k_i·L_d + L_e)·R_i (24)
I_j = (k_j·L_d + L_e)·R_j (25)
If the reflectances of the two pixels are identical, i.e. R_i = R_j, then
r = (I_i − I_j) / (k_i·I_j − k_j·I_i) (26)
The purpose of formulas (24) and (25) is to compute r: find two points i and j of identical material (identical reflectance, R_i = R_j) but different illumination (L_d and L_e unchanged, but different shadow labels k), and obtain r from the mathematical relation between them; I_j denotes the pixel value of such a point j.
Compared with the prior art, the present invention has the following beneficial effects:
One, segmenting the image and detecting shadows with the same shadow features are organically combined, so that the shadow and non-shadow regions are well separated, the segmentation becomes more accurate, and shadow regions are detected more precisely and effectively;
Two, shadow contours are detected more completely, and the detection algorithm provided by the invention resists interference from complex backgrounds better; the restored image not only preserves the texture inside the shadow region well but is also smooth at the shadow edges, showing that the algorithm achieves a good illumination transition in the penumbra region;
Three, a region merging algorithm driven by contour information is proposed, whose merging parameter conveniently controls the size of the merged regions; together these advantages improve the shadow removal result.
Description of the drawings
Fig. 1 is the algorithm flowchart of the present invention;
(a) is the original image; (b) is the segmented image; (c) shows the shadow and non-shadow regions detected by the single-region classifier, with white denoting shadow; (d) shows the shadow and non-shadow regions detected by the matched-region classifier; (e) is the grayscale map of the shadow labels k_i computed by the matting algorithm; (f) is the image after shadow removal.
Fig. 2 is the flowchart of the region merging algorithm;
Fig. 3 is the region matching graph; red lines connect matched regions under the same illumination, and blue lines connect matched regions under different illumination.
Detailed description
The invention is further described below with reference to the drawings and specific embodiments.
According to the characteristics of shadows, the present invention fuses shadow edge detection with shadow region detection to segment the shadow and non-shadow regions. The algorithm divides into two steps: shadow edge detection and shadow segmentation.
Embodiment: shadow edge detection
First, the shadow edge detector Pb is built from the oriented gradient G(x, y, θ): a circle of radius r is drawn centered at each pixel (x, y) and split into two half-discs by the diameter at orientation θ. The oriented gradient G is then the χ² distance between the histograms of the two half-discs:
χ²(g, h) = (1/2)·Σ_i (g(i) − h(i))² / (g(i) + h(i)) (1)
wherein g and h are the two half-disc histograms and i ranges over the image value domain.
The detector computes the gradient G(x, y, θ) for two channels, brightness and textons. Textons are computed by filtering with even and odd Gaussian filters in 16 orientations and clustering the responses into 32 groups with the k-means algorithm.
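The texton computation can be sketched as follows, in reduced form: the patent's 16 orientations and 32 clusters shrink to whatever the caller passes, the odd filter is approximated by a first Gaussian derivative, and a plain k-means loop stands in for the clustering; all names are illustrative.

```python
import numpy as np

def make_filters(n_orient=16, size=9, sigma=1.5):
    """Even (2nd-derivative) and odd (1st-derivative) oriented Gaussian filters."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    bank = []
    for k in range(n_orient):
        t = np.pi * k / n_orient
        u = xs * np.cos(t) + ys * np.sin(t)       # coordinate across the orientation
        v = -xs * np.sin(t) + ys * np.cos(t)
        g = np.exp(-(u ** 2 + v ** 2) / (2 * sigma ** 2))
        even = (u ** 2 / sigma ** 4 - 1 / sigma ** 2) * g
        odd = -(u / sigma ** 2) * g
        bank += [even - even.mean(), odd - odd.mean()]
    return bank

def textons(image, bank, k=32, iters=10, seed=0):
    """Filter the image with the whole bank, then cluster the per-pixel
    response vectors into k texton groups with a plain k-means."""
    from scipy.signal import convolve2d
    resp = np.stack([convolve2d(image, f, mode="same") for f in bank], axis=-1)
    X = resp.reshape(-1, resp.shape[-1])
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels.reshape(image.shape)
```

A flat region produces identical response vectors and therefore a single texton label.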
Next, the computed detectors Pb are combined over different scales and channels into the local cue mPb:
mPb(x, y, θ) = Σ_s Σ_i α_{i,s}·G_{i,σ(i,s)}(x, y, θ) (2)
wherein s indexes the circle radius and i the feature channel (brightness, textons); G_{i,σ(i,s)}(x, y, θ) compares the two half-discs of radius σ(i, s) centered at (x, y), split at orientation θ; α_{i,s} is the weight of each gradient signal.
At each point, the maximum over orientations is taken as the mPb value of that point, representing the final local cue:
mPb(x, y) = max_θ { mPb(x, y, θ) } (3)
Then, to extract contour information better, the global cue is obtained with the standard normalized-cuts method: a sparse affinity matrix is built from mPb, and the required global information follows from its eigenvalues and eigenvectors. The matrix links every pair of pixels within a radius of r = 5 pixels:
W_ij = exp( − max_{p ∈ ij} { mPb(p) } / ρ ) (4)
wherein W_ij is the affinity between pixels i and j, the maximum being taken along the segment joining them, and ρ = 0.1. Defining D_ii = Σ_j W_ij, the eigenvectors {v_0, v_1, …, v_n} and eigenvalues 0 = λ_0 ≤ λ_1 ≤ … ≤ λ_n are computed from
(D − W)v = λDv (5)
Although normalized cuts alone cannot segment the image well, it reflects the image contours well. Each eigenvector is therefore treated as an image; Gaussian directional derivatives are computed over different orientations to obtain its directional information, and the directional information of the different eigenvectors is linearly superposed:
sPb(x, y, θ) = Σ_{k=1}^{n} (1/√λ_k)·sPb_{v_k}(x, y, θ) (6)
mPb and sPb carry different edge information: mPb covers all edges and is the local cue, while sPb highlights only the most salient edges and is the global cue. Organically combining the two therefore yields an effective contour signal, which the present invention defines as gPb:
gPb(x, y, θ) = Σ_s Σ_i β_{i,s}·G_{i,σ(i,s)}(x, y, θ) + γ·sPb(x, y, θ) (7)
wherein β_{i,s} and γ are the coefficients of the mPb terms and of sPb respectively.
Embodiment: image segmentation
gPb represents contours effectively, but these contours are not completely closed and therefore cannot segment the image by themselves. To segment the image better, a watershed algorithm driven by the contour information gPb is used for region segmentation.
To describe the watershed algorithm, consider any contour detector E(x, y, θ) that estimates the probability that point (x, y) is a contour point at orientation θ; the larger the value, the higher the probability. For each point, take the maximum contour response:
E(x, y) = max_θ { E(x, y, θ) } (8)
Then, using mathematical morphology, each region is grown from the local minima of E(x, y); each "catchment basin" corresponds to one region, denoted P_0, and the meeting lines between two basins are the "watershed lines", denoted K_0. The watershed algorithm, however, over-segments, marking as watershed lines places that are not true edges. To address this, a new region merging algorithm is used, whose premise is that the watershed algorithm is bound to over-segment, i.e. the initial catchment basins need to be merged.
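A minimal priority-flood watershed over the contour strength E(x, y) can be sketched as follows. It seeds a basin at every regional minimum, which reproduces the over-segmentation behaviour discussed above; for simplicity, boundary pixels are assigned to the first basin that reaches them rather than kept as explicit watershed lines.

```python
import heapq
import numpy as np

def watershed(E):
    """Flood the surface E in increasing order of height; every regional
    minimum seeds a catchment basin (labels numbered from 1)."""
    h, w = E.shape
    labels = np.zeros((h, w), dtype=int)
    heap, nxt = [], 1
    for y in range(h):
        for x in range(w):
            nb = [E[yy, xx] for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                  if 0 <= yy < h and 0 <= xx < w]
            if E[y, x] <= min(nb):            # regional minimum: seed a new basin
                labels[y, x] = nxt
                heapq.heappush(heap, (E[y, x], y, x))
                nxt += 1
    while heap:                                # flood in increasing order of E
        e, y, x = heapq.heappop(heap)
        for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= yy < h and 0 <= xx < w and labels[yy, xx] == 0:
                labels[yy, xx] = labels[y, x]
                heapq.heappush(heap, (E[yy, xx], yy, xx))
    return labels
```

Two bowls separated by a ridge flood into exactly two basins.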
The region merging algorithm is described as follows: define an undirected graph G = (P_0, K_0, W(K_0), E(P_0)), wherein W(K_0) is the weight of each watershed line, obtained by dividing the total energy of the points on the line by their number, and E(P_0) is the energy value of each catchment basin, initially zero for every basin. Note that in the graph every edge separates exactly two regions, and W(K_0) describes the dissimilarity between two adjacent regions. The watershed lines are stored in a queue in ascending order of weight; the flowchart of the algorithm is shown in Fig. 2.
Suppose R_1 and R_2 are the regions separated by edge C*, and R = R_1 ∪ R_2. The merging condition is: if min{E(R_1), E(R_2)} ≠ 0, then
W(K_0) ≤ τ·min{E(R_1), E(R_2)} (9)
or min{E(R_1), E(R_2)} = 0 (10)
wherein τ is a constant. Adjusting τ adjusts the merging condition and thus controls the size of the final regions: the larger τ is, the larger the merged regions become.
E(R), P_0 and K_0 are updated as:
E(R) = max{E(R_1), E(R_2), W(C*)} (11)
P_0 ← P_0 \ {R_1, R_2} ∪ R (12)
K_0 ← K_0 \ {C*} (13)
Embodiment: shadow detection
Single-region recognition: an SVM classifier is trained to judge the probability that a single region is a shadow. The shadow parts of the training set are marked by hand; the classifier uses the brightness and texton features of shadows and outputs the probability that a region is a shadow.
Matched-region recognition: to decide whether a region is a shadow, it is compared with regions of similar texture. If their brightnesses are also similar, the two are under the same illumination; if the brightnesses differ, the darker region is judged to be the shadow.
The present invention trains the classifier on four features for judging shadow regions:
1. the χ² distance of the brightness and texton histograms;
2. the average RGB ratio. When the matched regions are of the same material, the three channel values of the non-shadow region are higher; the ratio is computed per channel as the ratio of the channel means of the two regions, wherein R_avg1 denotes the mean of the R channel of the first region.
3. color alignment. For the same material, the colors of the shadow and non-shadow regions stay aligned in RGB space; this feature is obtained from ρ_r/ρ_g and ρ_g/ρ_b.
4. normalized region distance. Since matched regions are not necessarily of the same material, and adjacency matters, this distance is also taken as a training feature; it is the Euclidean distance between the matched regions, normalized.
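The four matched-region features can be sketched as follows. `pair_features` and its argument layout are illustrative, and a plain intensity histogram stands in for the brightness/texton histograms of feature 1.

```python
import numpy as np

def chi2(g, h):
    d = g + h
    m = d > 0
    return 0.5 * np.sum((g[m] - h[m]) ** 2 / d[m])

def pair_features(reg1, reg2, center1, center2, diag):
    """reg1/reg2: (n, 3) RGB pixel arrays of the two regions, values in [0, 1];
    center1/center2: region centroids; diag: image diagonal for normalisation."""
    f = {}
    bins = np.linspace(0.0, 1.0, 17)
    h1, _ = np.histogram(reg1.mean(axis=1), bins=bins)   # feature 1: histogram chi^2
    h2, _ = np.histogram(reg2.mean(axis=1), bins=bins)
    f["hist_chi2"] = chi2(h1 / max(h1.sum(), 1), h2 / max(h2.sum(), 1))
    avg1, avg2 = reg1.mean(axis=0), reg2.mean(axis=0)    # per-channel means
    ratio = avg1 / np.maximum(avg2, 1e-6)                # feature 2: R, G, B ratios
    f["rgb_ratio"] = ratio
    f["align"] = (ratio[0] / ratio[1], ratio[1] / ratio[2])   # feature 3: alignment
    f["norm_dist"] = np.linalg.norm(np.asarray(center1, float)
                                    - np.asarray(center2, float)) / diag  # feature 4
    return f
```

For a shadow that uniformly halves a region's values, the RGB ratios are 0.5 on all channels and the alignment ratios stay at 1.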
The following energy equation is built and solved with a graph-cut algorithm to obtain the final shadow detection result:
min_y Σ_k −c_k^single(y_k) + Σ_{{i,j}∈E_same} c_{ij}^same·1{y_i ≠ y_j} + Σ_{{i,j}∈E_diff} c_{ij}^diff·1{y_i = y_j} (14)
wherein c_{ij}^same is the matched-region classifier's estimate that regions i and j share the same illumination, c_{ij}^diff its estimate that they differ in illumination, and c_k^single the single-region classifier's estimate of whether region k is a shadow; {i, j} ∈ E_same are region pairs under the same illumination and {i, j} ∈ E_diff pairs under different illumination; y ∈ {−1, 1}^n, where y_k = 1 denotes a shadow region.
Embodiment: shadow removal
To remove shadows effectively, a suitable shadow model must be established. In the shadow model, illumination is determined jointly by direct light and ambient light:
I_i = (k_i·L_d + L_e)·R_i (17)
For a pixel i, k_i denotes the shadow label: k_i = 0 means the pixel is in the shadow (umbra) region, k_i = 1 means it is in the non-shadow region, and 0 < k_i < 1 means it is in the penumbra region.
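The image-formation model (17) is easy to exercise on synthetic data; the reflectance, light strengths and penumbra ramp below are arbitrary illustrative values.

```python
import numpy as np

def shade(R, k, Ld=0.7, Le=0.3):
    # formula (17): I_i = (k_i * Ld + Le) * R_i
    return (k * Ld + Le) * R

# a constant-reflectance strip passing from umbra through penumbra to full light
R = np.full(9, 0.8)
k = np.array([0.0, 0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0])
I = shade(R, k)
```

The umbra pixels receive only ambient light (Le·R), the lit pixels the full (Ld + Le)·R, and the penumbra ramps smoothly between the two.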
Although shadow detection assigns each pixel a binary label (0 or 1), in real scenes the shadow edge passes gradually from non-shadow to shadow. To remove shadow edges better, the penumbra region is computed here with an image matting algorithm:
I_i = k_i·F_i + (1 − k_i)·B_i (18)
= k_i·(L_d·R_i + L_e·R_i) + (1 − k_i)·L_e·R_i (19)
The foreground is marked as non-shadow and the background as shadow. The value of k_i is obtained by minimising the following energy equation:
E(k) = k^T·L·k + λ·(k − y_k)^T·D·(k − y_k) (20)
wherein L is the matting Laplacian matrix and D(i, i) is a diagonal matrix; in the experiments, D(i, i) = 1 denotes that pixel i lies on the edge of the shadow region, and D(i, i) = 0 all other points.
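Setting the gradient of energy (20) to zero gives the linear system (L + λD)k = λD·y, which a sketch can solve directly. Here a path-graph Laplacian over seven pixels stands in for the true matting Laplacian, and D constrains only the pixels away from the shadow edge; all values are illustrative.

```python
import numpy as np

def solve_matte(Lmat, D, y, lam=100.0):
    # minimiser of k^T L k + lam (k - y)^T D (k - y): (L + lam D) k = lam D y
    return np.linalg.solve(Lmat + lam * D, lam * (D @ y))

n = 7
Lmat = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # path-graph Laplacian
Lmat[0, 0] = Lmat[-1, -1] = 1.0
y = np.array([0.0, 0, 0, 0, 0, 1, 1])     # detection labels: shadow -> 0, lit -> 1
D = np.diag([1.0, 1, 0, 0, 0, 1, 1])      # unconstrained pixels form the penumbra
k = solve_matte(Lmat, D, y)
```

The constrained pixels stay near 0 and 1, while the free pixels in between take intermediate values in (0, 1): exactly the penumbra labels described above.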
According to the shadow model, when a pixel is relit,
I_i^lit = ((r + 1) / (k_i·r + 1))·I_i (22)
wherein r = L_d / L_e is the ratio of the direct light L_d to the ambient light L_e, and I_i is the original value of pixel i. Computing r therefore suffices to remove the shadow.
It is known that
I_i = (k_i·L_d + L_e)·R_i (24)
I_j = (k_j·L_d + L_e)·R_j (25)
If the reflectances of the two pixels are identical, i.e. R_i = R_j, then
r = (I_i − I_j) / (k_i·I_j − k_j·I_i) (26)
The most similar pixels are chosen on the two sides of an adjacent shadow/non-shadow boundary, and the value of r is then computed according to formulas (24) and (25).
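The computation of r from such a pixel pair, and the subsequent relighting, can be sketched with synthetic values; L_d, L_e and R below are ground-truth values used only to synthesise the pair and are otherwise unknown to the method.

```python
import numpy as np

Ld, Le = 0.9, 0.3        # ground-truth lights, used only to build the test pair
R = 0.6                  # shared reflectance of the two pixels
ki, kj = 1.0, 0.0        # pixel i is lit, pixel j is in the umbra
Ii = (ki * Ld + Le) * R  # formula (24)
Ij = (kj * Ld + Le) * R  # formula (25)

# dividing (24) by (25) with R_i = R_j and solving for r = Ld / Le:
r = (Ii - Ij) / (ki * Ij - kj * Ii)

def relight(I, k, r):
    # restore full illumination: I_free = (r + 1) / (k * r + 1) * I
    return (r + 1) / (k * r + 1) * I
```

Recovering r = Ld/Le from the pair and applying the relighting to the umbra pixel reproduces its shadow-free value (Ld + Le)·R.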
The above is only a detailed description of the present invention in connection with specific schemes, and the implementation of the invention shall not be deemed limited to these descriptions. For those of ordinary skill in the technical field of the invention, simple deductions and substitutions made without departing from the concept of the present invention shall all be deemed within the protection scope of the present invention.
Claims (7)
1. A shadow detection and removal algorithm based on image segmentation, characterized by comprising the following steps:
S100: using texture and brightness features, and combining local and global information, estimate the probability that each pixel lies on a shadow edge;
S200: segment the image with a watershed algorithm driven by the contour information gPb;
S300: use an edge-based region merging algorithm to separate the shadow regions from the non-shadow regions, dividing each into several sub-regions; then, using single-region information and matched-region information, train an SVM classifier for each to recognize shadows; finally, solve the shadow-detection energy equation with a graph-cut algorithm to obtain the final detection result;
S400: from the detection result, compute shadow labels with an image matting algorithm, use the labels to relight the shadow regions, and restore their illumination so that it matches the surrounding non-shadow regions.
2. The shadow detection and removal algorithm based on image segmentation according to claim 1, characterized in that said step S100 mainly consists of the following steps:
S101: build the shadow edge detector Pb from the oriented gradient G(x, y, θ); the detector computes G(x, y, θ) for two channels, brightness and textons, and is constructed by drawing a circle of radius r centered at each pixel (x, y) and splitting the circle into two half-discs by the diameter at orientation θ;
S102: obtain the oriented gradient G as the χ² distance between the histograms of the two half-discs:
χ²(g, h) = (1/2)·Σ_i (g(i) − h(i))² / (g(i) + h(i)) (1)
wherein g and h are the two half-disc histograms and i ranges over the image value domain;
S103: combine the computed detectors Pb over different scales and channels into the local cue mPb:
mPb(x, y, θ) = Σ_s Σ_i α_{i,s}·G_{i,σ(i,s)}(x, y, θ) (2)
wherein s indexes the circle radius and i the feature channel (brightness, textons); G_{i,σ(i,s)}(x, y, θ) compares the two half-discs of radius σ(i, s) centered at (x, y), split at orientation θ; α_{i,s} is the weight of each gradient signal;
at each point, take the maximum over orientations as the mPb value of that point; this value represents the final local cue:
mPb(x, y) = max_θ { mPb(x, y, θ) } (3);
Step S104: build a sparse affinity matrix from mPb, and obtain the required global information from its eigenvalues and eigenvectors; the matrix links every pair of pixels within a radius of r = 5 pixels:
W_ij = exp( − max_{p ∈ ij} { mPb(p) } / ρ ) (4)
wherein W_ij is the affinity between pixels i and j and ρ = 0.1; defining D_ii = Σ_j W_ij, the eigenvectors {v_0, v_1, …, v_n} and eigenvalues 0 = λ_0 ≤ λ_1 ≤ … ≤ λ_n are computed from
(D − W)v = λDv (5);
Step S105: treat each eigenvector from step S104 as an image, compute Gaussian directional derivatives over different orientations to obtain its directional information, then linearly superpose the directional information of the different eigenvectors to obtain the global cue sPb:
sPb(x, y, θ) = Σ_{k=1}^{n} (1/√λ_k)·sPb_{v_k}(x, y, θ) (6);
Step S106: organically combine the local cue mPb and the global cue sPb into the contour information gPb:
gPb(x, y, θ) = Σ_s Σ_i β_{i,s}·G_{i,σ(i,s)}(x, y, θ) + γ·sPb(x, y, θ) (7)
wherein β_{i,s} and γ are the coefficients of the mPb terms and of sPb respectively.
3. The shadow detection and removal algorithm based on image segmentation according to claim 1, characterized in that said step S200 mainly comprises the following steps:
Step S201: estimate the probability that any point (x, y) in the image lies on a contour in direction θ, and take the maximal value of this contour detection:
E(x, y) = max_θ {gPb(x, y, θ)} (8);
Step S202: using mathematical morphology, compute the regions taking the local minima of E(x, y) as "catchment basins"; each catchment basin corresponds to one region, denoted P_0; the boundary where two catchment basins meet is a "watershed line", denoted K_0;
Step S203: the watershed algorithm produces an over-segmentation problem, marking as watershed lines places that should not be boundaries; a region merging algorithm is used to solve the over-segmentation problem;
said region merging algorithm is as follows: define an undirected graph G = (P_0, K_0, W(K_0), E(P_0)), where W(K_0) represents the weight of each watershed line, obtained by dividing the total energy of the points on the line by the number of those points, and E(P_0) represents the energy value of each catchment basin, the initial energy of each basin being zero; W(K_0) describes the dissimilarity between two adjacent regions; the watershed lines are stored in a queue in ascending order of their weights.
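For intuition, the catchment-basin construction of step S202 can be played out on a 1-D toy signal: every sample slides downhill to a local minimum of E, each minimum seeds one basin (P_0), and the positions where the basin label changes play the role of the watershed lines (K_0). This steepest-descent flooding is a deliberate simplification of the 2-D morphological watershed:

```python
import numpy as np

def watershed_1d(E):
    # Each sample descends to the local minimum of E reachable by
    # steepest descent; samples sharing a minimum form one catchment
    # basin, and label changes mark the watershed positions.
    n = len(E)
    def descend(i):
        while True:
            lower = [j for j in (i - 1, i + 1) if 0 <= j < n and E[j] < E[i]]
            if not lower:
                return i                      # reached a local minimum
            i = min(lower, key=lambda j: E[j])
    minima = sorted({descend(i) for i in range(n)})
    labels = np.array([minima.index(descend(i)) for i in range(n)])
    ridges = [i for i in range(1, n) if labels[i] != labels[i - 1]]
    return labels, ridges

E = np.array([3.0, 1.0, 2.0, 4.0, 2.0, 0.0, 3.0])
labels, ridges = watershed_1d(E)   # two basins, one watershed position
```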
4. The shadow detection and removal algorithm based on image segmentation according to claim 3, characterized in that the region merging algorithm comprises the following steps:
One, find the edge with the minimal weight, C* = argmin_C W(C);
Two, assume that R_1 and R_2 are the regions separated by the edge C*, and let R = R_1 ∪ R_2; if min{E(R_1), E(R_2)} ≠ 0, judge whether to merge; the merging condition is:
W(C*) ≤ τ·min{E(R_1), E(R_2)} (9)
or min{E(R_1), E(R_2)} = 0 (10),
where τ is a constant;
Three, if the two regions merge, then update E(R), P_0 and K_0; the update method for E(R), P_0 and K_0 is:
E(R) = max{E(R_1), E(R_2), W(C*)} (11)
P_0 ← P_0 \ {R_1, R_2} ∪ R (12)
K_0 ← K_0 \ {C*} (13).
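The merging loop of claim 4 can be sketched with a heap (the ascending-weight queue) and union-find bookkeeping; the union-find replaces the explicit P_0/K_0 set updates of (12)-(13) as an implementation convenience, and region 3 below is seeded with a nonzero energy purely so the non-merging branch is exercised (in the claim all basins start at zero energy):

```python
import heapq

def merge_regions(region_energy, edges, tau=1.0):
    # Pop the lowest-weight edge C*; merge its two regions when
    # min(E(R1), E(R2)) == 0 (Eq. 10) or W(C*) <= tau * min(...) (Eq. 9);
    # the merged region gets E(R) = max(E(R1), E(R2), W(C*)) (Eq. 11).
    energy = dict(region_energy)
    parent = {r: r for r in energy}
    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]   # path compression
            r = parent[r]
        return r
    heap = list(edges)                      # (weight, region1, region2)
    heapq.heapify(heap)
    while heap:
        w, a, b = heapq.heappop(heap)
        ra, rb = find(a), find(b)
        if ra == rb:
            continue                        # already merged
        m = min(energy[ra], energy[rb])
        if m == 0 or w <= tau * m:
            parent[rb] = ra
            energy[ra] = max(energy[ra], energy[rb], w)
    return {r: find(r) for r in region_energy}

# three basins: the 0.2 edge merges (both energies 0); afterwards the
# 5.0 edge fails both conditions, so region 3 stays separate
assignment = merge_regions({1: 0.0, 2: 0.0, 3: 0.3},
                           [(0.2, 1, 2), (5.0, 2, 3)])
```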
5. The shadow detection and removal algorithm based on image segmentation according to claim 1, characterized in that in said step S300 the final shadow detection result is obtained by solving the following energy equation with a graph cut algorithm:
min_y Σ_i −c_i·y_i + Σ_{{i,j}∈E_same} c_ij^same·1(y_i ≠ y_j) + Σ_{{i,j}∈E_diff} c_ij^diff·1(y_i = y_j),
where c_ij^same represents the pairwise region classifier's estimate that two regions have the same illumination, c_ij^diff represents the pairwise region classifier's estimate that two regions have different illumination, and c_i is the single-region classifier's estimate of whether a single region is shadow; {i, j} ∈ E_same denotes a pair of regions under the same illumination and {i, j} ∈ E_diff a pair of regions under different illumination; y ∈ {−1, 1}^n, and a value of 1 indicates that the region is a shadow region.
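For a toy number of regions the labelling energy of step S300 can be minimised exhaustively instead of by graph cut; the unary and pairwise weights below are illustrative stand-ins for the classifier confidences c_i, c_ij^same and c_ij^diff:

```python
from itertools import product

def best_labels(n, unary, same_edges, diff_edges):
    # y_i = 1 marks region i as shadow, y_i = -1 as lit.  Edges in
    # E_same penalise differing labels, edges in E_diff penalise equal
    # labels.  Exhaustive search over {-1, 1}^n suffices for a toy case.
    best, best_y = float("inf"), None
    for y in product((-1, 1), repeat=n):
        e = -sum(unary[i] * y[i] for i in range(n))
        e += sum(w for i, j, w in same_edges if y[i] != y[j])
        e += sum(w for i, j, w in diff_edges if y[i] == y[j])
        if e < best:
            best, best_y = e, y
    return best_y

# region 0 looks like shadow (c_0 > 0), region 1 looks lit; a strong
# "same illumination" edge then forces them to share a label
y_free = best_labels(2, [2.0, -1.0], [], [])
y_tied = best_labels(2, [2.0, -1.0], [(0, 1, 10.0)], [])
```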
6. The shadow detection and removal algorithm based on image segmentation according to claim 1, characterized in that in said step S400 a matting algorithm is used to compute the shadow labels; this algorithm considers that an image I_i can be mixed from a foreground F_i and a background B_i according to the following formula:
I_i = k_i·F_i + (1 − k_i)·B_i (18)
 = k_i·(L_d·R_i + L_e·R_i) + (1 − k_i)·L_e·R_i (19)
where L_d is the direct light, L_e is the ambient light, k_i is the shadow label, and R_i is the reflectance of point i.
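Equations (18)-(19) amount to a one-line compositing model: the foreground is a fully lit pixel (direct plus ambient light), the background a fully shadowed one (ambient only), blended by the shadow label. A numeric sketch:

```python
def observed_intensity(k, Ld, Le, R):
    # Eqs. (18)-(19): F = (Ld + Le) * R is the lit pixel,
    # B = Le * R the shadowed one; k in [0, 1] blends them.
    F = (Ld + Le) * R
    B = Le * R
    return k * F + (1 - k) * B      # algebraically == (k * Ld + Le) * R

I_half = observed_intensity(k=0.5, Ld=2.0, Le=1.0, R=0.8)  # penumbra pixel
```

The factored form (k·L_d + L_e)·R is exactly the expression reused in claim 7's equations (24)-(25).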
The foreground is labelled non-shadow and the background is labelled shadow; the value of k_i, the label of point i, is obtained by computing the minimum of the following energy equation; k is the vector composed of the k_i, so obtaining k yields every k_i:
k = argmin_k k^T·L·k + λ·(k − y_k)^T·D·(k − y_k) (20)
where k^T is the transpose of k, λ is a very large value determined in practice, and y_k is the vector composed of the labels y obtained from the energy equation of step S300, with every nonzero element set to 1. Note that this formula computes k as a vector in which each element represents the label of one pixel, but the value of each element now ranges over [0, 1]; that is, the labels of some pixels change from exactly 0 or 1 to values in the interval [0, 1]. A label that is still 0 or 1 simply denotes the shadow region and the non-shadow region respectively, while a label value strictly between 0 and 1 denotes the penumbra region. L is the matting Laplacian, and D is a diagonal matrix in which D(i, i) = 1 indicates that pixel i lies on the boundary of the shadow region and D(i, i) = 0 covers all other points.
7. The shadow detection and removal algorithm based on image segmentation according to claim 6, characterized in that the labels obtained are used to light the shadow region and recover its illumination:
according to the shadow model, if a pixel is to be lit, then
I_i^{lit} = ((r + 1)/(k_i·r + 1))·I_i,
where r = L_d/L_e is the ratio of the direct light L_d to the ambient light L_e and I_i represents the original value of pixel i; thus computing r suffices to remove the shadow;
it is known that
I_i = (k_i·L_d + L_e)·R_i (24)
I_j = (k_j·L_d + L_e)·R_j (25)
so if the reflectances of the two pixels are identical, i.e. R_i = R_j, then
I_i/I_j = (k_i·r + 1)/(k_j·r + 1).
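From (24)-(25) with R_i = R_j, the ratio r can be solved in closed form from one lit/shadowed pixel pair, after which every pixel is relit from its label. A numeric sketch under an assumed L_d = 2, L_e = 1, R = 0.8 (values chosen for illustration):

```python
def estimate_ratio(Ii, Ij, ki, kj):
    # From Eqs. (24)-(25) with equal reflectance:
    #   I_i / I_j = (k_i * r + 1) / (k_j * r + 1),
    # which rearranges to r = (I_i - I_j) / (I_j * k_i - I_i * k_j).
    return (Ii - Ij) / (Ij * ki - Ii * kj)

def relight(I, k, r):
    # Once r = Ld / Le is known, a pixel with shadow label k is
    # restored to its fully lit value (r + 1) / (k * r + 1) * I.
    return (r + 1.0) / (k * r + 1.0) * I

# synthetic pair: the lit pixel (k = 1) reads (2 + 1) * 0.8 = 2.4,
# the shadowed pixel (k = 0) reads 1 * 0.8 = 0.8
r = estimate_ratio(2.4, 0.8, 1.0, 0.0)
restored = relight(0.8, 0.0, r)     # shadowed pixel relit
```

Note that relighting an already-lit pixel (k = 1) leaves it unchanged, which is a quick sanity check on the formula.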
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

CN201410675195.XA CN104463853A (en)  20141122  20141122  Shadow detection and removal algorithm based on image segmentation 
Publications (1)
Publication Number  Publication Date 

CN104463853A true CN104463853A (en)  20150325 
Family
ID=52909835
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

CN201410675195.XA Pending CN104463853A (en)  20141122  20141122  Shadow detection and removal algorithm based on image segmentation 
Country Status (1)
Country  Link 

CN (1)  CN104463853A (en) 
Cited By (16)
Publication number  Priority date  Publication date  Assignee  Title 

CN105447501A (en) *  20151102  20160330  北京旷视科技有限公司  Clusteringbased license image shadow detection method and apparatus 
CN106023113A (en) *  20160527  20161012  哈尔滨工业大学  Satellite highscore image shadow region recovery method based on nonlocal sparse 
CN106295570A (en) *  20160811  20170104  北京暴风魔镜科技有限公司  Block filtration system and method alternately 
CN106408648A (en) *  20150803  20170215  青岛海信医疗设备股份有限公司  Medicaltissue sliceimage threedimensional reconstruction method and equipment thereof 
CN106488180A (en) *  20150831  20170308  上海悠络客电子科技有限公司  Video shadow detection method 
CN107507146A (en) *  20170828  20171222  武汉大学  A kind of natural image soft shadowses removing method 
CN109493406A (en) *  20181102  20190319  四川大学  Quick percentage is close to soft shadows method for drafting 
CN110427950A (en) *  20190801  20191108  重庆师范大学  Purple soil soil image shadow detection method 
US10504282B2 (en)  20180321  20191210  Zoox, Inc.  Generating maps without shadows using geometry 
CN110765875A (en) *  20190920  20200207  浙江大华技术股份有限公司  Method, equipment and device for detecting boundary of traffic target 
WO2020119618A1 (en) *  20181212  20200618  中国科学院深圳先进技术研究院  Image inpainting test method employing texture feature fusion 
US10699477B2 (en) *  20180321  20200630  Zoox, Inc.  Generating maps without shadows 
CN111526263A (en) *  20190201  20200811  光宝电子(广州)有限公司  Image processing method, device and computer system 
WO2021147408A1 (en) *  20200122  20210729  腾讯科技（深圳）有限公司  Pixel point identification method and apparatus, illumination rendering method and apparatus, electronic device and storage medium 
CN113256666A (en) *  20210719  20210813  广州中望龙腾软件股份有限公司  Contour line generation method, system, equipment and storage medium based on model shadow 
CN114742836A (en) *  20220613  20220712  浙江太美医疗科技股份有限公司  Medical image processing method and device and computer equipment 
Citations (1)
Publication number  Priority date  Publication date  Assignee  Title 

CN103295013A (en) *  20130513  20130911  Tianjin University  Paired-region based single-image shadow detection method

Non-Patent Citations (2)
Title

Pablo Arbeláez et al.: "Contour Detection and Hierarchical Image Segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence *
Ruiqi Guo et al.: "Single-Image Shadow Detection and Removal using Paired Regions", Computer Vision and Pattern Recognition (CVPR) *
Similar Documents
Publication  Publication Date  Title 

CN104463853A (en)  Shadow detection and removal algorithm based on image segmentation
CN104834898B (en)  Quality classification method for person photographs
CN106651872B (en)  Pavement crack identification method and system based on Prewitt operator
CN102663354B (en)  Face calibration method and system thereof
CN103971126B (en)  Traffic sign recognition method and device
US8294794B2 (en)  Shadow removal in an image captured by a vehicle-based camera for clear path detection
EP3036730B1 (en)  Traffic light detection
CN101916370A (en)  Method for processing non-feature region images in face detection
CN104318262A (en)  Method and system for replacing skin through human face photos
CN102426649B (en)  Simple steel seal digital automatic identification method with high accuracy rate
CN102819728A (en)  Traffic sign detection method based on classification template matching
CN107862667B (en)  Urban shadow detection and removal method based on high-resolution remote sensing image
CN107945200B (en)  Image binarization segmentation method
CN107610114A (en)  Optical satellite remote sensing image cloud, snow and fog detection method based on support vector machines
CN102147867B (en)  Method for identifying traditional Chinese painting images and calligraphy images based on subject
CN102509098A (en)  Fisheye image vehicle identification method
CN103530600A (en)  License plate recognition method and system under complicated illumination
Yang et al.  Real-time traffic sign detection via color probability model and integral channel features
CN107392968B (en)  Image saliency detection method fusing a color contrast map and a color spatial distribution map
CN103295013A (en)  Paired-region based single-image shadow detection method
CN103295010A (en)  Illumination normalization method for processing face images
CN104636754B (en)  Intelligent image classification method based on tongue sub-region color features
CN103208115A (en)  Detection method for salient regions of images based on geodesic distance
CN104021566A (en)  GrabCut algorithm-based automatic segmentation method for tongue diagnosis images
US11238301B2 (en)  Computer-implemented method of detecting a foreign object on a background object in an image, apparatus for detecting a foreign object on a background object in an image, and computer program product
Legal Events
Date  Code  Title  Description 

C06  Publication  
PB01  Publication  
SE01  Entry into force of request for substantive examination  
C53  Correction of patent for invention or patent application  
CB03  Change of inventor or designer information 
Inventor after: Liu Yanli; Chen Zhuo. Inventor before: Liu Yanli; Chen Zhuo

RJ01  Rejection of invention patent application after publication 
Application publication date: 20150325 
