CN110349099A - Complex scene video shadow detection and removal method - Google Patents

Complex scene video shadow detection and removal method

Info

Publication number
CN110349099A
Authority
CN
China
Prior art keywords
pixel
shadow
confidence
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910523329.9A
Other languages
Chinese (zh)
Other versions
CN110349099B (en)
Inventor
肖春霞 (Xiao Chunxia)
吴文君 (Wu Wenjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201910523329.9A priority Critical patent/CN110349099B/en
Publication of CN110349099A publication Critical patent/CN110349099A/en
Application granted granted Critical
Publication of CN110349099B publication Critical patent/CN110349099B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/70
    • G06T5/94
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Abstract

The invention discloses a video shadow detection and removal method for complex scenes based on depth information. First, the normal of each pixel is estimated from the depth information of the image and the point-cloud positions. By comparing the feature similarity between each pixel and the pixels in its spatio-temporal local neighborhood within the video stream, a shadow confidence value is estimated for each pixel, and the shadow confidence is further optimized with a Laplacian operator to obtain the final shadow detection result. Finally, an illumination-restoration optimization equation built on the video stream is constructed from the detection result to obtain the final shadow removal result. The invention has the following advantages: texture filtering effectively reduces the interference of texture on shadow detection; optimizing the initial shadow confidence with a Laplacian operator yields a more complete shadow detection result; and removing shadows with chromaticity constraints and the correlation between adjacent frames effectively guarantees chromaticity invariance and inter-frame continuity of the result.

Description

Complex scene video shadow detection and removal method
Technical field
The invention belongs to the technical field of video processing, and in particular relates to a complex scene video shadow detection and removal method.
Background technique
Shadows are a common natural phenomenon in daily life. They provide important cues for understanding a visual scene, such as the geometry of the scene and its illumination environment. This information plays an important role in applications such as illumination analysis, relighting, and augmented reality. Effectively detecting and removing shadows is therefore an important topic in computer vision. However, automatic shadow detection and removal is a very difficult task: it is influenced not only by local texture and material information but also requires global structural information and illumination information of the scene. Most existing shadow detection and removal algorithms perform detection and classification using local chromaticity and gradient cues without considering global structural information, so such algorithms usually cannot effectively handle complicated shadows or shadows in complex scenes.
Shadow processing in complex scenes means automatically detecting and removing shadows in a complex environment using both global and local information, while the removal result should preserve the light-dark gradient information of the scene to prevent visual distortion. Two factors make shadow removal in complex scenes difficult. First, complex scenes contain rich material textures, and shadows are scattered rather than concentrated, which complicates detection; even with interactive prior knowledge, complicated shadow scenes increase the annotation burden and make efficient batch processing difficult. Second, because annotation of complex scene shadow images is difficult, corresponding datasets are lacking, so it is hard to remove shadows in large scenes with deep learning methods. To address these problems, this invention proposes an automatic complex scene shadow detection and removal algorithm based on image depth information. The algorithm needs neither user interaction nor captured depth data: using image depth information estimated by existing depth-estimation algorithms, it can detect and remove shadows in complex scenes.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a complex scene video shadow detection and removal method based on depth information.
The technical scheme of the present invention is a complex scene video shadow detection and removal method comprising the following steps:
Step 1, for the input video stream V, obtain its depth information;
Step 2, filter each input video frame I with a texture filtering operator, reducing the effect of texture while retaining the shadow information in the video frame;
Step 3, for each filtered video frame T_i, select its adjacent associated video stream, compute the initial shadow confidence and brightness confidence of each pixel in the video frame, and optimize the shadow confidence of each frame to obtain the final video shadow detection result;
Step 4, using the total variation and intrinsic variation of the shadow confidence and brightness confidence, further compute a shadow boundary confidence;
Step 5, after obtaining the shadow detection result of each frame, decompose the current frame image I into a shadow-free image F and a shadow factor β with the shadow image model β = I/F, and construct a shadow removal optimization equation to constrain and optimize each frame;
Step 6, iteratively solve the shadow removal optimization equation to obtain the final video shadow removal result F and shadow factor β.
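The six steps above compose into a single per-frame pipeline. Below is a minimal runnable skeleton of that composition; every helper is a hypothetical stand-in, drastically simplified, and none of these simplifications are the patent's actual operators:

```python
import numpy as np

def texture_filter(frame):
    # Step 2 stand-in: a real system would apply a texture-smoothing filter here.
    return frame

def shadow_confidence(frame):
    # Steps 3-4 stand-in: pixels darker than the frame mean get high confidence.
    v = frame.mean(axis=-1)
    return np.clip(1.0 - v / (v.mean() + 1e-6), 0.0, 1.0)

def remove_shadows(frame, conf):
    # Steps 5-6 stand-in: with beta = I / F, F = I / beta; beta shrinks
    # (darkens) where shadow confidence is high.
    beta = 1.0 - 0.5 * conf[..., None]
    return np.clip(frame / np.maximum(beta, 1e-6), 0.0, 1.0)

def pipeline(frames):
    out = []
    for frame in frames:
        filtered = texture_filter(frame)          # step 2
        conf = shadow_confidence(filtered)        # steps 3-4 (collapsed)
        out.append(remove_shadows(frame, conf))   # steps 5-6
    return out
```

With the patent's real operators substituted for the stand-ins, the same per-frame composition yields the full method.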
Further, the specific implementation of step 3 includes the following sub-steps:
Step 3.1, using the depth information of each video frame together with the camera parameters, perform point-cloud estimation; after obtaining the point cloud of each pixel, build a k-d tree from the point-cloud information, find for each pixel's point its most similar points, and use these similar points to compute the normal of the local surface patch around the pixel;
Step 3.2, for each filtered video frame T_i, use Gaussian similarity to compute, for each pixel p and each point q ∈ R_p in its spatial neighborhood, the chromaticity similarity, spatial distance similarity, and normal similarity, then multiply the three similarities to obtain the final feature similarity α_pq, where R_p is the spatial neighborhood of pixel p;
Step 3.3, using the similarities between pixels, compare each pixel p with the weighted average image intensity of all neighboring pixels q ∈ R_p in its spatial neighborhood to estimate the shadow confidence and brightness confidence of each pixel;
where the weighted average intensity of the neighborhood of p is m(p) = (1/|R_p|) Σ_{q∈R_p} α_pq · I_q, I_p and I_q denote the intensities of pixels p and q, σ is an adjustable parameter, and |R_p| is the number of pixels in the neighborhood R_p;
Step 3.4, in each video frame, use the Laplacian operator, combined with the initial shadow confidence and brightness confidence results computed in step 3.3, to construct an optimization equation and obtain the final shadow detection result S:
where the first two terms are data constraints and the third is a smoothness term; p_k denotes the k-th pixel, N is the number of pixels in the image, S_k is the optimized shadow confidence of the k-th pixel, ω_k is the local window around the k-th pixel, s_i and s_j are the optimized shadow confidences of two pixels i and j in window ω_k used for smoothing within the window, and w_ij is the matting Laplacian value of points i and j in the neighborhood.
Further, the formula for computing the shadow boundary confidence in step 4 is:
where D_p and D̂_p are respectively the total variation and intrinsic variation of the confidence map at pixel p, and ε is a constant;
where R(p) is a rectangular neighborhood centered at p, ω denotes a weight function defined by Gaussian filtering, the two confidence terms are the shadow confidence and brightness confidence of point q, and ∂ is the partial derivative operator, denoting the partial derivative of the shadow confidence or brightness confidence in the x or y direction.
Further, the shadow removal optimization equation in step 5 is:
E(F, β) = E_data(F, β) + λ₁E_smooth(F, β) + λ₂E_chromaticity(β) + λ₃E_const(β)
where the data term E_data = ω_iw Σ_{c∈{R,G,B}} ω_c · |I_c − F_c · β_c|² constrains the data of the current frame, applying the shadow model to the data I_c, F_c, β_c in each color channel, with {ω_R, ω_G, ω_B} being the per-channel constraint weights; the pixel-intensity weight is ω_iw = 1 − ω_intensity(1 − |I(x)|), where ω_intensity is an adjustable parameter and I(x) is the intensity of pixel x;
the smoothness term E_smooth = E_SF + γE_SM, where γ is a balance factor; E_SF imposes a smoothness constraint on the shadow-free image F based on the assumption that, in the same spatial plane, pixels with similar chromaticity, normal, and three-dimensional point-cloud position should have similar pixel (color) values after shadow removal,
where the first term is the smoothness constraint between neighboring pixels of the current frame and the second term constrains shadow pixels using feature-similar non-shadow pixels in the video stream; R_s is the set of shadow pixels obtained by shadow detection in the current frame, the second sum runs over all non-shadow pixels of frame t within the spatio-temporal local neighborhood, and T is the total number of frames in the current video stream;
E_SM uses the estimated shadow boundary confidence C_bound to impose a smoothness constraint on the shadow factor β:
E_chromaticity(F) = ||c(p) − c_F(p)||², using the assumption that the chromaticity of an image is unaffected by illumination changes, constrains the original video frame and the shadow-removed video frame to have consistent chromaticity, where c is the chromaticity of the current frame I and c_F is the chromaticity of the shadow removal result F;
E_const uses the assumption that pixel colors in the non-shadow region should remain unchanged after shadow removal, i.e., the shadow factor approaches 1, to constrain the non-shadow region N_b; the non-shadow pixel region is the set of pixels outside all shadow points and their neighboring pixels, and shadow pixels are points whose confidence exceeds 0.1.
Further, step 6 is solved by iterative optimization: let the initial value of F be I and the initial value of β be the shadow confidence S, and compute the final result by iterative optimization with a maximum of 1000 iterations.
Further, in step 1, for live-captured video, the depth information of the video is acquired in real time with a Kinect V2; for existing video, the depth information of each frame is estimated with a deep learning method.
The present invention has the following advantages: 1. Texture filtering is applied to each frame as preprocessing before estimating the shadows in a video frame, which effectively reduces the interference of texture information on shadow detection. 2. The initial shadow confidence is optimized with a Laplacian operator, which yields a more complete shadow detection result while preserving the relative intensity of the shadow and the gradient information at shadow boundaries, facilitating the removal of complicated shadows and shadow edges. 3. The shadow removal optimization algorithm constrains chromaticity using the principle that the chromaticity of the image remains unchanged before and after shadow removal, guaranteeing chromaticity invariance of the result. 4. The shadow removal algorithm makes full use of the correlation between adjacent frames, effectively guaranteeing inter-frame continuity of the result while removing shadows.
Detailed description of the invention
Fig. 1 is the video shadow removal flowchart of the present invention.
Fig. 2 is the flowchart of the video shadow detection of the present invention.
Fig. 3 shows the effect of the present invention on a video example, where (a) is the input video stream, (b) is the corresponding depth information, (c) is the shadow confidence estimation result of the video stream, (d) is the shadow detection result (confidence) after optimization, and (e) is the video stream after shadow removal.
Specific embodiment
The present invention is described in further detail below with reference to embodiments and the accompanying drawings, but embodiments of the present invention are not limited thereto.
Referring to Fig. 1, the flowchart of the present invention, the video shadow removal method includes the following steps:
Step 1, for the input video stream V, first obtain its depth information: for live-captured video, acquire the depth information in real time with a Kinect V2; for existing video, estimate the depth information of each frame with a deep learning method. Fig. 3 (a) and (b) show an input video frame and the corresponding depth map of the example, respectively.
Step 2, filter each frame I of the video with a texture filtering operator, reducing the influence of small-scale texture on shadow detection while retaining the original shadow information.
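The texture filtering operator of step 2 is not named in the patent; any structure-preserving texture-smoothing filter fits here. The following sketch uses a plain Gaussian structure/texture split as a crude stand-in: the frame is split into a smooth base layer and a fine-detail residual, and the residual is damped so small-scale texture fades while large-scale shading survives in the base layer.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def texture_filter(img, sigma=2.0, strength=0.8):
    # Split into base + residual, then damp the residual.  This is only a
    # stand-in: the patent does not specify which texture filter it uses.
    base = gaussian_filter(img, sigma=(sigma, sigma, 0))
    return base + (1.0 - strength) * (img - base)
```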
Step 3, for each filtered video frame T_i, select its corresponding associated video stream {T_{i-2}, T_{i-1}, T_i, T_{i+1}, T_{i+2}}, compute the initial shadow confidence and brightness confidence of each pixel in the video frame, and optimize the shadow confidence of each frame to obtain the final video shadow detection result. Step 3 comprises the following sub-steps:
Step 3.1, using the depth information of each video frame together with the camera parameters, perform point-cloud estimation; after obtaining the point cloud of each pixel, build a k-d tree from the point-cloud information, find for each pixel's point its 300 most similar points, and use these similar points to compute the normal of the local surface patch around the pixel.
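Step 3.1 can be sketched directly with SciPy: back-project the depth map into a point cloud with pinhole intrinsics, query a k-d tree for each point's nearest neighbours, and take the smallest-eigenvalue eigenvector of the local covariance as the surface normal. The intrinsics and the reduced neighbour count (30 rather than the patent's 300) are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def backproject(depth, fx, fy, cx, cy):
    # Depth map -> 3-D point cloud via a pinhole camera model.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def estimate_normals(points, k=30):
    # Per-point normal: PCA over the k nearest neighbours from a k-d tree.
    # The eigenvector of the local covariance with the smallest eigenvalue
    # is the surface normal.
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points, dtype=float)
    for i, nb in enumerate(idx):
        q = points[nb] - points[nb].mean(axis=0)
        _, vecs = np.linalg.eigh(q.T @ q)  # ascending eigenvalues
        normals[i] = vecs[:, 0]
    return normals
```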
Step 3.2, as shown in the shadow detection flowchart of Fig. 2, for each filtered video frame T_i, find in its associated video stream the spatio-temporal local neighborhood pixels q ∈ R_p of each pixel p, compute the chromaticity similarity, spatial distance similarity, and normal similarity between pixel p and all neighboring points q, then multiply the three similarities to obtain the final feature similarity α_pq, where R_p is the spatio-temporal neighborhood of pixel p, typically a 50 × 50 × 5 spatio-temporal pixel block.
Step 3.3, using the similarities between pixels, compare each pixel p with the weighted average image intensity m(p) of all neighboring pixels q ∈ R_p in its spatial neighborhood to estimate the shadow confidence and brightness confidence of each pixel.
Fig. 3 (c) shows the initial shadow confidence estimation result for each frame of the example.
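The comparison in step 3.3 can be made concrete. In this sketch, m(p) is the similarity-weighted mean intensity of the neighborhood; the mapping from the darkness of p relative to m(p) onto a [0, 1] confidence (a logistic step of width σ) is an assumption, since the patent's exact confidence formula is not reproduced in the text.

```python
import numpy as np

def confidences(I_p, I_q, alpha_pq, sigma=0.2):
    # m(p): similarity-weighted mean intensity of the neighbourhood.
    m_p = np.sum(alpha_pq * I_q) / len(I_q)
    # Assumed mapping: a pixel darker than its neighbourhood mean is
    # likely shadowed; brightness confidence is the complement.
    d = (m_p - I_p) / sigma
    shadow = 1.0 / (1.0 + np.exp(-d))
    return shadow, 1.0 - shadow
```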
Step 3.4, obtain the final optimized shadow confidence S with the optimization equation, where the first two terms are data constraints and the third is a smoothness term; p_k denotes the k-th pixel, N is the number of pixels in the image, S_k is the optimized shadow confidence of the k-th pixel, ω_k is the local window around the k-th pixel, s_i and s_j are the optimized shadow confidences of two pixels i and j in window ω_k used for smoothing within the window, and w_ij is the matting Laplacian value of points i and j in the neighborhood. In each video frame, the Laplacian operator, combined with the initial shadow confidence and brightness confidence results computed in step 3.3, constrains the shadow detection result, and the matting Laplacian performs gradient-aware smoothing to obtain the optimized shadow detection result. Fig. 3 (d) shows the final optimized shadow confidence of the example.
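The flavour of step 3.4's refinement can be sketched under heavy simplification: a plain 4-neighbour grid Laplacian L stands in for the matting Laplacian, and a single data term with weight λ replaces the patent's two data constraints, so the refined confidence solves the sparse linear system (λI + L)s = λs₀.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def refine_confidence(s0, lam=0.1):
    # Solve (lam*I + L) s = lam * s0 on a 4-neighbour grid graph.
    # L is a simple grid Laplacian, a stand-in for the matting Laplacian.
    h, w = s0.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols = [], []
    for di, dj in ((0, 1), (1, 0)):        # horizontal and vertical edges
        a = idx[: h - di, : w - dj].ravel()
        b = idx[di:, dj:].ravel()
        rows += [a, b]
        cols += [b, a]
    rows = np.concatenate(rows)
    cols = np.concatenate(cols)
    W = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W
    A = (lam * sp.identity(n) + L).tocsr()
    return spsolve(A, lam * s0.ravel()).reshape(h, w)
```

Because the Laplacian annihilates constants, a constant confidence map passes through unchanged, while non-constant modes are shrunk toward their neighbourhood average.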
Step 4, using the total variation D and intrinsic variation D̂ of the shadow confidence and brightness confidence, further compute the shadow boundary confidence:
where D and D̂ are respectively the total variation and intrinsic variation, and ε is typically set to 0.001 in experiments to prevent the denominator from becoming 0;
where R(p) is a 7 × 7 rectangular neighborhood centered at p, ω denotes a weight function defined by Gaussian filtering, the two confidence terms are the shadow confidence and brightness confidence of point q, and ∂ denotes the partial derivative of the shadow confidence or brightness confidence in the x or y direction; the total variation D is obtained by Gaussian filtering the absolute value of the partial derivative, and the intrinsic variation D̂ by taking the absolute value after Gaussian filtering the partial derivative.
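Step 4's two variations follow directly from the description: the total variation Gaussian-filters the absolute partial derivative, while the intrinsic variation takes the absolute value after Gaussian filtering. Their ratio is close to 1 on coherent shadow boundaries, where gradient signs agree, and small in oscillating texture, where they cancel. How the x and y terms are combined into one score is an assumption here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def boundary_confidence(conf, sigma=1.5, eps=1e-3):
    # D    = gaussian(|grad|): total variation
    # Dhat = |gaussian(grad)|: intrinsic variation
    # Averaging the per-axis ratios is an assumed combination.
    c = np.zeros_like(conf, dtype=float)
    for axis in (0, 1):
        g = np.gradient(conf, axis=axis)
        total = gaussian_filter(np.abs(g), sigma)
        intrinsic = np.abs(gaussian_filter(g, sigma))
        c += intrinsic / (total + eps)
    return c / 2.0
```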
Step 5, after obtaining the shadow detection result of each frame, decompose the current frame image I into a shadow-free image F and a shadow factor β with the shadow image model β = I/F; both are unknowns. The present invention constructs the following optimization equation, constraining and optimizing each frame based on the video stream to obtain the final video shadow removal result F and shadow factor β.
E(F, β) = E_data(F, β) + λ₁E_smooth(F, β) + λ₂E_chromaticity(β) + λ₃E_const(β)
In experiments, the parameters λ₁, λ₂, and λ₃ are typically set to 1, 0.5, and 1, respectively.
The data term E_data = ω_iw Σ_{c∈{R,G,B}} ω_c · |I_c − F_c · β_c|² constrains the data of the current frame, applying the shadow model to the data I_c, F_c, β_c in each color channel, where ω_c is the constraint weight of the RGB color channels, with per-channel weights {ω_R, ω_G, ω_B} = {0.299, 0.587, 0.114}; ω_iw is the pixel-intensity weight, ω_iw = 1 − ω_intensity(1 − |I(x)|), where ω_intensity is an adjustable parameter and I(x) is the intensity of pixel x.
The smoothness term E_smooth = E_SF + γE_SM, where γ is a balance factor, typically set to 1 in experiments. Based on the assumption that, in the same spatial plane, pixels with similar chromaticity, normal, and three-dimensional point-cloud position should have similar pixel (color) values after shadow removal, E_SF imposes a smoothness constraint on the shadow-free image F:
where the first term is the smoothness constraint between neighboring pixels of the current frame and the second term constrains shadow pixels using feature-similar non-shadow pixels in the video stream; R_s is the set of shadow pixels obtained by shadow detection in the current frame, the second sum runs over all non-shadow pixels of frame t within the spatio-temporal local neighborhood, and T is the total number of frames in the current video stream.
E_SM uses the estimated shadow boundary confidence C_bound to impose a smoothness constraint on the shadow factor β.
E_chromaticity(F) = ||c(p) − c_F(p)||², using the assumption that the chromaticity of an image is unaffected by illumination changes, constrains the original video frame and the shadow-removed video frame to have consistent chromaticity, where c is the chromaticity of the current frame I and c_F is the chromaticity of the shadow removal result F.
E_const uses the assumption that pixel colors in the non-shadow region should remain unchanged after shadow removal, i.e., the shadow factor approaches 1, to constrain the non-shadow region N_b; shadow pixels are points whose confidence exceeds 0.1, and the non-shadow pixel region is the set of pixels outside all shadow points and their neighboring pixels.
Step 6, since the equation contains two unknowns, the shadow-free image F and the shadow factor β, the algorithm solves it by iterative optimization: let the initial value of F be I and the initial value of β be the shadow confidence S, and compute the final result by iterative optimization with a maximum of 1000 iterations. Fig. 3 (e) shows the shadow removal result of the example.
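The alternating character of step 6 can be sketched as follows, with drastic simplifications: closed-form updates replace the full energy minimisation, and the patent's initialisation of β from the confidence S is mapped into a positive range so β stays a valid darkening factor. All parameter names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_shadow(I, S, n_iters=50, beta_floor=0.2):
    # Alternating update sketch for the model beta = I / F:
    #   F    <- I / beta          (shadow-free estimate from current beta)
    #   beta <- smooth(I / F)     (re-fit the shadow factor, smoothed)
    # The patent initialises F = I and beta = S; here S is mapped into
    # [beta_floor, 1] so beta is always a positive darkening factor.
    beta = np.clip(1.0 - S * (1.0 - beta_floor), beta_floor, 1.0)
    F = I.copy()
    for _ in range(n_iters):
        F = I / beta
        beta = np.clip(gaussian_filter(I / np.maximum(F, 1e-6), 1.0),
                       beta_floor, 1.0)
    return F, beta
```

Since β ≤ 1 throughout, the recovered F is everywhere at least as bright as the input, which matches the intent of shadow removal.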
The specific embodiments described herein are merely illustrative of the spirit of the present invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute them in a similar manner, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (6)

1. A complex scene video shadow detection and removal method, characterized by comprising the following steps:
Step 1, for the input video stream V, obtain its depth information;
Step 2, filter each input video frame I with a texture filtering operator, reducing the effect of texture while retaining the shadow information in the video frame;
Step 3, for each filtered video frame T_i, select its adjacent associated video stream, compute the initial shadow confidence and brightness confidence of each pixel in the video frame, and optimize the shadow confidence of each frame to obtain the final video shadow detection result;
Step 4, using the total variation and intrinsic variation of the shadow confidence and brightness confidence, further compute a shadow boundary confidence;
Step 5, after obtaining the shadow detection result of each frame, decompose the current frame image I into a shadow-free image F and a shadow factor β with the shadow image model β = I/F, and construct a shadow removal optimization equation to constrain and optimize each frame;
Step 6, iteratively solve the shadow removal optimization equation to obtain the final video shadow removal result F and shadow factor β.
2. The complex scene video shadow detection and removal method according to claim 1, characterized in that the specific implementation of step 3 includes the following sub-steps:
Step 3.1, using the depth information of each video frame together with the camera parameters, perform point-cloud estimation; after obtaining the point cloud of each pixel, build a k-d tree from the point-cloud information, find for each pixel's point its most similar points, and use these similar points to compute the normal of the local surface patch around the pixel;
Step 3.2, for each filtered video frame T_i, use Gaussian similarity to compute, for each pixel p and each point q ∈ R_p in its spatial neighborhood, the chromaticity similarity, spatial distance similarity, and normal similarity, then multiply the three similarities to obtain the final feature similarity α_pq, where R_p is the spatial neighborhood of pixel p;
Step 3.3, using the similarities between pixels, compare each pixel p with the weighted average image intensity of all neighboring pixels q ∈ R_p in its spatial neighborhood to estimate the shadow confidence and brightness confidence of each pixel;
where the weighted average intensity of the neighborhood of p is m(p) = (1/|R_p|) Σ_{q∈R_p} α_pq · I_q, I_p and I_q denote the intensities of pixels p and q, σ is an adjustable parameter, and |R_p| is the number of pixels in the neighborhood R_p;
Step 3.4, in each video frame, use the Laplacian operator, combined with the initial shadow confidence and brightness confidence computed in step 3.3, to construct an optimization equation and obtain the final shadow detection result S:
where the first two terms are data constraints and the third is a smoothness term; p_k denotes the k-th pixel, N is the number of pixels in the image, S_k is the optimized shadow confidence of the k-th pixel, ω_k is the local window around the k-th pixel, s_i and s_j are the optimized shadow confidences of two pixels i and j in window ω_k used for smoothing within the window, and w_ij is the matting Laplacian value of points i and j in the neighborhood.
3. The complex scene video shadow detection and removal method according to claim 2, characterized in that the formula for computing the shadow boundary confidence in step 4 is:
where D_p and D̂_p are respectively the total variation and intrinsic variation of the confidence map at pixel p, and ε is a constant;
where R(p) is a rectangular neighborhood centered at p, ω denotes a weight function defined by Gaussian filtering, the two confidence terms are the shadow confidence and brightness confidence of point q, and ∂ is the partial derivative operator, denoting the partial derivative of the shadow confidence or brightness confidence in the x or y direction.
4. The complex scene video shadow detection and removal method according to claim 3, characterized in that the shadow removal optimization equation in step 5 is:
E(F, β) = E_data(F, β) + λ₁E_smooth(F, β) + λ₂E_chromaticity(β) + λ₃E_const(β)
where the data term E_data = ω_iw Σ_{c∈{R,G,B}} ω_c · |I_c − F_c · β_c|² constrains the data of the current frame, applying the shadow model to the data I_c, F_c, β_c in each color channel, with {ω_R, ω_G, ω_B} being the per-channel constraint weights; the pixel-intensity weight is ω_iw = 1 − ω_intensity(1 − |I(x)|), where ω_intensity is an adjustable parameter and I(x) is the intensity of pixel x;
the smoothness term E_smooth = E_SF + γE_SM, where γ is a balance factor; E_SF imposes a smoothness constraint on the shadow-free image F based on the assumption that, in the same spatial plane, pixels with similar chromaticity, normal, and three-dimensional point-cloud position should have similar pixel (color) values after shadow removal,
where the first term is the smoothness constraint between neighboring pixels of the current frame and the second term constrains shadow pixels using feature-similar non-shadow pixels in the video stream; R_s is the set of shadow pixels obtained by shadow detection in the current frame, the second sum runs over all non-shadow pixels of frame t within the spatio-temporal local neighborhood, and T is the total number of frames in the current video stream;
E_SM uses the estimated shadow boundary confidence C_bound to impose a smoothness constraint on the shadow factor β:
E_chromaticity(F) = ||c(p) − c_F(p)||², using the assumption that the chromaticity of an image is unaffected by illumination changes, constrains the original video frame and the shadow-removed video frame to have consistent chromaticity, where c is the chromaticity of the current frame I and c_F is the chromaticity of the shadow removal result F;
E_const uses the assumption that pixel colors in the non-shadow region should remain unchanged after shadow removal, i.e., the shadow factor approaches 1, to constrain the non-shadow region N_b; the non-shadow pixel region is the set of pixels outside all shadow points and their neighboring pixels, and shadow pixels are points whose confidence exceeds 0.1.
5. The complex scene video shadow detection and removal method according to claim 4, characterized in that step 6 is solved by iterative optimization: the initial value of F is I and the initial value of β is the shadow confidence S, and the final result is computed by iterative optimization with a maximum of 1000 iterations.
6. The complex scene video shadow detection and removal method according to claim 1, characterized in that in step 1, for live-captured video, the depth information of the video is acquired in real time with a Kinect V2; for existing video, the depth information of each frame is estimated with a deep learning method.
CN201910523329.9A 2019-06-17 2019-06-17 Complex scene video shadow detection and elimination method Active CN110349099B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910523329.9A CN110349099B (en) 2019-06-17 2019-06-17 Complex scene video shadow detection and elimination method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910523329.9A CN110349099B (en) 2019-06-17 2019-06-17 Complex scene video shadow detection and elimination method

Publications (2)

Publication Number Publication Date
CN110349099A true CN110349099A (en) 2019-10-18
CN110349099B CN110349099B (en) 2021-04-02

Family

ID=68182147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910523329.9A Active CN110349099B (en) 2019-06-17 2019-06-17 Complex scene video shadow detection and elimination method

Country Status (1)

Country Link
CN (1) CN110349099B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419196A (en) * 2020-11-26 2021-02-26 武汉大学 Unmanned aerial vehicle remote sensing image shadow removing method based on deep learning
CN112598592A (en) * 2020-12-24 2021-04-02 广东博智林机器人有限公司 Image shadow removing method and device, electronic equipment and storage medium
CN113361360A (en) * 2021-05-31 2021-09-07 山东大学 Multi-person tracking method and system based on deep learning
CN113378775A (en) * 2021-06-29 2021-09-10 武汉大学 Video shadow detection and elimination method based on deep learning
CN114782616A (en) * 2022-06-20 2022-07-22 北京飞渡科技有限公司 Model processing method, model processing device, storage medium and electronic equipment
CN112686936B (en) * 2020-12-18 2023-08-04 北京百度网讯科技有限公司 Image depth completion method, apparatus, computer device, medium, and program product
CN116704316A (en) * 2023-08-03 2023-09-05 四川金信石信息技术有限公司 Substation oil leakage detection method, system and medium based on shadow image reconstruction

Citations (6)

Publication number Priority date Publication date Assignee Title
US9430715B1 (en) * 2015-05-01 2016-08-30 Adobe Systems Incorporated Identifying and modifying cast shadows in an image
CN106339995A (en) * 2016-08-30 2017-01-18 University of Electronic Science and Technology of China Vehicle shadow elimination method based on spatio-temporal multiple features
CN107038690A (en) * 2017-03-27 2017-08-11 Xiangtan University Moving shadow removal method based on multi-feature fusion
CN107203975A (en) * 2017-04-18 2017-09-26 Nanjing University of Aeronautics and Astronautics Shadow removal method based on the YCbCr color space and topology cutting
CN107808366A (en) * 2017-10-21 2018-03-16 Tianjin University Adaptive light-transfer single-image shadow removal method based on block matching
CN109064411A (en) * 2018-06-13 2018-12-21 Chang'an University Pavement image shadow removal method based on illumination compensation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yao Xiao et al., "Shadow Removal from Single RGB-D Images," 2014 IEEE Conference on Computer Vision and Pattern Recognition. *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419196A (en) * 2020-11-26 2021-02-26 Wuhan University Unmanned aerial vehicle remote sensing image shadow removal method based on deep learning
CN112419196B (en) * 2020-11-26 2022-04-26 Wuhan University Unmanned aerial vehicle remote sensing image shadow removal method based on deep learning
CN112686936B (en) * 2020-12-18 2023-08-04 Beijing Baidu Netcom Science and Technology Co., Ltd. Image depth completion method, apparatus, computer device, medium, and program product
CN112598592A (en) * 2020-12-24 2021-04-02 Guangdong Bozhilin Robot Co., Ltd. Image shadow removal method and device, electronic device, and storage medium
CN113361360A (en) * 2021-05-31 2021-09-07 Shandong University Multi-person tracking method and system based on deep learning
CN113361360B (en) * 2021-05-31 2023-07-25 Shandong University Multi-person tracking method and system based on deep learning
CN113378775A (en) * 2021-06-29 2021-09-10 Wuhan University Video shadow detection and elimination method based on deep learning
CN114782616A (en) * 2022-06-20 2022-07-22 Beijing Feidu Technology Co., Ltd. Model processing method and device, storage medium, and electronic device
CN116704316A (en) * 2023-08-03 2023-09-05 Sichuan Jinxinshi Information Technology Co., Ltd. Substation oil leakage detection method, system, and medium based on shadow image reconstruction

Also Published As

Publication number Publication date
CN110349099B (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN110349099A A complex scene video shadow detection and elimination method
Zhao et al. Infrared image enhancement through saliency feature analysis based on multi-scale decomposition
Sun et al. A novel approach for edge detection based on the theory of universal gravity
Wang et al. Haze removal based on multiple scattering model with superpixel algorithm
CN108377374B (en) Method and system for generating depth information related to an image
Vosters et al. Background subtraction under sudden illumination changes
Muthukumar et al. Analysis of image inpainting techniques with exemplar, poisson, successive elimination and 8 pixel neighborhood methods
Kim et al. Low-light image enhancement based on maximal diffusion values
CN111161222A (en) Printing roller defect detection method based on visual saliency
CN110268442B (en) Computer-implemented method of detecting a foreign object on a background object in an image, device for detecting a foreign object on a background object in an image, and computer program product
CN108510496 A blur detection method based on SVD decomposition in the image DCT domain
CN108038458B (en) Method for automatically acquiring outdoor scene text in video based on characteristic abstract diagram
Chen et al. Visual depth guided image rain streaks removal via sparse coding
Wang et al. Robust image chroma-keying: a quadmap approach based on global sampling and local affinity
Yu et al. Content-adaptive rain and snow removal algorithms for single image
Wang Image matting with transductive inference
Parzych et al. Automatic people density maps generation with use of movement detection analysis
Tezuka et al. A precise and stable foreground segmentation using fine-to-coarse approach in transform domain
Zhang et al. Video super-resolution with registration-reliability regulation and adaptive total variation
Shen et al. Re-texturing by intrinsic video
Hsia et al. Efficient light balancing techniques for text images in video presentation systems
Panchal et al. A comprehensive survey on shadow detection techniques
Veeravasarapu et al. Fast and fully automated video colorization
CN109949400 Shadow estimation and reconstruction method for AR virtual soft-furnishing synthesis
Yan et al. Re-texturing by intrinsic video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant