CN101399968B - Shadow and high brightness detection method in single color monitoring camera - Google Patents
- Publication number
- CN101399968B CN101399968B CN2007101517674A CN200710151767A CN101399968B CN 101399968 B CN101399968 B CN 101399968B CN 2007101517674 A CN2007101517674 A CN 2007101517674A CN 200710151767 A CN200710151767 A CN 200710151767A CN 101399968 B CN101399968 B CN 101399968B
- Authority
- CN
- China
- Prior art keywords
- pixel
- background
- gauss
- zone
- penumbra
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a method for detecting shadow and highlight regions within the foreground area in a surveillance system. The method comprises: capturing a new image; comparing the new image with a background model and updating the background model with the new image; obtaining a difference image between the new image and the background model; extracting the penumbra region by estimating the edge sharpness of the difference image and expanding the background region; and extracting the umbra region based on the penumbra extraction result.
Description
Technical field
The present invention relates to shadow and highlight detection, and more particularly to a method for detecting shadows and highlights with a single-color (monochrome) surveillance camera.
Background art
For security purposes, surveillance cameras are widely installed in public places, but additional manpower is required to check whether sensitive events have occurred. Therefore, a hot topic in surveillance is research on smart cameras that can detect, track, and analyze the actions of people and other objects.
The detection and tracking of moving objects is a key issue in many applications that process image sequences. A main challenge in these applications is identifying the shadows that objects cast and that move with them through the scene. Because shadow pixels are mistakenly classified as foreground, shadows cause serious problems when segmenting and extracting moving objects: they can cause objects to merge, distort object shapes, and even cause objects to be lost (when a shadow falls on another object).
The difficulty of shadow detection arises because shadows and objects share two important visual features. First, shadow pixels usually differ significantly from the background, so they are detected as foreground; second, a shadow has the same motion as the object casting it. For this reason, shadow identification is essential for both still images and image sequences (video), and has become an active research field.
Generally, there are three kinds of methods for distinguishing shadow regions from object regions.
The first method assumes that the light source, the object shape, and the ground plane are all known to the system. Given a region containing a combination of shadow and object, the method can then guess which part is the object and which part is the cast shadow. For example, in U.S. Patent Application Publication No. 20070110309, entitled "Shadow Detection in Images", and U.S. Patent No. 5,592,567, entitled "Method for detecting and separating the shadow of moving objects in a sequence of digital images", if the shadow region is to the left or right of the object, it is always very narrow in the vertical direction.
The second method is based on image invariants such as color and texture (see, respectively, U.S. Patent Application Publication No. 20060290780, entitled "Method for Modeling Cast Shadows in Videos", and U.S. Patent No. 6,469,734, entitled "Video Safety Detector with Shadow Elimination"). Color invariance means that in a shadow region the brightness decreases, so the intensity of the corresponding pixels decreases, but the hue and saturation change relatively little; this method therefore tests hue and saturation information to distinguish shadows from objects. Texture invariance means that even though intensity decreases in a shadow region, its edges do not move, whereas in a foreground object both edges and intensity change.
The third method is based on border width. A shadow region comprises an umbra region and a penumbra region. The penumbra is a narrow band between the umbra and the background, in which the image changes gradually from background to umbra. Therefore, the edges of a shadow region are wide and smooth, while the edges of an object are narrow and sharp. The technical report entitled "Detecting Moving Shadows: Formulation, Algorithms and Evaluation" by Andrea Prati and Ivana Mikic (http://cvrr.ucsd.edu/aton/publications/pdfpapers/TRshadow.pdf) provides a method of estimating edge width, and this feature is used to distinguish object edges from shadow edges.
Summary of the invention
An aspect of the present invention is provided to solve the above-mentioned and/or other problems and disadvantages.
According to an aspect of the present invention, shadow and highlight regions can be identified from the foreground area in a camera surveillance system without restrictions such as a clean background, a textured background, or unknown assumptions about the light source.
According to an aspect of the present invention, there is provided a method of detecting shadow and highlight regions from the foreground area in a surveillance system, the method comprising: capturing a new image; comparing the new image with a background model, and updating the background model with the new image; obtaining a difference image between the new image and the background model; extracting the penumbra region by estimating the sharpness of edges in the difference image and expanding the background region; and extracting the umbra region based on the penumbra extraction result.
According to an aspect of the present invention, the background model is a set of pixel distributions, and the distribution of each pixel is described by a Gaussian mixture model: ∑ w_i · N(u_i, σ_i), where w_i is the weight of each single Gaussian, and N(u, σ) is a Gaussian distribution with center u and variance σ.
According to an aspect of the present invention, the step of updating the background model with the new image may comprise: determining whether a pixel in the new image belongs to one of the Gaussians in the mixture model; if the pixel belongs to that Gaussian, updating the Gaussian's center and variance, increasing its weight, and decreasing the weights of the other Gaussians; and if the pixel does not belong to any Gaussian, deleting the Gaussian with the smallest weight w_i and adding a new Gaussian centered at the pixel's color.
According to an aspect of the present invention, the step of obtaining the difference image may comprise: comparing a pixel in the new image with the center of each Gaussian in the background model and finding the nearest background Gaussian; determining whether the distance between the pixel and the center of the nearest background Gaussian is less than that Gaussian's corresponding variance; if the distance is less than the corresponding variance, marking the pixel as background and setting the corresponding difference in the difference image to 0; and if the distance is not less than the corresponding variance, storing the distance in the difference image as the corresponding difference.
According to an aspect of the present invention, the step of extracting the penumbra region may comprise: computing, for each pixel in the new image, the gradient g1 over a small range and the gradient g2 over a large range; and estimating the sharpness of the pixel as g1 divided by g2, or as g1 divided by the sum of g2 and a constant A.
According to an aspect of the present invention, the step of estimating the sharpness of a pixel may further comprise: if g1 < thr_grad, setting the sharpness of the pixel to 0, where thr_grad is a predetermined system parameter.
According to an aspect of the present invention, the operation of expanding the background region in the step of extracting the penumbra region may comprise: finding each background segment on every horizontal scan line and expanding it pixel by pixel in the horizontal direction; and finding each background segment on every vertical scan line and expanding it pixel by pixel in the vertical direction.
According to an aspect of the present invention, the operation of expanding a background segment in the horizontal direction may comprise: if the sharpness of the pixel p immediately to the left of the background segment is less than thr_sharp and diff(p) < thr_diff, marking the pixel as penumbra, and then repeating this operation for the next pixel to the left until the conditions are no longer satisfied; and then performing the rightward expansion of the segment in a similar manner, where diff(p) is the difference between pixel p and the background model, and thr_sharp and thr_diff are two predetermined system parameters.
According to an aspect of the present invention, the operation of expanding a background segment in the vertical direction may comprise: if the sharpness of the pixel p immediately above the background segment is less than thr_sharp and diff(p) < thr_diff, marking the pixel as penumbra, and then repeating this operation for the next pixel above until the conditions are no longer satisfied; and then performing the downward expansion of the segment in a similar manner, where diff(p) is the difference between pixel p and the background model, and thr_sharp and thr_diff are two predetermined system parameters.
According to an aspect of the present invention, the step of extracting the umbra region may comprise: when a region is surrounded by penumbra pixels, marking the whole region as the umbra region; and when a part of the foreground region is surrounded by penumbra pixels, marking that part as the umbra region by applying horizontal and vertical scanning.
According to an aspect of the present invention, the step of marking the part as the umbra region may comprise: if a scan-line segment has a penumbra-object-penumbra pattern and the length of the object part is less than Thr_length, a system parameter, marking the object part as umbra.
Description of drawings
The above and other aspects of the present invention will become clearer and more readily understood from the following description of exemplary embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a block diagram of a shadow detection system according to an exemplary embodiment of the present invention;
Fig. 2 is an exemplary layout showing the penumbra and umbra regions;
Fig. 3 illustrates the method of calculating the difference image;
Fig. 4 is a block diagram showing the structure of the penumbra region extraction unit 200;
Fig. 5 and Fig. 6 show two kinds of edges: Fig. 5 shows a wide and gentle edge, while the edge in Fig. 6 is narrow and sharp;
Fig. 7 shows the filters used to estimate the sharpness of each pixel, one in the horizontal direction and the other in the vertical direction;
Fig. 8 illustrates finding the umbra region from the penumbra extraction result;
Fig. 9 illustrates finding the umbra region from the penumbra extraction result when the umbra is adjacent to a foreground object;
Figure 10 is an outline flowchart of the method of identifying shadow and highlight regions from the foreground area; and
Figure 11 shows two experimental results, one a highlight extraction and the other a shadow extraction.
Embodiment
Certain exemplary embodiments of the present invention will now be described in more detail with reference to the accompanying drawings.
Fig. 1 is a block diagram of a shadow detection system according to an exemplary embodiment of the present invention. The shadow detection system comprises: an image capturing unit 110, which captures a new image and inputs it to a background model unit 120; the background model unit 120, which contains a distribution model of the background and updates it with the new image; a difference image obtaining unit 100, which obtains a difference image by comparing the new image with the background model; a penumbra region extraction unit 200, which extracts the penumbra region; and an umbra region extraction unit 300, which extracts the umbra region.
First, the image capturing unit 110 captures a new image into the system. This image is input to the background model unit 120 and compared with the background model to obtain a difference image.
The background model is a statistical model of the scene's history. In one implementation, the model is a background reference image containing no foreground objects. In another implementation, the model is a set of pixel distributions, where the distribution of each pixel is described by a Gaussian mixture model:

∑ w_i · N(u_i, σ_i)

where w_i is the weight of each single Gaussian, and N(u, σ) is a Gaussian distribution with center u and variance σ.
The background model is updated in the background model unit 120. To update the Gaussian mixture background model, each pixel in the new image (hereafter, a new pixel) is compared with each Gaussian distribution. If the new pixel belongs to a Gaussian, its center, variance, and weight are updated: the center is updated as a weighted average of the old center and the new pixel's color; the variance is updated by averaging the old variance with the distance from the new pixel to the old center; and the weights are updated by increasing this Gaussian's weight and decreasing the weights of the other Gaussians. If the new pixel does not belong to any Gaussian, the old Gaussian with the smallest weight w_i is deleted and a new Gaussian centered at the new pixel's color is added.
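The per-pixel mixture-of-Gaussians update described above can be sketched as follows. This is only an illustrative sketch: the learning rate `alpha`, the 2.5-sigma match test, and the initial variance are assumed values, since the patent does not specify them.

```python
import numpy as np

class PixelGMM:
    """Per-pixel mixture-of-Gaussians background model (sketch).
    `alpha`, the 2.5-sigma match test, and `init_var` are illustrative
    assumptions, not values given in the patent."""

    def __init__(self, n_gauss=3, alpha=0.05, init_var=30.0):
        self.w = np.full(n_gauss, 1.0 / n_gauss)   # weights w_i
        self.u = np.zeros(n_gauss)                  # centers u_i
        self.var = np.full(n_gauss, init_var)       # variances sigma_i
        self.alpha = alpha

    def update(self, x):
        d = np.abs(x - self.u)
        matched = d < 2.5 * np.sqrt(self.var)       # "belongs to" test (assumed)
        if matched.any():
            k = int(np.argmin(np.where(matched, d, np.inf)))
            # center: weighted average of old center and new pixel color
            self.u[k] = (1 - self.alpha) * self.u[k] + self.alpha * x
            # variance: average of old variance and squared distance to old center
            self.var[k] = (1 - self.alpha) * self.var[k] + self.alpha * d[k] ** 2
            # raise this Gaussian's weight, lower the others'
            self.w *= (1 - self.alpha)
            self.w[k] += self.alpha
        else:
            # replace the Gaussian with the smallest weight w_i
            k = int(np.argmin(self.w))
            self.u[k], self.var[k], self.w[k] = x, 30.0, self.alpha
        self.w /= self.w.sum()
```

A pixel repeatedly observed with the same value accumulates weight in one Gaussian, which is what lets the large-weight test below identify background.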
Under the assumption that the background occupies each pixel most of the time, a pixel belongs to the background if it belongs to a Gaussian with a large weight w_i. Otherwise, it is foreground, shadow, or highlight. The task of the present invention is to identify which of these types each changed pixel belongs to.
The situation is similar for highlight regions. As shown in Figure 11, the border of a highlight region also changes gradually. The only difference is that a highlight region is brighter than the original background, while a shadow region is darker. Based on this fact, the system detects shadows and highlights in the same manner. Although the remainder of this description uses a shadow region as an example, the same method can be used to detect highlight regions.
Before describing shadow detection in detail, it is necessary to explain the penumbra phenomenon. Fig. 2 shows an exemplary layout of penumbra and umbra regions.
In general, a light source is not a point light source. If an object blocks all the light emitted from the light source, the occluded region is the umbra, in which pixels darken by almost the same ratio. If the object blocks only part of the light source, the partially lit region is called the penumbra (the right-hand part of Fig. 2). In the penumbra, the amount of blocked light increases gradually, so the difference changes gradually from 0 to its maximum.
The difference image obtaining unit 100 obtains and stores the difference image between the new image and the background model. First, it compares each pixel of the new image with the center of each Gaussian in the background model and finds the nearest background Gaussian. It then determines whether the distance to that Gaussian's center is less than the Gaussian's variance. If so, the unit marks the pixel as background and sets the corresponding difference in the difference image to 0. Otherwise, it stores the distance in the difference image as the corresponding difference.
Fig. 3 shows the method of calculating the difference image. 101 is a color space such as RGB; 102 is a foreground or shadow Gaussian; 103 is a background Gaussian; 104 is an input pixel. This input pixel is not within the range of any background Gaussian, so its distance to the nearest background Gaussian (103) is stored in the difference image. 105 is another input pixel; it is within the acceptance range of 103, so 105 is marked as background.
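The computation performed by unit 100 can be sketched as follows for a grayscale frame. The array layout (K background Gaussians per pixel) and the use of absolute gray-value distance are illustrative assumptions for a monochrome camera:

```python
import numpy as np

def difference_image(frame, bg_centers, bg_variances):
    """Distance-to-nearest-background-Gaussian difference image (sketch).

    frame:        H x W grayscale image
    bg_centers:   H x W x K per-pixel background Gaussian centers
    bg_variances: H x W x K per-pixel background Gaussian variances
    A pixel within its nearest Gaussian's variance is background (diff = 0);
    otherwise the distance itself is stored as the difference.
    """
    d = np.abs(frame[..., None] - bg_centers)      # distance to every center
    k = np.argmin(d, axis=-1)                      # index of nearest Gaussian
    nearest_d = np.take_along_axis(d, k[..., None], -1)[..., 0]
    nearest_v = np.take_along_axis(bg_variances, k[..., None], -1)[..., 0]
    return np.where(nearest_d < nearest_v, 0.0, nearest_d)
```

Pixels like 105 in Fig. 3 fall inside the acceptance range and produce 0; pixels like 104 store their distance to the nearest background Gaussian.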
Fig. 4 is a block diagram illustrating the structure of the penumbra region extraction unit 200 according to an exemplary embodiment.
The penumbra region extraction unit 200 comprises a sharpness estimation unit 210 and a background expanding unit 220. The sharpness estimation unit 210 estimates the sharpness of edges in the difference image. The background expanding unit 220 expands the background region by horizontal and vertical scanning: if a foreground pixel can be reached without crossing any sharp edge, that pixel is marked as penumbra.
In general, a penumbra region always has a very wide and gentle edge, whereas an object boundary tends to have a narrow and sharp edge. In the present invention, the sharpness estimation unit 210 estimates edge sharpness. For a pixel p, the unit computes the gradient over a small range and over a large range, obtaining g1 and g2 respectively, and estimates the sharpness sharp(p) as g1/g2. For example, Fig. 5 shows a gentle edge and Fig. 6 shows a sharp edge. Although their absolute gradients are similar, under this estimation method the total gradient g2 of the edge in Fig. 6 is smaller, so its g1/g2 is relatively large.
To calculate sharpness, the sharpness estimation unit 210 uses the filters shown in Fig. 7: one horizontal and one vertical. In Fig. 7, the upper block is used to compute horizontal sharpness and the lower block to compute vertical sharpness. First, v_i is computed by summing the gray values of the pixels in region i, where i denotes regions 1, 2, 3, and 4 in Fig. 7 and pixel p is denoted S. The sharpness of pixel p is then computed as sharp(p) = |v2 - v3| / (|v1 - v4| + A), where A is a constant used to penalize small gradients. In addition, if |v2 - v3| < thr_grad, meaning that the gradient at p is very small, the sharpness of pixel p is set directly to 0, where thr_grad is a predetermined system parameter.
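A minimal sketch of the horizontal filter in the style of Fig. 7 follows. The strip width `r`, the geometry of the four regions, and the values of `A` and `thr_grad` are illustrative assumptions; only the formula sharp(p) = |v2 - v3| / (|v1 - v4| + A) and the thr_grad cutoff come from the description above.

```python
import numpy as np

def sharpness(img, x, y, r=2, A=4.0, thr_grad=8.0):
    """Horizontal sharpness of pixel (y, x), Fig. 7 style (sketch).

    v1..v4 are gray-value sums over four horizontal strips around p,
    laid out as [v1][v2] p [v3][v4]: the inner pair (v2, v3) measures
    the small-range gradient, the outer pair (v1, v4) the large-range one.
    """
    row = img[y]
    v1 = row[x - 2 * r:x - r].sum()
    v2 = row[x - r:x].sum()
    v3 = row[x + 1:x + 1 + r].sum()
    v4 = row[x + 1 + r:x + 1 + 2 * r].sum()
    g1 = abs(v2 - v3)
    if g1 < thr_grad:            # tiny gradient: sharpness forced to 0
        return 0.0
    return g1 / (abs(v1 - v4) + A)
```

With these assumed parameters, a step edge scores higher than a gradual ramp of similar total contrast, which is the property the unit relies on.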
The background expanding unit 220 extracts the penumbra region by applying horizontal and vertical scanning. For each horizontal scan line, it finds each background segment and expands it to the left. Let p_L be the pixel to the left of the background segment. If sharp(p_L) < thr_sharp and diff(p_L) < thr_diff, the pixel is marked as penumbra, where diff(p_L) is the difference between pixel p_L and the background model, stored in the difference image, and thr_sharp and thr_diff are two system parameters. This operation is repeated in the horizontal direction until the conditions are no longer satisfied. After the leftward expansion is finished, the rightward expansion of the segment is performed in a similar manner.
After the horizontal scan, most of the penumbra region has been detected, but some sharp edges caused by image noise may have blocked the horizontal expansion; therefore every vertical line is scanned in a similar manner. If the pixel p immediately above a background segment satisfies sharp(p) < thr_sharp and diff(p) < thr_diff, pixel p is marked as penumbra, and the operation is repeated for the next pixel above until the conditions are no longer satisfied; the downward expansion of the segment is then performed in a similar manner. Here diff(p) is the difference between pixel p and the background model stored in the difference image, and thr_sharp and thr_diff are two system parameters.
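The scan-line expansion performed by unit 220 can be sketched as follows for one horizontal line; vertical expansion applies the same routine to columns. The label encoding ('B'/'F'/'P') and the threshold values are illustrative assumptions:

```python
def expand_row(labels, sharp_row, diff_row, thr_sharp=0.5, thr_diff=20.0):
    """Expand each background segment of one scan line into the penumbra.

    labels: list of 'B' (background) / 'F' (foreground) per pixel (sketch).
    A foreground pixel adjacent to a background segment becomes 'P'
    (penumbra) as long as its sharpness and difference stay below the
    thresholds; a sharp edge stops the expansion.
    """
    n = len(labels)
    out = list(labels)
    for i in range(n):
        if labels[i] != 'B':
            continue
        # expand leftward from the segment
        j = i - 1
        while j >= 0 and out[j] == 'F' and \
                sharp_row[j] < thr_sharp and diff_row[j] < thr_diff:
            out[j] = 'P'
            j -= 1
        # expand rightward from the segment
        j = i + 1
        while j < n and out[j] == 'F' and \
                sharp_row[j] < thr_sharp and diff_row[j] < thr_diff:
            out[j] = 'P'
            j += 1
    return out
```

Note how a single high-sharpness pixel (an object boundary) halts the expansion, so object interiors are never relabeled as penumbra.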
After the above operations, the penumbra region has been extracted by the sharpness estimation unit 210 and the background expanding unit 220.
The umbra region extraction unit 300 extracts the umbra region using spatial constraints. Fig. 8 and Fig. 9 show two typical cases.
Fig. 8 illustrates finding the umbra region from the penumbra extraction result. In the exemplary embodiment, if a region is surrounded by penumbra pixels, the whole region is marked as the umbra region. In Fig. 8, region 302 is the umbra and region 303 is the penumbra; since region 302 is surrounded by penumbra region 303, the whole region is marked as umbra.
Fig. 9 illustrates finding the umbra region from the penumbra extraction result when the umbra is adjacent to a foreground object. Region 301 is the object region, region 302 the umbra, and region 303 the penumbra. In the exemplary embodiment, if a part of the foreground region is surrounded by the penumbra region, that part should also be marked as umbra, as shown by region 302 in Fig. 9. To determine whether a part of the foreground region is surrounded by penumbra, horizontal and vertical scanning is applied: if a scan line shows a penumbra-object-penumbra pattern and the object part is not very long (shorter than a threshold Thr_length, a system parameter), the object part is marked as umbra.
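The penumbra-object-penumbra test on a single scan line can be sketched as follows; the label encoding ('P'/'O'/'U') and the default Thr_length value are illustrative assumptions:

```python
def mark_umbra_on_line(labels, thr_length=15):
    """Relabel short object runs bounded by penumbra on both sides as umbra.

    labels: list of 'P' (penumbra), 'O' (object), or other labels (sketch).
    An 'O' run shorter than thr_length with 'P' immediately on both sides
    becomes 'U' (umbra); long runs and unbounded runs are left unchanged.
    """
    out = list(labels)
    i, n = 0, len(labels)
    while i < n:
        if labels[i] != 'O':
            i += 1
            continue
        j = i
        while j < n and labels[j] == 'O':
            j += 1                      # [i, j) is one run of object pixels
        bounded = i > 0 and j < n and labels[i - 1] == 'P' and labels[j] == 'P'
        if bounded and (j - i) < thr_length:
            out[i:j] = ['U'] * (j - i)  # short P-O-P run: mark as umbra
        i = j
    return out
```

Running this on every horizontal and vertical scan line reproduces the Fig. 9 behavior: narrow foreground slivers trapped in penumbra are reclassified, while genuine (long) object segments survive.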
Figure 10 is a flowchart of the method of identifying shadow and highlight regions from the foreground area.
In step S1010, the image capturing unit 110 captures a new image. In step S1020, the background model unit 120 compares the new image with the background model and updates the background model with the new image. In step S1030, the difference image obtaining unit 100 obtains the difference image between the new image and the background model. In step S1040, the penumbra region extraction unit 200 extracts the penumbra region by estimating the sharpness of edges in the difference image and expanding the background region. In step S1050, the umbra region extraction unit 300 extracts the umbra region based on the penumbra extraction result.
Figure 11 shows two experimental results: one a highlight extraction and the other a shadow extraction. From left to right, the four columns are (a) the original image, (b) the difference image, (c) the sharpness image, and (d) the object and shadow (highlight) extraction result.
The first row is a highlight extraction result; in this example, a road is illuminated by car headlights. The second example is an indoor scene, in which the background light is so strong that there is a large shadow region on the ground. However, the edges of the bright region can be seen to be very smooth; under the sharpness estimation method of the present invention, the sharpness of these edges is lower than that of typical object edges, so the shadow region is successfully extracted.
Using the present invention, shadow regions can be identified from the foreground area in a camera surveillance system, without restrictions such as a clean background, a textured background, or an assumed light source position.
Although exemplary embodiments of the present invention have been shown and described, it should be appreciated by those skilled in the art that various modifications may be made to these exemplary embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims.
Claims (6)
1. A method of detecting shadow and highlight regions from a foreground area in a surveillance system, the method comprising:
capturing a new image;
comparing the new image with a background model, and updating the background model with the new image;
obtaining a difference image between the new image and the background model;
extracting a penumbra region by estimating the sharpness of edges in the difference image and expanding the background region; and
extracting an umbra region based on the penumbra extraction result,
wherein the step of obtaining the difference image comprises:
comparing a pixel in the new image with the center of each Gaussian in the background model, and finding the nearest background Gaussian;
determining whether the distance between the pixel and the center of the nearest background Gaussian is less than the corresponding variance of the nearest background Gaussian;
if the distance is less than the corresponding variance, marking the pixel as background and setting the corresponding difference in the difference image to 0; and
if the distance is not less than the corresponding variance, storing the distance in the difference image as the corresponding difference,
wherein the operation of expanding the background region in the step of extracting the penumbra region comprises:
finding each background segment on every horizontal scan line, and expanding the background segment pixel by pixel in the horizontal direction; and
finding each background segment on every vertical scan line, and expanding the background segment pixel by pixel in the vertical direction,
wherein the operation of expanding a background segment in the horizontal direction comprises:
if the sharpness of the pixel p immediately to the left of the background segment is less than thr_sharp and diff(p) < thr_diff, marking the pixel as penumbra, and then repeating this operation for the next pixel to the left until the conditions are no longer satisfied; and then performing the rightward expansion of the segment in a similar manner, where diff(p) is the difference between pixel p and the background model stored in the difference image, and thr_sharp and thr_diff are two predetermined system parameters,
wherein the operation of expanding a background segment in the vertical direction comprises:
if the sharpness of the pixel p immediately above the background segment is less than thr_sharp and diff(p) < thr_diff, marking the pixel as penumbra, and then repeating this operation for the next pixel above until the conditions are no longer satisfied; and then performing the downward expansion of the segment in a similar manner, where diff(p) is the difference between pixel p and the background model, and thr_sharp and thr_diff are two predetermined system parameters,
wherein the step of extracting the umbra region comprises:
when a region is surrounded by penumbra pixels, marking the whole region as the umbra region; and
when a part of the foreground region is surrounded by penumbra pixels, marking the part as the umbra region by applying horizontal and vertical scanning.
2. The method of claim 1, wherein the background model is a set of pixel distributions, and the color distribution of each pixel is described by a Gaussian mixture model:

∑ w_i · N(u_i, σ_i)

where w_i is the weight of each single Gaussian, and N(u, σ) is a Gaussian distribution with center u and variance σ.
3. The method of claim 1, wherein the step of updating the background model with the new image comprises:
determining whether a pixel in the new image belongs to a Gaussian in the Gaussian mixture model;
if the pixel belongs to the Gaussian, updating the center and variance of the Gaussian, increasing the weight of the Gaussian, and decreasing the weights of the other Gaussians; and
if the pixel does not belong to any Gaussian, deleting the Gaussian with the smallest weight w_i, and adding a new Gaussian centered at the pixel's color.
4. The method of claim 1, wherein the step of extracting the penumbra region comprises:
computing, for a pixel in the new image, the gradient g1 over a small range and the gradient g2 over a large range; and
estimating the sharpness of the pixel as g1 divided by g2, or as g1 divided by the sum of g2 and a constant A.
5. The method of claim 4, wherein the step of estimating the sharpness of the pixel further comprises:
if g1 < thr_grad, setting the sharpness of the pixel to 0, where thr_grad is a predetermined system parameter.
6. The method of claim 1, wherein the step of marking the part as the umbra region comprises:
if a scan-line segment has a penumbra-object-penumbra pattern and the length of the object part is less than Thr_length, a system parameter, marking the object part as umbra.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2007101517674A CN101399968B (en) | 2007-09-29 | 2007-09-29 | Shadow and high brightness detection method in single color monitoring camera |
KR1020070135842A KR101345131B1 (en) | 2007-09-29 | 2007-12-21 | Shadow and high light detection system and method of the same in surveillance camera and recording medium thereof |
US12/155,839 US8280106B2 (en) | 2007-09-29 | 2008-06-10 | Shadow and highlight detection system and method of the same in surveillance camera and recording medium thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2007101517674A CN101399968B (en) | 2007-09-29 | 2007-09-29 | Shadow and high brightness detection method in single color monitoring camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101399968A CN101399968A (en) | 2009-04-01 |
CN101399968B true CN101399968B (en) | 2012-07-18 |
Family
ID=40518182
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2007101517674A Active CN101399968B (en) | 2007-09-29 | 2007-09-29 | Shadow and high brightness detection method in single color monitoring camera |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR101345131B1 (en) |
CN (1) | CN101399968B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001351107A (en) * | 2000-06-05 | 2001-12-21 | Mitsubishi Electric Corp | Device and method for monitoring traffic |
CN101017573A (en) * | 2007-02-09 | 2007-08-15 | 南京大学 | Method for detecting and identifying moving target based on video monitoring |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1994011852A1 (en) | 1992-11-10 | 1994-05-26 | Siemens Aktiengesellschaft | Process for detecting and eliminating the shadow of moving objects in a sequence of digital images |
JP2004046501A (en) | 2002-07-11 | 2004-02-12 | Matsushita Electric Ind Co Ltd | Moving object detection method and moving object detection device |
GB0326374D0 (en) | 2003-11-12 | 2003-12-17 | British Telecomm | Object detection in images |
2007
- 2007-09-29 CN CN2007101517674A patent/CN101399968B/en active Active
- 2007-12-21 KR KR1020070135842A patent/KR101345131B1/en active IP Right Grant
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001351107A (en) * | 2000-06-05 | 2001-12-21 | Mitsubishi Electric Corp | Device and method for monitoring traffic |
CN101017573A (en) * | 2007-02-09 | 2007-08-15 | 南京大学 | Method for detecting and identifying moving target based on video monitoring |
Also Published As
Publication number | Publication date |
---|---|
CN101399968A (en) | 2009-04-01 |
KR101345131B1 (en) | 2013-12-26 |
KR20090033308A (en) | 2009-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102348128B (en) | Surveillance camera system having camera malfunction detection function | |
US8280106B2 (en) | Shadow and highlight detection system and method of the same in surveillance camera and recording medium thereof | |
JP4648981B2 (en) | Non-motion detection method | |
US7778445B2 (en) | Method and system for the detection of removed objects in video images | |
CN101084527B (en) | A method and system for processing video data | |
US8588466B2 (en) | Object area detection system, device, method, and program for detecting an object | |
JP4811653B2 (en) | Object detection device | |
US20200250840A1 (en) | Shadow detection method and system for surveillance video image, and shadow removing method | |
US8355079B2 (en) | Temporally consistent caption detection on videos using a 3D spatiotemporal method | |
Benedek et al. | Study on color space selection for detecting cast shadows in video surveillance | |
JP4653207B2 (en) | Smoke detector | |
US10762372B2 (en) | Image processing apparatus and control method therefor | |
KR101204259B1 (en) | A method for detecting fire or smoke | |
JP5060264B2 (en) | Human detection device | |
CN109410222B (en) | Flame detection method and device | |
KR20160089165A (en) | System and Method for Detecting Moving Objects | |
JP2009282975A (en) | Object detecting method | |
CN104778723A (en) | Method for performing motion detection on infrared image with three-frame difference method | |
CN101727673A (en) | Method and unit for motion detection based on a difference histogram | |
US8311345B2 (en) | Method and system for detecting flame | |
JP5142416B2 (en) | Object detection device | |
KR101729536B1 (en) | Apparatus and Method of Detecting Moving Object in Image | |
CN101399968B (en) | Shadow and high brightness detection method in single color monitoring camera | |
Colombari et al. | Background initialization in cluttered sequences | |
KR102161212B1 (en) | System and method for motion detecting |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |