CN102346854A - Method and device for carrying out detection on foreground objects - Google Patents

Method and device for carrying out detection on foreground objects

Info

Publication number
CN102346854A
CN102346854A CN2010102519024A CN201010251902A
Authority
CN
China
Prior art keywords
histogram
pixel
background
foreground
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010102519024A
Other languages
Chinese (zh)
Inventor
邓宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority to CN2010102519024A priority Critical patent/CN102346854A/en
Publication of CN102346854A publication Critical patent/CN102346854A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method and a device for detecting foreground objects and eliminating their shadows. The method comprises the following steps: continuously acquiring images of a monitored area so as to obtain an image sequence of the monitored area including a foreground object; creating a background model consisting of a plurality of texture histograms; taking the acquired current image as the input image and calculating a local texture histogram for each pixel in the image; finding, according to the local texture histogram of each pixel, a matching texture histogram in the background model; classifying the pixels into foreground pixels and background pixels according to the matching result so as to obtain a foreground image; and updating the background model by using the foreground image and the current input image. Experimental results show that the method can detect foreground objects accurately in both indoor and outdoor scenes and can effectively eliminate shadow regions from the detection results; the method can therefore be applied to video image processing fields such as video surveillance and video conferencing.

Description

Foreground object detection method and equipment
Technical field
The present invention relates to video image processing technology, and more particularly to a method and apparatus for detecting foreground objects in a captured video image sequence and eliminating shadow regions from the detection result.
Background technology
With the rapid development of computer technology and image processing technology, video is being applied to ever more fields of society, such as video surveillance and video conferencing. Video analysis has become a research focus of the image processing field; its key processing technique is the detection of moving objects or targets in a video sequence, and the detection results are generally used for higher-level analysis and processing such as target tracking and classification. As an effective moving-object detection method, the background subtraction algorithm subtracts the scene background from the current image to obtain the moving foreground, and has advantages such as accurate localization and not enlarging the moving region. However, the detection results obtained by background subtraction usually include cast shadows that move together with the foreground objects, which directly makes the results inaccurate.
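The background subtraction idea described above can be illustrated with a minimal sketch. This is not the method of the patent (which uses texture histograms, introduced later), but a plain per-pixel differencing baseline; the function name and the threshold value are illustrative assumptions.

```python
import numpy as np

def background_subtraction(frame: np.ndarray, background: np.ndarray,
                           threshold: float = 25.0) -> np.ndarray:
    """Return a binary foreground mask: pixels whose absolute difference
    from the background exceeds the threshold are marked as foreground."""
    diff = np.abs(frame.astype(np.float64) - background.astype(np.float64))
    return (diff > threshold).astype(np.uint8)

# A static background and a frame where a bright "object" enters one corner:
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[0:2, 0:2] = 200          # the moving-object region
mask = background_subtraction(frame, background)
```

Note that a cast shadow darkens background pixels, so their difference from the background also exceeds the threshold and they are wrongly marked foreground — exactly the weakness the invention addresses.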
The problems caused by the inability to distinguish shadows from true foreground objects in the detection result fall roughly into two categories. First, the detected foreground object contour deforms because of the shadow, which in turn affects object position prediction and object classification. Second, shadows can cause erroneous adhesion between the contours of different objects, so that errors appear in the detected object count; this phenomenon strongly affects video-surveillance applications such as counting and classifying objects in a scene.
In the prior art, there are mainly two kinds of methods for eliminating shadows from foreground object detection results. One uses prior knowledge: by obtaining object contour information and physical information about the scene in advance, it achieves the goal of eliminating shadows. This method places high demands on the user and is unsuitable for application to general scenes. The other method uses no prior information and instead computes the spatial information and transparency of the scene; its computational cost is large, its system requirements are high, and it is difficult to realize in practical applications.
For example, Patent Document 1 proposes a shadow elimination method that uses image intensity information. For each pixel in the foreground detection result, an intensity cross-correlation value is computed from the difference between the pixel's intensity in the current image and in the background image. For each pixel, the cross-correlation value is computed in an M×N neighborhood centered on the current pixel. If the cross-correlation value exceeds a predetermined threshold, the pixel is a shadow. However, intensity information is still easily affected by illumination changes and is not sufficient to distinguish shadow regions.
Patent Document 2 proposes a moving-object detection method combined with shadow elimination. It proposes two shadow elimination methods: texture analysis and topological-structure analysis. In the texture-analysis method, texture features are used to compare the detected foreground object with the background model; if, within the same region, the texture features of the foreground object and of the background model differ markedly, the object is a true foreground object; otherwise it is a shadow region. However, in that patent the texture feature of each pixel is represented by only a single texture feature value. Moreover, the texture computation considers only the pixel itself, without the influence of neighboring pixels in the region. Furthermore, in that patent foreground object detection and shadow detection are two independent algorithms that are not combined.
Patent Document 3 proposes a method for detecting shadow pixels in an image. In that patent, two HSV color histograms are used for shadow detection: two color histograms must be computed for each pixel in the image, the first using the H/S color values and the second using the S/V color values. In the shadow-detection process, two preset alignment quantities are first introduced; each pixel is then judged, and if its H/S value is greater than the first alignment quantity and its S/V value is less than the second, the pixel is a shadow. However, that patent uses color histograms as features, and color features are very easily affected by illumination changes. Moreover, that patent detects shadow regions over the entire image rather than only the cast shadows of foreground objects, so background shadows are easily confused with the shadows of real objects.
[patent documentation 1] US Patent No. 7620266 B2
[patent documentation 2] US Patent No. 2009/0060352 A1
[patent documentation 3] US Patent No. 7305127 B2
Summary of the invention
As stated above, the existing shadow elimination methods for foreground object detection cannot effectively meet the demands of practical applications. Aiming at the problems in the prior art, the present invention proposes a new foreground object detection and shadow elimination method. In the present invention, a shadow region is defined as a region that, unlike a real foreground object, does not possess texture features of its own. The texture features of a shadow region are similar to those of the corresponding background region; the only difference is the change of light intensity within the region. In brief, the texture of a shadow region is the texture of the corresponding background region after an illumination change. Therefore, the basic idea of the method of the invention is to distinguish foreground objects from shadows and background regions by the differences between the texture features of different regions.
According to an aspect of the present invention, a foreground object detection method is provided, comprising the steps of: (a) continuously acquiring current images of a monitored area to obtain a video image sequence including a foreground object; (b) creating a background model composed of a plurality of texture histograms; (c) taking the acquired current image as the input image and calculating a local texture histogram for each pixel in the image; (d) finding, according to the local texture histogram of each pixel, the matching texture histogram in the background model; and (e) classifying each pixel as a foreground pixel or a background pixel according to the matching result, to obtain a foreground image.
The created background model is a set over a group of pixels; each pixel in the set is described by a group of adaptive texture histograms, and each histogram has a weight valued between 0 and 1.
The local texture histogram is computed over a region of radius R_t centered on the current pixel in the input image.
The method according to the invention further comprises a step (f) of updating the background model by using the foreground image and the current input image.
According to the method of the invention, step (d) comprises: using a similarity measure to compute the similarity between the local texture histogram obtained in step (c) and every texture histogram in the background model; judging, for the computed similarities between the local texture histogram and all texture histograms in the background model, whether one or more similarities exceed a predetermined threshold; when all similarities are below the predetermined threshold, deleting the texture histogram with the smallest weight from the background model and adding the local texture histogram of the current pixel to the background model, the weight of the local texture histogram being set to a preset initial weight; and when one or more similarities exceed the predetermined threshold, selecting the texture histogram in the background model corresponding to the largest similarity value as the matching histogram of the current pixel. The similarity measure may be any histogram similarity measure. As stated, when the local texture histogram is added to the background model, its weight is set to a preset initial weight.
Step (e) comprises: sorting all texture histograms in the background model in descending order of weight; selecting the set of texture histograms used to describe the background; judging whether the current pixel belongs to the background pixels or the foreground pixels; and outputting the final foreground object detection result.
The selection of the set of texture histograms used to describe the background depends on the histogram weights: the larger a histogram's weight, the higher the probability that the histogram is considered to describe the background.
Judging whether the current pixel belongs to the background pixels or the foreground pixels comprises: comparing the matching histogram of the current pixel with the selected histogram set describing the background; if the matching histogram is in the histogram set describing the background, the current pixel belongs to the background pixels; otherwise, the current pixel belongs to the foreground pixels.
Step (f) comprises: updating, by using the current input image, the matching histogram of each pixel; and updating the weights of all texture histograms in the background model, wherein the weight of the matching histogram is increased and the other histogram weights are decreased.
According to another aspect of the present invention, a foreground object detection device is provided, comprising: an image acquisition module for continuously acquiring current images of a monitored area to obtain a video image sequence including a foreground object; a background modeling module for creating a background model composed of a plurality of texture histograms; a texture histogram calculation module for calculating a local texture histogram for each pixel in the image according to the acquired current image; a histogram matching module for finding, according to the local texture histogram of each pixel, the matching texture histogram in the background model; and a pixel classification module for classifying each pixel as a foreground pixel or a background pixel according to the matching result, to obtain a foreground image.
The foreground object detection device according to the invention further comprises a background model update module for updating the background model according to the foreground image and the current input image.
The image acquisition module comprises a camera for acquiring the video image sequence.
The histogram matching module of the foreground object detection device further comprises: a similarity calculation module that uses a similarity measure to compute the similarity between the local texture histogram obtained by the texture histogram calculation module and every texture histogram in the background model; a comparison module that judges, for the computed similarities between the local texture histogram and all texture histograms in the background model, whether one or more similarities exceed a predetermined threshold; and a decision module that, when all similarities are below the predetermined threshold, deletes the texture histogram with the smallest weight from the background model and adds the local texture histogram of the current pixel to the background model, and that, when one or more similarities exceed the predetermined threshold, selects the texture histogram in the background model corresponding to the largest similarity value as the matching histogram of the current pixel.
The pixel classification module of the foreground object detection device comprises: a sorting module for sorting all texture histograms in the background model in descending order of weight; a selection module for selecting the set of texture histograms used to describe the background; a judgment module for judging whether the current pixel belongs to the background pixels or the foreground pixels; and an output module for outputting the final foreground object detection result.
The selection module of the foreground object detection device selects the set of texture histograms used to describe the background based on the histogram weights: the larger a histogram's weight, the higher the probability that it is considered to describe the background.
The judgment module of the foreground object detection device compares the matching histogram of the current pixel with the histograms in the selected set describing the background, determines the current pixel to be a background pixel when the matching histogram is in the set describing the background, and determines the current pixel to be a foreground pixel when the matching histogram is not in that set.
The background model update module uses the current input image to update the matching histogram of each pixel and the weights of all texture histograms in the background model, wherein the weight of the matching histogram is increased and the other histogram weights are decreased.
As can be seen from the above scheme, the present invention establishes the background model with texture features while taking the adjacency relationships between pixels into account; using an adaptive texture histogram computed over a certain neighborhood as the texture feature effectively distinguishes foreground objects from shadow regions. At the same time, in the present invention the foreground object detection and the shadow elimination process are realized within the same algorithm, which reduces computational complexity and saves substantial computing resources. In addition, the foreground object detection and shadow elimination method of the invention does not need to store all of the data of the original video sequence, which saves storage space and is more conducive to practical application. The method is simple to realize, effective, and highly practical.
The above and other objects, features, advantages, and technical and industrial significance of the present invention will be better understood by reading the following detailed description of preferred embodiments of the invention in conjunction with the accompanying drawings.
Description of drawings
Fig. 1 is an overall flowchart of the foreground object detection and shadow elimination method in an embodiment of the invention.
Fig. 2 is one frame of an example image sequence of the present invention.
Fig. 3a is the texture map of the image shown in Fig. 2;
Fig. 3b is the local neighborhood texture map of one pixel in the texture map shown in Fig. 3a.
Fig. 4 is a flowchart of the matching-histogram search part of the method in the flowchart shown in Fig. 1.
Fig. 5 is a flowchart of the pixel-classification part of the method in the flowchart shown in Fig. 1.
Fig. 6a is an example of a shadow region in the foreground object detection result of the image shown in Fig. 2;
Fig. 6b is the detection result image after the image shown in Fig. 2 is processed by the foreground object detection and shadow elimination method in an embodiment of the invention.
Fig. 7 is an overall block diagram of the foreground object detection and shadow elimination device according to an embodiment of the invention.
Embodiment
To make the objects, technical scheme, and advantages of the present invention clearer, the invention is further explained below in conjunction with embodiments and the accompanying drawings.
Fig. 1 is an overall flowchart of the foreground object detection and shadow elimination method in an embodiment of the invention. As shown in Fig. 1, the flow comprises the following steps.
In step 10, images of the monitored area are continuously acquired to obtain a video image sequence of the monitored area including a foreground object; this video image sequence is the input data for the subsequent steps. Besides a video sequence acquired in real time, a pre-stored image sequence or image data in another form can also serve as the input data. For example, Fig. 2 shows one frame 101 of an example image sequence, where the example image 101 includes a foreground object 102. In step 10, an image acquisition device, such as a camera, is used to acquire the video data. The acquired data is stored in a storage device or processed in real time.
In step 11, after the current input image is obtained, a non-parametric model describing the current background is established; its purpose is to truly reflect the changes of the scene background in the monitored area. In step 11, the modeling process for each pixel of the background is independent, so it can be computed in high-speed parallel fashion when required. Because the modeling process is identical for all background pixels, the following description of the embodiment details the modeling process of only one pixel. In addition, in practical applications the monitored area is generally the range of the entire image.
In practical applications, the method of the invention uses an acquisition device with a fixed position to acquire the video data. Because the background of the monitored area is rather complicated, the process of modeling the regional background is easily affected by noise. Therefore, modeling each pixel with a background model comprising a plurality of adaptive models has significant practical value. The present invention proposes an adaptive mixture-of-histograms model based on texture features as the background model. In an embodiment of the invention, the local binary pattern (LBP) is adopted as the texture description operator, and the texture histogram computed with the LBP operator over a neighborhood of radius R_t around the pixel is used as the texture histogram in the background model. The radius R_t defines the computation region of the histogram: the smaller its value, the more the histogram reflects local texture information. The LBP operator was first introduced by Ojala et al. in "Multi-resolution gray-scale and rotation invariant texture classification with local binary patterns", IEEE Trans. on PAMI. In addition, other existing texture description operators, such as the edge histogram or SIFT (Scale Invariant Feature Transform), can be adopted without departing from the scope of the invention.
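The LBP texture operator named above can be sketched as follows. This is a minimal implementation of the basic 3×3 (8-neighbor) variant, not the multi-resolution, rotation-invariant form of Ojala et al.; the comparison convention (neighbor ≥ center sets the bit) and the bit ordering are illustrative assumptions.

```python
import numpy as np

def lbp_8neighbor(gray: np.ndarray) -> np.ndarray:
    """Basic 3x3 local binary pattern: each interior pixel receives an
    8-bit code with one bit per neighbor that is >= the center value."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # neighbor offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = gray[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if gray[y + dy, x + dx] >= center:
                    code |= 1 << bit
            out[y, x] = code
    return out
```

On a flat patch every neighbor equals the center, so the code is 255; at a local intensity peak no neighbor reaches the center and the code is 0 — the code captures local texture structure rather than absolute intensity, which is why it is robust to the uniform darkening a shadow causes.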
In the mixture-of-histograms model, every pixel of the current image is modeled with the mixture of histograms. In brief, each pixel is represented by a group of adaptive texture histograms {h_0, ..., h_{K-1}}, where K denotes the number of histograms in the mixture model. Each histogram in the model also carries a weight w_k with a value between 0 and 1. Unlike a common mixture model, the mixture-of-histograms model processes the observation data progressively: the weights change adaptively with different input data and with each histogram's data, and the value of the histogram number K depends on the complexity of the scene and the required computational accuracy. The main idea of the mixture-of-histograms model is to model each pixel with roughly 3-5 texture histograms describing the distribution of the pixel's texture features. The mixture-of-histograms model can reflect the various changes of a real scene; it is a multi-modal modeling method with a robustness that a single-histogram model cannot match. In a scene with a simple background, the K value can be small (K ≤ 3); in a scene with more complicated background changes, more histograms are needed (3 < K < 6).
In step 12, the video image sequence collected in step 10 is processed and the local texture histogram of each pixel is calculated. In step 12, a texture description operator is used to compute the texture map of the current input image. After the texture map is obtained, a local neighborhood of radius R_t centered on the pixel is extracted for every pixel in the image, and the local texture histogram is computed within this local neighborhood; the computed local texture histogram serves as the feature vector of the pixel. For example, in the image 101 shown in Fig. 2, the foreground object 102 enters the scene. Fig. 3a shows the texture map 120 corresponding to example image 101, computed with the LBP texture operator by the method of step 12. In Fig. 3a, the local neighborhood texture map 121 of pixel 122 is extracted, as shown in Fig. 3b. The local texture histogram computed from the local neighborhood texture map 121 serves as the feature vector of pixel 122.
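The extraction of a pixel's feature vector in step 12 can be sketched as below: a normalized histogram of the texture codes in the square neighborhood of radius R_t around the pixel. The function name, the boundary clipping, and the 256-bin assumption (8-bit LBP codes) are illustrative choices, not specified by the patent.

```python
import numpy as np

def local_texture_histogram(texture_map: np.ndarray, y: int, x: int,
                            radius: int, n_bins: int = 256) -> np.ndarray:
    """Normalized histogram of texture codes in the (2*radius+1)^2
    neighborhood centered on (y, x); this is the pixel's feature vector."""
    h, w = texture_map.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    patch = texture_map[y0:y1, x0:x1].ravel()
    hist = np.bincount(patch, minlength=n_bins).astype(np.float64)
    return hist / hist.sum()
```

Normalizing to unit sum keeps histograms from neighborhoods of different effective sizes (near image borders) comparable under the similarity measure used later.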
Fig. 4 is a flowchart of step 13 in the embodiment of the invention, whose purpose is to find, for the current pixel, the matching histogram in the mixture-of-histograms model.
The inputs of step 13 are the mixture-of-histograms model 132 established in step 11 and the local texture histogram 131 of the pixel computed in step 12. In step 133, the local texture histogram 131 of the pixel is compared with all histograms in the mixture-of-histograms model 132; the comparison may use any histogram similarity measure. The similarity measure used in the embodiment of the invention is the histogram intersection:
∩(h_t, h_k) = Σ_{n=0}^{N-1} min(h_{t,n}, h_{k,n})    (1)
where h_t is the local texture histogram 131 of the pixel, h_k is the k-th histogram in the mixture-of-histograms model 132, and N is the number of histogram bins. Histogram intersection measures the parts that are similar in the two histograms and ignores features appearing in only one histogram; moreover, its computation is simple and cheap. In addition, other existing histogram similarity measures, such as the chi-square or the Bhattacharyya distance, can be adopted without departing from the scope of the invention.
Through step 133, K similarity values are obtained by comparing the local texture histogram 131 of the pixel with the K histograms in the mixture-of-histograms model 132. Based on these similarities, the matching histogram of the pixel is sought in step 134. If all K similarities are below the predetermined threshold Tp, there is no matching histogram; otherwise, the histogram corresponding to the largest similarity value is chosen as the matching histogram of the pixel. Step 135 judges whether a matching histogram exists for the pixel: if so, the flow proceeds directly to the next step; if not, it proceeds to step 136. In step 136, the local texture histogram 131 of the pixel replaces the histogram with the smallest weight in the mixture-of-histograms model 132, and the weight is simultaneously reset to the initial weight. For example, the initial weight in the embodiment of the invention is 0.01.
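Steps 133-136 can be sketched as follows: equation (1) as a similarity function, then match-or-evict against the model. The function names and the value of Tp are illustrative assumptions; only the initial weight 0.01 is taken from the embodiment.

```python
import numpy as np

def histogram_intersection(h_t: np.ndarray, h_k: np.ndarray) -> float:
    """Equation (1): sum over bins of min(h_t[n], h_k[n])."""
    return float(np.minimum(h_t, h_k).sum())

def match_or_replace(h_t, model_hists, weights, tp=0.7, w_init=0.01):
    """Steps 133-136: compare h_t with every model histogram.  If every
    similarity is below the threshold Tp, replace the lowest-weight
    histogram with h_t at the initial weight and report no match."""
    sims = [histogram_intersection(h_t, h_k) for h_k in model_hists]
    best = int(np.argmax(sims))
    if sims[best] >= tp:
        return best                      # index of the matching histogram
    worst = int(np.argmin(weights))      # no match: evict the lowest weight
    model_hists[worst] = h_t.copy()
    weights[worst] = w_init
    return None
```

The eviction step is what makes the model adaptive: a texture pattern that keeps reappearing (e.g. a newly parked car) eventually acquires its own histogram and, as its weight grows, is absorbed into the background.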
Step 14 judges the output of step 13: if the pixel has a matching histogram, processing continues in step 15; if not, the current pixel is judged to be a foreground pixel and processing continues in step 16.
Fig. 5 is a flowchart of step 15 in the embodiment of the invention; in step 15 the pixel is classified according to its matching histogram. The matching histogram 151 of the pixel and the established mixture-of-histograms model 132 are further processed as the input data of step 15.
In the mixture-of-histograms model 132, not all histograms are used to describe the scene background; only B histograms describe the background distribution (B ≤ K). The invention holds that whether any histogram in the mixture model describes the background distribution is closely related to its weight: the larger the histogram weight, the more likely the histogram describes the background. In step 152, all histograms in the current mixture-of-histograms model are sorted in descending order of weight; after the descending sort, the histograms most probably produced by the background are at the top of the sequence. On this basis, the first B histograms are chosen in step 153 and taken to describe the background distribution. The selection criterion is:
B = argmin_b (Σ_{k=1}^{b} w_k > T_B)    (2)
where T_B is the minimum proportion of weight considered to determine the background distribution; its value is closely related to the K value. A smaller T_B value implies that the current scene background is fairly simple; conversely, a complicated scene needs a larger T_B.
After the histograms describing the background are selected, the matching histogram of the pixel is compared with the B selected background histograms in step 154; the comparison procedure is the same as in step 133. If the current pixel matches any one of these B histograms, it is judged to be a background pixel; otherwise, if it matches none of the B histograms, it is judged to be a foreground pixel. Step 155 outputs the pixel classification result to the next step.
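The classification logic of steps 152-155, together with equation (2), can be sketched as below. Here membership in the background set is decided by histogram index rather than by re-running the step-133 comparison, which is a simplification; the function name and the T_B value are illustrative assumptions.

```python
import numpy as np

def classify_pixel(match_idx, weights, t_b: float = 0.8) -> str:
    """Sort histograms by descending weight, keep the first B whose
    cumulative weight exceeds T_B (equation (2)), and label the pixel
    background iff its matching histogram is among those B."""
    if match_idx is None:
        return "foreground"              # no match at all (step 14)
    order = np.argsort(weights)[::-1]    # indices in descending weight
    cumulative = 0.0
    background_set = set()
    for k in order:
        background_set.add(int(k))
        cumulative += weights[k]
        if cumulative > t_b:             # equation (2): smallest such B
            break
    return "background" if match_idx in background_set else "foreground"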
In step 16, the input results of step 14 and step 15 are combined and the final foreground object detection result of the invention is output. In the final detection result, if a pixel is a background pixel its value is set to zero, and if it is a foreground pixel its original value is kept unchanged. For example, Fig. 6a is the foreground object detection result without shadow elimination, and Fig. 6b is the final result image after the example image 101 of Fig. 2 is processed by the foreground object detection and shadow elimination method of the embodiment of the invention. Comparing the two result images shows that in Fig. 6a a shadow region 160 is connected to the true foreground object, distorting the foreground object contour, whereas in Fig. 6b most of the shadow in shadow region 162 has been eliminated and the moving foreground object 161 is detected accurately.
In step 17, the final detection result is used to update the background model of the invention, the mixture-of-histograms model. In step 17, for the current pixel, only the data of its matching histogram in the mixture-of-histograms model is updated, by the formula:
h_k = (1 − α) h_k + α h_t    (3)
where h_t is the local texture histogram of the current pixel computed in step 12, and h_k is the matching histogram of the pixel obtained in step 13.
In addition, the weight of every histogram in the mixture-of-histograms model is updated by the formula:
w_{k,t} = (1 − α) w_{k,t-1} + α M_k    (4)
where α is the learning rate, a value between 0 and 1 whose size determines over how long a time range the pixel's values are taken into account by the mixture-of-histograms model. As for M_k: if the corresponding histogram is the matching histogram of the pixel, the value of M_k is 1; if it is any other histogram in the mixture model, the value of M_k is 0.
After the model update, the weights of the histograms in the mixture-of-histograms model are normalized again; the data of the unmatched histograms in the model remains unchanged.
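The update of step 17 — equations (3) and (4) followed by weight renormalization — can be sketched as one function. The function name and the α value are illustrative; the arithmetic follows the two formulas directly.

```python
import numpy as np

def update_model(model_hists, weights, h_t, match_idx, alpha: float = 0.05):
    """Blend the matching histogram toward the current local histogram
    (equation (3)), update every weight with M_k = 1 for the match and 0
    otherwise (equation (4)), then renormalize the weights."""
    model_hists[match_idx] = (1 - alpha) * model_hists[match_idx] + alpha * h_t
    for k in range(len(weights)):
        m_k = 1.0 if k == match_idx else 0.0
        weights[k] = (1 - alpha) * weights[k] + alpha * m_k
    total = sum(weights)
    for k in range(len(weights)):
        weights[k] /= total
    return model_hists, weights
```

Since equation (4) already keeps the weights summing to 1 when they start normalized, the explicit renormalization mainly guards against drift after a histogram has been replaced at the initial weight.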
The foreground object detection and shadow elimination method of the invention can accurately and effectively detect foreground objects in real scenes and eliminate the shadow regions in the detection result. The method uses the local texture histogram as the feature vector of a pixel and models the background with a group of adaptive texture histograms. The mixture-of-histograms model can accurately describe a complicated scene background and adapt to natural illumination changes in the scene. The invention is a non-parametric method that needs no parametric estimation of the scene, and so has good versatility and practicality.
Fig. 7 is an overall block diagram of the foreground object detection and shadow elimination device in an embodiment of the invention. As shown in Fig. 7, the foreground object detection and shadow elimination device comprises: an image acquisition module 20, a background modeling module 21, a texture histogram calculation module 22, a histogram matching module 23, a pixel classification module 24, and a background model update module 25.
The image capture module 20 comprises a camera for capturing video image data, or video data stored on a storage device. The apparatus described in the present invention preferably operates on a live video stream input. The image capture module 20 may also comprise a data storage device for storing the captured video image data.
The background modeling module 21 is used to create a background model composed of a plurality of texture histograms. The texture histogram calculation module 22 takes the acquired current image as the input image and calculates a local texture histogram for each pixel in the image.
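This excerpt of the patent does not fix a particular texture operator; the cited prior art (Heikkila et al.) builds the local texture histogram from local binary pattern (LBP) codes over a window around each pixel, so a sketch under that assumption might look like the following. The window size and radius are illustrative parameters, not values from the patent.

```python
import numpy as np

def lbp_histogram(gray, x, y, radius=1, win=4):
    """Normalized histogram of 8-neighbor LBP codes over the
    (2*win+1) x (2*win+1) window centered at (x, y).

    Assumes an LBP texture operator, as in the LBP-based background
    modeling literature; the patent itself only requires *some* local
    texture histogram per pixel.
    """
    h, w = gray.shape
    # Eight neighbors at the given radius, ordered clockwise.
    offsets = [(-radius, -radius), (-radius, 0), (-radius, radius),
               (0, radius), (radius, radius), (radius, 0),
               (radius, -radius), (0, -radius)]
    hist = np.zeros(256)
    for j in range(max(radius, y - win), min(h - radius, y + win + 1)):
        for i in range(max(radius, x - win), min(w - radius, x + win + 1)):
            center = gray[j, i]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if gray[j + dy, i + dx] >= center:
                    code |= 1 << bit
            hist[code] += 1
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Using a histogram over a window, rather than a single code per pixel, is what makes the feature robust to small local motion and to the illumination changes mentioned above, since LBP codes depend only on intensity ordering, not absolute brightness.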
The histogram matching module 23 finds, for each pixel, the matching texture histogram in the background model according to that pixel's local texture histogram. The pixel classification module 24 classifies each pixel as a foreground pixel or a background pixel according to the matching result, thereby obtaining a foreground image.
The background model update module 25 updates the background model using the foreground image and the current input image.
The histogram matching module 23 comprises the following functional modules: a similarity calculation module, which uses a similarity measure to compute the similarity between the local texture histogram obtained by the texture histogram calculation module and every texture histogram in the background model; a comparison module, which determines whether one or more of the computed similarities exceed a predetermined threshold; and a decision module, which, when all similarities are below the predetermined threshold, deletes the texture histogram with the smallest weight from the background model and adds the local texture histogram of the current pixel to the background model, and, when one or more similarities exceed the predetermined threshold, selects the texture histogram in the background model with the largest similarity value as the matching histogram of the current pixel.
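The matching logic of the three functional modules above can be sketched in one function. Histogram intersection is assumed as the similarity measure (the patent only requires some similarity measure and threshold), and the initial weight given to a newly inserted histogram is an illustrative value.

```python
import numpy as np

def find_match(model_hists, model_weights, h_t, threshold=0.7):
    """Match h_t against one pixel's background histograms.

    model_hists   : (K, B) array of texture histograms
    model_weights : (K,) array of their weights
    h_t           : (B,) local texture histogram of the current pixel
    Returns the index of the matching histogram, or None in the
    no-match case (after replacing the lowest-weight histogram).
    """
    # Similarity of h_t to every histogram: histogram intersection.
    sims = np.minimum(model_hists, h_t).sum(axis=1)
    if np.all(sims < threshold):
        # No match: delete the lowest-weight histogram and insert h_t.
        worst = int(np.argmin(model_weights))
        model_hists[worst] = h_t
        model_weights[worst] = 0.01  # small initial weight (illustrative)
        return None
    # Otherwise the histogram with the largest similarity is the match;
    # since at least one similarity meets the threshold, so does the max.
    return int(np.argmax(sims))
```

For normalized histograms the intersection lies in [0, 1], which makes a single fixed threshold meaningful across pixels.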
The pixel classification module 24 comprises the following functional modules: a sorting module for sorting all texture histograms in the background model in descending order of weight; a selection module for selecting the set of texture histograms used to describe the background; a judgment module for determining whether the current pixel is a background pixel or a foreground pixel; and an output module for outputting the final foreground object detection result.
The judgment module compares the matching histogram of the current pixel with the histograms in the selected set describing the background; it determines the current pixel to be a background pixel when the matching histogram is in the set of histograms describing the background, and a foreground pixel when it is not.
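The sort-select-judge steps above can be sketched as follows. The selection rule used here (take the top-weighted histograms until their cumulative normalized weight reaches a ratio) is an assumption commonly used with weighted mixture background models; the patent only states that a background histogram set is selected after sorting by weight.

```python
import numpy as np

def classify_pixel(model_weights, match_idx, background_ratio=0.7):
    """Decide foreground/background for one pixel.

    model_weights : (K,) weights of the pixel's texture histograms
    match_idx     : index of the matching histogram, or None
    The top histograms whose cumulative normalized weight first reaches
    `background_ratio` form the background set (illustrative rule).
    """
    if match_idx is None:
        return 'foreground'  # no histogram in the model matched at all
    order = np.argsort(model_weights)[::-1]     # descending weight
    norm = model_weights / model_weights.sum()
    cum = np.cumsum(norm[order])
    n_bg = int(np.searchsorted(cum, background_ratio)) + 1
    background_set = set(order[:n_bg].tolist())
    return 'background' if match_idx in background_set else 'foreground'
```

High-weight histograms correspond to appearances seen persistently at that pixel, which is why membership of the match in the top-weight set is a reasonable background test.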
The background model update module 25 uses the current input image to update the matching histogram of each pixel and to update the weights of all texture histograms in the background model, increasing the weight of the matching histogram and decreasing the weights of the other histograms.
The apparatus according to the invention may comprise a computing device for performing the mathematical computations, running the software, and receiving the video image data captured by the image capture module 20.
The series of operations described in this specification can be performed by hardware, by software, or by a combination of hardware and software. When the operations are performed by software, the computer program may be installed into a memory of a computer built into dedicated hardware, and the computer then executes the program. Alternatively, the computer program may be installed into a general-purpose computer capable of performing various types of processing, which then executes the program.
For example, the computer program may be stored in advance on a hard disk or in a ROM (read-only memory) serving as the recording medium. Alternatively, the computer program may be stored (recorded) temporarily or permanently on a removable recording medium such as a floppy disk, a CD-ROM (compact disc read-only memory), an MO (magneto-optical) disc, a DVD (digital versatile disc), a magnetic disk, or a semiconductor memory. Such removable recording media can be provided as packaged software.
The present invention has been described in detail with reference to specific embodiments. It is evident, however, that those skilled in the art may make modifications and substitutions to the embodiments without departing from the spirit of the invention. In other words, the invention is disclosed by way of illustration and is not to be construed restrictively. The appended claims should be considered in determining the gist of the invention.

Claims (10)

1. A foreground object detection method, comprising the steps of:
(a) continuously acquiring images of a monitored area to obtain a video image sequence that includes a foreground object;
(b) creating a background model composed of a plurality of texture histograms;
(c) taking the acquired current image as an input image and calculating a local texture histogram for each pixel in the image;
(d) finding, for each pixel, the matching texture histogram in the background model according to that pixel's local texture histogram; and
(e) classifying each pixel as a foreground pixel or a background pixel according to the matching result, thereby obtaining a foreground image.
2. The foreground object detection method of claim 1, further comprising the step of:
(f) updating the background model using the foreground image and the current input image, including:
updating the matching histogram of each pixel using the current input image; and
updating the weights of all texture histograms in the background model, wherein the weight of the matching histogram is increased and the weights of the other histograms are decreased.
3. The foreground object detection method of claim 2, wherein said step (d) comprises:
computing, using a similarity measure, the similarity between the local texture histogram obtained in step (c) and every texture histogram in the background model;
determining whether one or more of the computed similarities exceed a predetermined threshold;
when all similarities are below the predetermined threshold, deleting the texture histogram with the smallest weight from the background model and adding the local texture histogram of the current pixel to the background model; and
when one or more similarities exceed the predetermined threshold, selecting the texture histogram in the background model with the largest similarity value as the matching histogram of the current pixel.
4. The foreground object detection method of claim 1, wherein said step (e) comprises:
sorting all texture histograms in the background model in descending order of weight;
selecting the set of texture histograms used to describe the background;
determining whether the current pixel is a background pixel or a foreground pixel; and
outputting the final foreground object detection result.
5. The foreground object detection method of claim 4, wherein determining whether the current pixel is a background pixel or a foreground pixel comprises: comparing the matching histogram of the current pixel with the selected set of histograms describing the background; if the matching histogram is in the set of histograms describing the background, the current pixel is a background pixel; otherwise, the current pixel is a foreground pixel.
6. A foreground object detection apparatus, comprising:
an image capture module for continuously acquiring images of a monitored area to obtain a video image sequence that includes a foreground object;
a background modeling module for creating a background model composed of a plurality of texture histograms;
a texture histogram calculation module for calculating a local texture histogram for each pixel in the acquired current image;
a histogram matching module for finding, for each pixel, the matching texture histogram in the background model according to that pixel's local texture histogram; and
a pixel classification module for classifying each pixel as a foreground pixel or a background pixel according to the matching result, thereby obtaining a foreground image.
7. The foreground object detection apparatus of claim 6, further comprising:
a background model update module for updating the background model according to the foreground image and the current input image, wherein the background model update module uses the current input image to update the matching histogram of each pixel and to update the weights of all texture histograms in the background model, increasing the weight of the matching histogram and decreasing the weights of the other histograms.
8. The foreground object detection apparatus of claim 7, wherein said histogram matching module further comprises:
a similarity calculation module that uses a similarity measure to compute the similarity between the local texture histogram obtained by the texture histogram calculation module and every texture histogram in the background model;
a comparison module that determines whether one or more of the computed similarities exceed a predetermined threshold; and
a decision module that, when all similarities are below the predetermined threshold, deletes the texture histogram with the smallest weight from the background model and adds the local texture histogram of the current pixel to the background model, and, when one or more similarities exceed the predetermined threshold, selects the texture histogram in the background model with the largest similarity value as the matching histogram of the current pixel.
9. The foreground object detection apparatus of claim 6, wherein said pixel classification module comprises:
a sorting module for sorting all texture histograms in the background model in descending order of weight;
a selection module for selecting the set of texture histograms used to describe the background;
a judgment module for determining whether the current pixel is a background pixel or a foreground pixel; and
an output module for outputting the final foreground object detection result.
10. The foreground object detection apparatus of claim 9, wherein the judgment module compares the matching histogram of the current pixel with the histograms in the selected set describing the background, determines the current pixel to be a background pixel when the matching histogram is in the set of histograms describing the background, and determines the current pixel to be a foreground pixel when the matching histogram is not in that set.
CN2010102519024A 2010-08-03 2010-08-03 Method and device for carrying out detection on foreground objects Pending CN102346854A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010102519024A CN102346854A (en) 2010-08-03 2010-08-03 Method and device for carrying out detection on foreground objects


Publications (1)

Publication Number Publication Date
CN102346854A true CN102346854A (en) 2012-02-08

Family

ID=45545515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010102519024A Pending CN102346854A (en) 2010-08-03 2010-08-03 Method and device for carrying out detection on foreground objects

Country Status (1)

Country Link
CN (1) CN102346854A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101448151A (en) * 2007-11-28 2009-06-03 汉王科技股份有限公司 Motion detecting device for estimating self-adapting inner core density and method therefor
EP2093699A1 (en) * 2008-02-19 2009-08-26 British Telecommunications Public Limited Company Movable object status determination


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MARKO HEIKKILA et al.: "A Texture-Based Method for Modeling the Background and Detecting Moving Objects", Pattern Analysis and Machine Intelligence *
SHENGPING ZHANG et al.: "Dynamic Background Modeling and Subtraction Using Spatio-Temporal Local Binary Patterns", Image Processing, 2008 (ICIP 2008), 15th IEEE International Conference *
LI Bin et al.: "Texture-based moving object detection", Computer Engineering and Applications *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102629369A (en) * 2012-02-27 2012-08-08 天津大学 Single color image shadow removal method based on illumination surface modeling
CN103729613A (en) * 2012-10-12 2014-04-16 浙江大华技术股份有限公司 Method and device for detecting video image
CN104077788B (en) * 2014-07-10 2017-02-15 中国科学院自动化研究所 Moving object detection method fusing color and texture information for performing block background modeling
CN104077788A (en) * 2014-07-10 2014-10-01 中国科学院自动化研究所 Moving object detection method fusing color and texture information for performing block background modeling
CN106651919A (en) * 2015-10-27 2017-05-10 富士通株式会社 Initialization apparatus and method of background image model, and image processing device
CN105681898A (en) * 2015-12-31 2016-06-15 北京奇艺世纪科技有限公司 Similar video and pirated video detection method and device
CN105681899A (en) * 2015-12-31 2016-06-15 北京奇艺世纪科技有限公司 Method and device for detecting similar video and pirated video
CN105657547A (en) * 2015-12-31 2016-06-08 北京奇艺世纪科技有限公司 Detection method and device for similar video and pirated video
CN105681898B (en) * 2015-12-31 2018-10-30 北京奇艺世纪科技有限公司 A kind of detection method and device of similar video and pirate video
CN105657547B (en) * 2015-12-31 2019-05-10 北京奇艺世纪科技有限公司 A kind of detection method and device of similar video and pirate video
CN105681899B (en) * 2015-12-31 2019-05-10 北京奇艺世纪科技有限公司 A kind of detection method and device of similar video and pirate video
CN106327473A (en) * 2016-08-10 2017-01-11 北京小米移动软件有限公司 Method and device for acquiring foreground images
CN109643446A (en) * 2016-08-15 2019-04-16 精工爱普生株式会社 Circuit device, electronic equipment and error-detecting method
CN109479120A (en) * 2016-10-14 2019-03-15 富士通株式会社 Extraction element, traffic congestion detection method and the device of background model
CN110933304A (en) * 2019-11-27 2020-03-27 RealMe重庆移动通信有限公司 Method and device for determining to-be-blurred region, storage medium and terminal equipment
CN111126176A (en) * 2019-12-05 2020-05-08 山东浪潮人工智能研究院有限公司 Monitoring and analyzing system and method for specific environment

Similar Documents

Publication Publication Date Title
CN102346854A (en) Method and device for carrying out detection on foreground objects
CN103914702B (en) System and method for improving the object detection performance in video
Wang et al. Automated sewer pipe defect tracking in CCTV videos based on defect detection and metric learning
CN107909081B (en) Method for quickly acquiring and quickly calibrating image data set in deep learning
CN104978567B (en) Vehicle checking method based on scene classification
KR101414670B1 (en) Object tracking method in thermal image using online random forest and particle filter
CN109685045A (en) A kind of Moving Targets Based on Video Streams tracking and system
CN115995063A (en) Work vehicle detection and tracking method and system
CN108734200B (en) Human target visual detection method and device based on BING (building information network) features
CN114565675A (en) Method for removing dynamic feature points at front end of visual SLAM
Ali et al. Vehicle detection and tracking in UAV imagery via YOLOv3 and Kalman filter
CN102314591A (en) Method and equipment for detecting static foreground object
KR101690050B1 (en) Intelligent video security system
CN108765463B (en) Moving target detection method combining region extraction and improved textural features
Chen et al. Image segmentation based on mathematical morphological operator
Nandhini et al. SIFT algorithm-based Object detection and tracking in the video image
WO2022045877A1 (en) A system and method for identifying occupancy of parking lots
CN108257148A (en) The target of special object suggests window generation method and its application in target following
CN115965613A (en) Cross-layer connection construction scene crowd counting method based on cavity convolution
Padmashini et al. Vision based algorithm for people counting using deep learning
Williams et al. Detecting marine animals in underwater video: Let's start with salmon
Dadgostar et al. Gesture-based human–machine interfaces: a novel approach for robust hand and face tracking
Leykin et al. A vision system for automated customer tracking for marketing analysis: Low level feature extraction
Ravanbakhsh et al. An application of shape-based level sets to fish detection in underwater images.
Yang et al. The large-scale crowd analysis based on sparse spatial-temporal local binary pattern

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120208