CN103839278A - Foreground detecting method and device - Google Patents

Foreground detecting method and device

Info

Publication number
CN103839278A
Authority
CN
China
Prior art keywords
block
image
pixel
sub-block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410079009.6A
Other languages
Chinese (zh)
Inventor
李茂林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ingenic Semiconductor Co Ltd
Original Assignee
Beijing Ingenic Semiconductor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ingenic Semiconductor Co Ltd filed Critical Beijing Ingenic Semiconductor Co Ltd
Priority to CN201410079009.6A priority Critical patent/CN103839278A/en
Publication of CN103839278A publication Critical patent/CN103839278A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a foreground detection method and device that adapt automatically to sudden background changes and improve the accuracy of foreground detection. The foreground detection method comprises the following steps: after a new frame of video image is obtained, the current frame of video image is divided into a number of non-overlapping image sub-blocks according to a preset partitioning rule; for each image sub-block of the current frame of video image, the following operations are performed: whether the current image sub-block is a foreground block or a background block with respect to the texture feature is judged according to a constructed texture feature model; whether the current image sub-block is a foreground block or a background block with respect to the color feature is judged according to a constructed motion history image and a mixed color feature model; if the current image sub-block is judged to be a foreground block on both the color feature and the texture feature, it is determined to be a foreground block, otherwise it is determined to be a background block.

Description

Foreground detection method and device
Technical field
The present invention relates to the technical field of video surveillance, and in particular to a foreground detection method and device.
Background art
An intelligent video surveillance system applies image processing, pattern recognition and computer vision techniques: an intelligent video analysis module is added to the surveillance system, and the powerful data-processing capability of the computer is used to analyze and recognize the video images automatically, thereby realizing a fully automatic, real-time monitoring system, which is of significant practical value.
Foreground detection (also referred to as moving object detection) is one of the core technologies of an intelligent video surveillance system. Foreground detection means extracting from a video image sequence the regions corresponding to moving targets; in traffic surveillance, for example, it means extracting the pedestrians, vehicles and so on in the scene from the video image sequence. It should be noted that the background refers to the relatively stable scene structure formed by the objects of no interest in the scene, while the foreground refers to the scene structure formed by the moving targets of interest. Foreground and background are relative concepts. Taking a highway as an example: if one is interested in the cars traveling on the highway, the cars are the foreground and the road surface and the surrounding environment are the background; if one is only interested in pedestrians intruding onto the highway, the intruders are the foreground and everything else, including the cars, is the background.
Current foreground detection algorithms can be roughly divided into three classes: (1) background subtraction algorithms; (2) temporal differencing algorithms; (3) optical flow algorithms. From the viewpoint of real-time monitoring, background subtraction is generally adopted. Among background subtraction algorithms, the mixture-of-Gaussians background modeling algorithm, based on a Gaussian mixture model, can to a certain extent adapt to slow background changes (background gradual change) such as swaying leaves or rippling water. The Gaussian mixture model uses K (generally 3 to 5) Gaussian components to represent the distribution of each pixel in the video image. After a new frame of video image is obtained, the Gaussian mixture model is updated and each pixel of the current frame of video image is matched against it; if the match succeeds, the pixel is judged to be a background point, otherwise it is judged to be a foreground point. A Gaussian mixture model is mainly determined by two parameters, the mean and the variance, and the learning mechanism adopted for them directly affects the stability, accuracy and convergence of the model. The classical mixture-of-Gaussians background modeling algorithm has been widely applied in foreground detection, but because it uses color as the feature descriptor it is sensitive to illumination; in particular, under abrupt illumination changes the false detection rate rises sharply.
In real life the environment is complex and changeable, and a dynamically changing background strongly affects the effect of foreground detection. The mixture-of-Gaussians background modeling algorithm based on a Gaussian mixture model has difficulty adapting to sudden background changes (background transitions) such as abrupt illumination changes or drifting clouds, which impairs the accuracy of foreground detection.
Summary of the invention
The embodiments of the present invention provide a foreground detection method and device that can adapt to sudden background changes and improve the accuracy of foreground detection.
The foreground detection method provided by the embodiments of the present invention comprises:
after a new frame of video image is obtained, dividing the current frame of video image into a number of non-overlapping image sub-blocks according to a preset partitioning rule;
for each image sub-block of the current frame of video image, performing the following operations respectively:
judging, according to a constructed texture feature model, whether the current image sub-block is a foreground block or a background block with respect to the texture feature, wherein the texture feature model represents the set of pixel binary codes of all the pixels of each image sub-block of the video image;
judging, according to a constructed motion history image and a mixed color feature model, whether the current image sub-block is a foreground block or a background block with respect to the color feature, wherein the mixed color feature model comprises a Gaussian mixture model and a uniform distribution model, the Gaussian mixture model representing the probability that an image sub-block is a background block and the uniform distribution model representing the probability that an image sub-block is a foreground block;
if the current image sub-block is judged to be a foreground block on both the color feature and the texture feature, determining that the current image sub-block is a foreground block; otherwise, determining that it is a background block.
The foreground detection device provided by the embodiments of the present invention comprises:
an image partitioning module, configured to divide, after a new frame of video image is obtained, the current frame of video image into a number of non-overlapping image sub-blocks according to a preset partitioning rule;
a texture judging module, configured to judge, for each image sub-block of the current frame of video image and according to the constructed texture feature model, whether the current image sub-block is a foreground block or a background block with respect to the texture feature, wherein the texture feature model represents the set of pixel binary codes of all the pixels of an image sub-block;
a color judging module, configured to judge, for each image sub-block of the current frame of video image and according to the constructed motion history image and mixed color feature model, whether the current image sub-block is a foreground block or a background block with respect to the color feature, wherein the mixed color feature model comprises a Gaussian mixture model and a uniform distribution model, the Gaussian mixture model representing the probability that an image sub-block is a background block and the uniform distribution model representing the probability that an image sub-block is a foreground block;
a fusion judging module, configured to determine that the current image sub-block is a foreground block if the texture judging module judges it to be a foreground block with respect to the texture feature and the color judging module judges it to be a foreground block with respect to the color feature, and otherwise to determine that it is a background block.
The foreground detection method and device provided by the embodiments of the present invention adopt the idea of image partitioning: a texture feature model is built from block texture features, a mixed color feature model is built from block color features (the mixed color feature model comprising a uniform distribution model for the foreground and a Gaussian mixture model for the background), and the mixed color feature model and the texture feature model are combined into a color-texture mixture model. The texture feature model copes with sudden illumination changes, while the mixed color feature model fully accounts for the distributions of both the foreground and the background. In addition, a motion history image is built from the video image sequence over a period of time; when a new frame of video image is obtained, the motion history image is first used to obtain a rough foreground, and the constructed color-texture mixture model together with the block features of each image sub-block of the current frame is then used to obtain a more accurate foreground, effectively improving the accuracy of foreground detection.
Further features and advantages of the application will be set forth in the following description, will in part be apparent from the specification, or may be learned by practicing the application. The objects and other advantages of the application can be realized and attained by the structures particularly pointed out in the written specification, the claims and the accompanying drawings.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and form a part of the specification; together with the embodiments of the present invention they serve to explain the present invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flowchart of the foreground detection method in an embodiment of the present invention;
Fig. 2 is a block diagram of the foreground detection device in an embodiment of the present invention;
Fig. 3 is a block diagram of one possible structure of the texture judging module in an embodiment of the present invention;
Fig. 4 is a block diagram of one possible structure of the color judging module in an embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention provide a foreground detection method and device that can adapt to sudden background changes such as abrupt illumination changes and drifting clouds, and improve the accuracy of foreground detection. Preferred embodiments of the present invention are described below with reference to the accompanying drawings; it should be understood that the preferred embodiments described here are only intended to illustrate and explain the present invention, not to limit it. Moreover, the embodiments of the application and the features of the embodiments may be combined with one another provided that they do not conflict.
The foreground detection algorithm and the related models adopted in the embodiments of the present invention are described first. On the basis of the mixture-of-Gaussians background modeling algorithm, the embodiments of the present invention provide an improved algorithm based on a color-texture mixture model. The improved algorithm adopts the idea of image partitioning: each newly obtained frame of video image is first partitioned into blocks, block features are used in place of the original per-pixel features, and foreground detection is performed on the new frame using the block features and the constructed color-texture mixture model. The color-texture mixture model refers, on the one hand, to a texture feature model built with a block image coding algorithm and, on the other hand, to a mixed color feature model built from block color features, in which a Gaussian mixture model is established for the background, a uniform distribution model is established for the foreground, and the two models are combined.
The construction of the related models and the foreground detection algorithm are described in detail below.
1. Building the motion history image from the video image sequence within a selected time period of the video.
A motion history image (Motion History Image, MHI) records, within a period of time, the historical trajectory of the continuous motion of the moving targets. In a concrete implementation, for the video image sequence captured by a camera, frame differencing and thresholding are first used to obtain the motion edges of the moving targets, and the motion history image is then built using timestamps.
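For illustration only, the following is a minimal sketch of how such a motion history image might be maintained, assuming 8-bit grayscale frames; the function name update_mhi and the parameter values (the difference threshold and the history duration) are hypothetical choices and are not specified by the patent:

```python
import numpy as np

def update_mhi(mhi, prev_frame, curr_frame, timestamp,
               diff_thresh=25, duration=30):
    """Update a motion history image (MHI) with one new frame.

    mhi         : float array holding, per pixel, the timestamp of the last detected motion
    prev_frame  : previous grayscale frame (uint8)
    curr_frame  : current grayscale frame (uint8)
    timestamp   : current time index (e.g. the frame number)
    diff_thresh : frame-difference threshold separating motion from noise
    duration    : how many time units a motion trace is kept in the MHI
    """
    # Frame differencing + thresholding -> binary motion mask (motion edges)
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    motion_mask = diff > diff_thresh

    # Stamp the current time where motion occurred; let old traces expire elsewhere
    mhi[motion_mask] = timestamp
    mhi[~motion_mask & (mhi < timestamp - duration)] = 0
    return mhi
```

Blocks whose region in the MHI still contains recent (non-zero) entries can then be treated as the rough foreground region of interest mentioned later in this description.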
2. Building the texture feature model.
For each frame of video image in the video, the current frame of video image obtained is partitioned according to the preset partitioning rule into a number of non-overlapping image sub-blocks, and the texture description map of each image sub-block is determined with a block image coding algorithm; the texture feature model is then built from the texture description maps of the image sub-blocks of each frame of video image.
In a concrete implementation, a commonly used block image coding algorithm is the BTC (block truncation coding) algorithm. As a lossy image coding technique, the BTC algorithm has low computational cost and high speed, tolerates channel errors well, and gives relatively high reconstructed image quality. The construction of the texture feature model with the standard BTC algorithm is described below.
Step a1: divide the current frame of video image into a number of non-overlapping n × n image sub-blocks, and determine for each image sub-block the mean m of the pixel feature values of its pixels according to formula [1]:
m = \frac{1}{n \times n} \sum_{i=1}^{n} \sum_{j=1}^{n} x_{ij}    [1]
where the subscripts i, j denote the coordinates of a pixel within the image sub-block, and x_{ij} denotes the pixel feature value of the pixel at coordinates (i, j) in the image sub-block.
Step a2: for each image sub-block, compare the pixel feature value of each pixel of the current image sub-block with the mean of the sub-block according to formula [2], and determine the pixel binary code of each pixel:
b_{ij} = \begin{cases} 1, & x_{ij} \ge m \\ 0, & x_{ij} < m \end{cases}    [2]
where the subscripts i, j denote the coordinates of a pixel within the image sub-block, x_{ij} denotes the pixel feature value of the pixel at coordinates (i, j), and b_{ij} denotes the pixel binary code of that pixel. The binary map formed by the pixel binary codes of all the pixels of an image sub-block is the texture description map of the sub-block.
Step a3: build the texture feature model from the texture description maps of the image sub-blocks of each frame of video image. In a concrete implementation, the texture feature model Q(B_t) is built according to formula [4]; Q(B_t) represents the set of pixel binary codes of all the pixels of each image sub-block of the video image at time t.
Q(B_t) = \{B_t\}    [4]
where Q(B_t) denotes the texture feature model and B_t denotes the block texture feature value of the current image sub-block at time t, i.e. the set of pixel binary codes of all its pixels.
Considering the smoothness of video images, the embodiments of the present invention improve on the standard BTC algorithm and provide a new block image coding algorithm, referred to here as the improved BTC algorithm. The improved BTC algorithm adds a new parameter to the computation of the pixel binary codes: a smoothing threshold τ, whose value is generally set manually in advance. The construction of the texture feature model based on the improved BTC algorithm is described below.
Step b1: divide the current frame of video image into a number of non-overlapping n × n image sub-blocks, and determine for each image sub-block the mean m of the pixel feature values of its pixels according to formula [1]:
m = \frac{1}{n \times n} \sum_{i=1}^{n} \sum_{j=1}^{n} x_{ij}    [1]
where the subscripts i, j denote the coordinates of a pixel within the image sub-block, and x_{ij} denotes the pixel feature value of the pixel at coordinates (i, j) in the image sub-block.
Step b2: for each image sub-block, compare the pixel feature value of each pixel of the current image sub-block with the mean of the sub-block according to formula [3], and determine the pixel binary code of each pixel:
b_{ij} = \begin{cases} 0, & x_{ij} < m + \tau \\ 1, & x_{ij} \ge m + \tau \end{cases}    [3]
where the subscripts i, j denote the coordinates of a pixel within the image sub-block, x_{ij} denotes the pixel feature value of the pixel at coordinates (i, j), b_{ij} denotes the pixel binary code of that pixel, and τ denotes the preset smoothing threshold. The binary map formed by the pixel binary codes of all the pixels of an image sub-block is the texture description map of the sub-block.
Step b3: build the texture feature model from the texture description maps of the image sub-blocks of each frame of video image. In a concrete implementation, the texture feature model Q(B_t) is built according to formula [4]; Q(B_t) represents the set of pixel binary codes of all the pixels of each image sub-block of the video image at time t.
Q(B_t) = \{B_t\}    [4]
where B_t denotes the block texture feature value of the current image sub-block at time t, i.e. the set of pixel binary codes of all its pixels. After the pixel binary codes are obtained according to formula [3], bit operations can be used to compute the difference value, which keeps the computational complexity low.
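For illustration, the following is a minimal sketch of computing the block texture description maps of formulas [1] and [3], assuming grayscale pixel values are used as the pixel feature values; the block size n and the smoothing threshold tau are hypothetical choices, and setting tau = 0 recovers the standard BTC binarization of formula [2]:

```python
import numpy as np

def block_texture_codes(frame, n=8, tau=5):
    """For every non-overlapping n x n sub-block of a grayscale frame, compute
    the binary texture description map defined by formulas [1] and [3].

    Returns a boolean array with the shape of the (cropped) frame, in which
    each n x n tile holds the pixel binary codes b_ij of that block.
    """
    h, w = frame.shape
    h, w = h - h % n, w - w % n                  # crop so the frame tiles exactly
    f = frame[:h, :w].astype(np.float32)

    # Formula [1]: per-block mean m, broadcast back to pixel resolution
    blocks = f.reshape(h // n, n, w // n, n)
    m = blocks.mean(axis=(1, 3))                 # one mean per block
    m_full = np.repeat(np.repeat(m, n, axis=0), n, axis=1)

    # Formula [3]: b_ij = 1 if x_ij >= m + tau, else 0
    return f >= (m_full + tau)
```

Two such maps, one for the block at time t and one for the block at the same position in the next frame, can then be compared with a bitwise AND, which is the intersection summed in formula [8] below.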
3. Building the mixed color feature model.
In a concrete implementation, on the one hand a Gaussian mixture model P(X_t) is established according to formula [5], where P(X_t) represents the probability that the image sub-block at time t is a background block; on the other hand a uniform distribution model U(X_t) is established according to formula [6], where U(X_t) represents the probability that the image sub-block at time t is a foreground block; the two models are combined to form the mixed color feature model.
P(X_t) = \sum_{k=1}^{K} \omega_{k,t}\, \eta(X_t, \mu_{k,t}, \Sigma_{k,t})    [5]
where X_t denotes the block color feature value of the image sub-block at time t, i.e. the mean of the pixel feature values of its pixels; K denotes the number of Gaussian components; η denotes the Gaussian density function; ω_{k,t} denotes the weight of the k-th Gaussian component at time t; μ_{k,t} and Σ_{k,t} denote respectively the mean and the standard deviation of the k-th Gaussian component at time t; and the parameter values of ω_{k,t}, μ_{k,t} and Σ_{k,t} are determined by learning from the video image sequence within the selected time period of the video.
U(X_t) = \frac{1}{256}    [6]
where X_t denotes the block color feature value of the image sub-block at time t.
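For illustration, a minimal sketch of the two color models of formulas [5] and [6], assuming 8-bit block color feature values (hence the 1/256 uniform density); weights, means and stds stand for the learned parameters ω_{k,t}, μ_{k,t} and Σ_{k,t} of the K Gaussian components and are assumed to be given:

```python
import numpy as np

def background_prob(x, weights, means, stds):
    """Formula [5]: Gaussian mixture probability that a block whose color
    feature value is x is a background block."""
    norm = 1.0 / (np.sqrt(2.0 * np.pi) * stds)
    densities = norm * np.exp(-0.5 * ((x - means) / stds) ** 2)
    return float(np.sum(weights * densities))

def foreground_prob(x):
    """Formula [6]: uniform distribution model over 8-bit feature values."""
    return 1.0 / 256.0
```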
4. After a new frame of video image is obtained, divide the current frame of video image into a number of non-overlapping image sub-blocks according to the preset partitioning rule, and judge, according to the constructed texture feature model, motion history image and mixed color feature model, whether each image sub-block is a foreground block or a background block.
Since the motion history image records the historical trajectory of the continuous motion of the moving targets, in order to reduce computational complexity the historical trajectory in the motion history image is taken as the region of interest, i.e. the rough foreground. On the basis of the motion history image, the embodiments of the present invention then further distinguish foreground blocks from background blocks.
This is a two-class classification problem. Analyzed from the color feature, according to Bayesian decision theory the current image sub-block belongs to the class with the larger posterior probability: by formula [7], if the discriminant function f(X_t) > 0 the image sub-block is a foreground block, otherwise it is a background block. Analyzed from the texture feature, by formula [8], if SUM(B_t, B_new) is greater than the preset texture feature threshold the image sub-block is a background block, otherwise it is a foreground block.
f(X_t) = U(X_t)\,\varphi(fg) - P(X_t)\,\varphi(bg)    [7]
SUM(B_t, B_{new}) = \sum_{i=1}^{n} \sum_{j=1}^{n} \left( b_{ij}^{new} \cap b_{ij} \right)    [8]
where φ(fg) and φ(bg) denote respectively the foreground prior probability and the background prior probability, which can be determined from the proportions of the area of the current frame of video image occupied by the foreground and by the background; B_t denotes the block texture feature value of the image sub-block at time t, the block texture feature value of an image sub-block being the pixel binary codes of all the pixels of the sub-block; B_{new} denotes the block texture feature value of the image sub-block at the same position in the next frame of video image; b_{ij} denotes the pixel binary code of the pixel at coordinates (i, j) in the image sub-block at time t; and b_{ij}^{new} denotes the pixel binary code of the pixel at coordinates (i, j) in the image sub-block at the same position in the next frame of video image.
Finally, if the current image sub-block is judged to be a foreground block on both the texture feature and the color feature, the image sub-block is a foreground block; otherwise it is a background block.
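For illustration, a minimal sketch of this per-block fusion decision, following the description above (formula [7] for the color cue and formula [8] for the texture cue, with a large intersection treated as an unchanged, background-like texture); the binary maps codes_t and codes_new can be produced by a helper such as block_texture_codes sketched earlier, and texture_thresh is a hypothetical preset texture feature threshold:

```python
import numpy as np

def classify_block(codes_t, codes_new, x_t,
                   weights, means, stds,
                   phi_fg, phi_bg, texture_thresh):
    """Return True if the block is judged to be a foreground block.

    codes_t, codes_new : boolean n x n texture maps of the block at time t and
                         of the block at the same position in the next frame
    x_t                : block color feature value (mean pixel value of the block)
    weights/means/stds : learned parameters of the K Gaussian components
    phi_fg, phi_bg     : foreground / background priors obtained from the MHI
    """
    # Texture cue, formula [8]: intersection count of the two binary maps.
    texture_sum = int(np.sum(codes_t & codes_new))
    fg_on_texture = texture_sum <= texture_thresh

    # Color cue, formula [7]: compare U(X_t)*phi(fg) with P(X_t)*phi(bg).
    gauss = weights * np.exp(-0.5 * ((x_t - means) / stds) ** 2) \
            / (np.sqrt(2.0 * np.pi) * stds)
    p_background = float(np.sum(gauss))          # P(X_t), formula [5]
    u_foreground = 1.0 / 256.0                   # U(X_t), formula [6]
    fg_on_color = u_foreground * phi_fg > p_background * phi_bg

    # Fusion: the block is foreground only if both cues say foreground.
    return fg_on_texture and fg_on_color
```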
The motion history image records the historical trajectory of the moving targets over a period of time and therefore has to be updated in time. In order to adapt to dynamic changes of the background, the Gaussian mixture model also needs to be updated in real time; the update method of the Gaussian mixture model is the same as in the prior art and is not repeated here.
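For reference only, a sketch of the conventional online update used in prior-art mixture-of-Gaussians background modeling (a Stauffer-Grimson style update; the learning rate α is a user-chosen parameter and is not specified by the patent). For the component matched by the new observation X_t:

```latex
\omega_{k,t} = (1-\alpha)\,\omega_{k,t-1} + \alpha, \qquad
\mu_{k,t} = (1-\rho)\,\mu_{k,t-1} + \rho\,X_t, \qquad
\sigma_{k,t}^{2} = (1-\rho)\,\sigma_{k,t-1}^{2} + \rho\,(X_t-\mu_{k,t})^{2},
\qquad \rho = \alpha\,\eta(X_t \mid \mu_{k,t-1}, \sigma_{k,t-1})
```

while the weights of the unmatched components are decayed as ω_{k,t} = (1−α)·ω_{k,t−1}, and all weights are renormalized to sum to one.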
Based on the above description of the foreground detection algorithm and the related models, the embodiments of the present invention provide a corresponding foreground detection method, which is described by taking the foreground detection of one frame of video image in a video as an example. As shown in Fig. 1, the method comprises the following steps:
S101: obtain a new frame of video image from the video.
S102: divide the current frame of video image into a number of non-overlapping image sub-blocks according to the preset partitioning rule.
S103: judge whether there is still an image sub-block of the current frame of video image on which foreground detection has not been performed; if so, perform S104; otherwise, the foreground detection of the current frame of video image is complete. Subsequently, the next frame of video image can be obtained from the video and subjected to foreground detection, until foreground detection has been completed for all frames of video image in the video.
S104: select from the current frame of video image one image sub-block on which foreground detection has not been performed.
S105: judge, according to the constructed texture feature model, whether the current image sub-block is a foreground block or a background block with respect to the texture feature, wherein the texture feature model represents the set of pixel binary codes of all the pixels of an image sub-block.
In a concrete implementation of S105, the sum over corresponding positions of the intersections of the pixel binary codes of the current image sub-block and of the image sub-block at the same position in the next frame of video image is first determined; it is then judged whether this sum is greater than the preset texture feature threshold: if so, the current image sub-block is judged to be a foreground block with respect to the texture feature, otherwise it is judged to be a background block with respect to the texture feature.
S106: judge, according to the constructed motion history image and mixed color feature model, whether the current image sub-block is a foreground block or a background block with respect to the color feature, wherein the mixed color feature model comprises a Gaussian mixture model and a uniform distribution model, the Gaussian mixture model representing the probability that an image sub-block is a background block and the uniform distribution model representing the probability that an image sub-block is a foreground block.
In a concrete implementation of S106, the block color feature value of the current image sub-block is first determined, the block color feature value of an image sub-block being the mean of the pixel feature values of its pixels. Then, according to the mean pixel feature value of the current image sub-block and the constructed Gaussian mixture model, the probability that the current image sub-block is a background block is determined; and, according to the block color feature value of the current image sub-block and the constructed uniform distribution model, the probability that the current image sub-block is a foreground block is determined. It is then judged whether the product of the probability that the current image sub-block is a foreground block and the foreground prior probability is greater than the product of the probability that it is a background block and the background prior probability: if so, the current image sub-block is judged to be a foreground block, otherwise it is judged to be a background block. The foreground prior probability and the background prior probability are determined from the constructed motion history image: specifically, the foreground and the background of the current frame of video image are divided according to the constructed motion history image, and the foreground prior probability and the background prior probability are then determined from the proportions of the area of the current frame of video image occupied by the foreground and by the background.
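For illustration, a minimal sketch of deriving these two priors from the motion history image; the MHI is assumed to be maintained as sketched earlier, and the recency window recent is a hypothetical parameter deciding how much of the motion trail still counts as foreground:

```python
import numpy as np

def priors_from_mhi(mhi, timestamp, recent=30):
    """Estimate the foreground and background prior probabilities from the
    area of the frame covered by recent motion in the MHI."""
    fg_mask = (mhi > 0) & (mhi > timestamp - recent)   # recent motion = rough foreground
    phi_fg = float(fg_mask.mean())                      # foreground area ratio
    phi_bg = 1.0 - phi_fg                               # background area ratio
    return phi_fg, phi_bg
```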
S107: judge whether the current image sub-block has been judged to be a foreground block on both the color feature and the texture feature; if so, perform S108, otherwise perform S109.
S108: determine that the current image sub-block is a foreground block.
S109: determine that the current image sub-block is a background block.
Based on the same technical concept, the embodiments of the present invention further provide a foreground detection device. Since the principle by which the device solves the problem is the same as that of the foreground detection method, the implementation of the device may refer to the implementation of the method, and repeated parts are not described again.
As shown in Fig. 2, the foreground detection device provided by the embodiments of the present invention comprises:
an image partitioning module 201, configured to divide, after a new frame of video image is obtained, the current frame of video image into a number of non-overlapping image sub-blocks according to the preset partitioning rule;
a texture judging module 202, configured to judge, for each image sub-block of the current frame of video image and according to the constructed texture feature model, whether the current image sub-block is a foreground block or a background block with respect to the texture feature, wherein the texture feature model represents the set of pixel binary codes of all the pixels of an image sub-block;
a color judging module 203, configured to judge, for each image sub-block of the current frame of video image and according to the constructed motion history image and mixed color feature model, whether the current image sub-block is a foreground block or a background block with respect to the color feature, wherein the mixed color feature model comprises a Gaussian mixture model and a uniform distribution model, the Gaussian mixture model representing the probability that an image sub-block is a background block and the uniform distribution model representing the probability that an image sub-block is a foreground block;
a fusion judging module 204, configured to determine that the current image sub-block is a foreground block if the texture judging module 202 judges it to be a foreground block with respect to the texture feature and the color judging module 203 judges it to be a foreground block with respect to the color feature, and otherwise to determine that it is a background block.
In a concrete implementation, one possible structure of the texture judging module 202, as shown in Fig. 3, specifically comprises:
a storage sub-module 301, configured to store the constructed texture feature model;
a block texture feature summation sub-module 302, configured to determine, according to the constructed texture feature model, the sum over corresponding positions of the intersections of the pixel binary codes of the current image sub-block and of the image sub-block at the same position in the next frame of video image;
a texture judging sub-module 303, configured to judge whether the sum determined by the block texture feature summation sub-module 302 is greater than the preset texture feature threshold; if so, to judge that the current image sub-block is a foreground block with respect to the texture feature, otherwise to judge that it is a background block with respect to the texture feature.
In a concrete implementation, one possible structure of the color judging module 203, as shown in Fig. 4, specifically comprises:
a block color feature determining sub-module 401, configured to determine the block color feature value of the current image sub-block, the block color feature value of an image sub-block being the mean of the pixel feature values of its pixels;
a Gaussian mixture model sub-module 402, configured to determine, according to the block color feature value of the current image sub-block and the constructed Gaussian mixture model, the probability that the current image sub-block is a background block;
a uniform distribution model sub-module 403, configured to determine, according to the block color feature value of the current image sub-block and the constructed uniform distribution model, the probability that the current image sub-block is a foreground block;
a mixed color judging sub-module 404, configured to judge whether the product of the probability that the current image sub-block is a foreground block and the foreground prior probability is greater than the product of the probability that it is a background block and the background prior probability; if so, to judge that the current image sub-block is a foreground block, otherwise to judge that it is a background block, the foreground prior probability and the background prior probability being determined from the constructed motion history image.
The foreground detection device provided by the embodiments of the application can be implemented by a computer program. It should be understood by those skilled in the art that the above division into modules is only one of many possible divisions; other divisions into modules, or no division into modules at all, shall also fall within the scope of protection of the application as long as the foreground detection device has the functions described above.
In summary, the foreground detection method and device provided by the embodiments of the present invention adopt the idea of image partitioning, build a texture feature model from block texture features and a mixed color feature model from block color features (the latter comprising a uniform distribution model for the foreground and a Gaussian mixture model for the background), and combine the two into a color-texture mixture model; the texture feature model copes with sudden illumination changes, while the mixed color feature model fully accounts for the distributions of both the foreground and the background. A motion history image built from the video image sequence over a period of time is used, when a new frame of video image is obtained, to obtain a rough foreground, and the constructed color-texture mixture model together with the block features of each image sub-block of the current frame is then used to obtain a more accurate foreground, effectively improving the accuracy of foreground detection.
It will be understood by those skilled in the art that the embodiments of the present invention may be provided as a method, a system, a device or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, the device (system) and the computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, the instruction device realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thereby provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. The appended claims are therefore intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these changes and modifications fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.

Claims (10)

1. A foreground detection method, characterized by comprising:
after a new frame of video image is obtained, dividing the current frame of video image into a number of non-overlapping image sub-blocks according to a preset partitioning rule;
for each image sub-block of the current frame of video image, performing the following operations respectively:
judging, according to a constructed texture feature model, whether the current image sub-block is a foreground block or a background block with respect to the texture feature, wherein the texture feature model represents the set of pixel binary codes of all the pixels of each image sub-block of the video image;
judging, according to a constructed motion history image and a mixed color feature model, whether the current image sub-block is a foreground block or a background block with respect to the color feature, wherein the mixed color feature model comprises a Gaussian mixture model and a uniform distribution model, the Gaussian mixture model representing the probability that an image sub-block is a background block and the uniform distribution model representing the probability that an image sub-block is a foreground block;
if the current image sub-block is judged to be a foreground block on both the color feature and the texture feature, determining that the current image sub-block is a foreground block; otherwise, determining that it is a background block.
2. The method according to claim 1, characterized in that judging, according to the constructed texture feature model, whether the current image sub-block is a foreground block or a background block with respect to the texture feature specifically comprises:
determining the sum over corresponding positions of the intersections of the pixel binary codes of the current image sub-block and of the image sub-block at the same position in the next frame of video image;
judging whether the sum of the pixel binary code intersections is greater than a preset texture feature threshold; if so, judging that the current image sub-block is a foreground block with respect to the texture feature, otherwise judging that it is a background block with respect to the texture feature.
3. The method according to claim 2, characterized in that the method of determining the pixel binary code of a pixel of an image sub-block comprises:
determining the mean of the pixel feature values of the pixels of the image sub-block;
for each pixel of the image sub-block, if the pixel feature value of the current pixel is less than the sum of said mean and a preset smoothing threshold, determining that the pixel binary code of the current pixel is 0; otherwise, determining that the pixel binary code of the current pixel is 1.
4. The method according to claim 2, characterized in that said determining the sum over corresponding positions of the intersections of the pixel binary codes of the current image sub-block and of the image sub-block at the same position in the next frame of video image is realized by the formula SUM(B_t, B_{new}) = \sum_{i=1}^{n} \sum_{j=1}^{n} (b_{ij}^{new} \cap b_{ij}), wherein:
B_t denotes the block texture feature value of the image sub-block at time t, the block texture feature value of an image sub-block being the pixel binary codes of all the pixels of the image sub-block; B_{new} denotes the block texture feature value of the image sub-block at the same position in the next frame of video image; b_{ij} denotes the pixel binary code of the pixel at coordinates (i, j) in the image sub-block at time t; and b_{ij}^{new} denotes the pixel binary code of the pixel at coordinates (i, j) in the image sub-block at the same position in the next frame of video image.
5. The method according to claim 1 or 2, characterized in that judging, according to the constructed motion history image and mixed color feature model, whether the current image sub-block is a foreground block or a background block with respect to the color feature specifically comprises:
determining the block color feature value of the current image sub-block, the block color feature value of an image sub-block being the mean of the pixel feature values of the pixels of the image sub-block;
determining, according to the block color feature value of the current image sub-block and the constructed Gaussian mixture model, the probability that the current image sub-block is a background block; and determining, according to the block color feature value of the current image sub-block and the constructed uniform distribution model, the probability that the current image sub-block is a foreground block;
judging whether the product of the probability that the current image sub-block is a foreground block and a foreground prior probability is greater than the product of the probability that the current image sub-block is a background block and a background prior probability; if so, determining that the current image sub-block is a foreground block, otherwise determining that it is a background block, wherein the foreground prior probability and the background prior probability are determined from the constructed motion history image.
6. The method according to claim 5, characterized in that the Gaussian mixture model is expressed by the formula P(X_t) = \sum_{k=1}^{K} \omega_{k,t}\, \eta(X_t, \mu_{k,t}, \Sigma_{k,t}), wherein: X_t denotes the block color feature value of the image sub-block at time t; K denotes the number of Gaussian components; η denotes the Gaussian density function; ω_{k,t} denotes the weight of the k-th Gaussian component at time t; μ_{k,t} and Σ_{k,t} denote respectively the mean and the standard deviation of the k-th Gaussian component at time t; and the parameter values of ω_{k,t}, μ_{k,t} and Σ_{k,t} are determined by learning from the video image sequence within a selected time period of the video;
and the uniform distribution model is expressed by the formula U(X_t) = 1/256, wherein X_t denotes the block color feature value of the image sub-block at time t.
7. The method according to claim 4, characterized in that determining the foreground prior probability and the background prior probability from the constructed motion history image specifically comprises:
dividing the foreground and the background of the current frame of video image according to the constructed motion history image;
determining the foreground prior probability and the background prior probability according to the proportions of the area of the current frame of video image occupied by the foreground and by the background.
8. A foreground detection device, characterized by comprising:
an image partitioning module, configured to divide, after a new frame of video image is obtained, the current frame of video image into a number of non-overlapping image sub-blocks according to a preset partitioning rule;
a texture judging module, configured to judge, for each image sub-block of the current frame of video image and according to the constructed texture feature model, whether the current image sub-block is a foreground block or a background block with respect to the texture feature, wherein the texture feature model represents the set of pixel binary codes of all the pixels of an image sub-block;
a color judging module, configured to judge, for each image sub-block of the current frame of video image and according to the constructed motion history image and mixed color feature model, whether the current image sub-block is a foreground block or a background block with respect to the color feature, wherein the mixed color feature model comprises a Gaussian mixture model and a uniform distribution model, the Gaussian mixture model representing the probability that an image sub-block is a background block and the uniform distribution model representing the probability that an image sub-block is a foreground block;
a fusion judging module, configured to determine that the current image sub-block is a foreground block if the texture judging module judges it to be a foreground block with respect to the texture feature and the color judging module judges it to be a foreground block with respect to the color feature, and otherwise to determine that it is a background block.
9. The device according to claim 8, characterized in that the texture judging module specifically comprises:
a storage sub-module, configured to store the constructed texture feature model;
a block texture feature summation sub-module, configured to determine, according to the constructed texture feature model, the sum over corresponding positions of the intersections of the pixel binary codes of the current image sub-block and of the image sub-block at the same position in the next frame of video image;
a texture judging sub-module, configured to judge whether the sum determined by the block texture feature summation sub-module is greater than a preset texture feature threshold; if so, to judge that the current image sub-block is a foreground block with respect to the texture feature, otherwise to judge that it is a background block with respect to the texture feature.
10. The device according to claim 8 or 9, characterized in that the color judging module specifically comprises:
a block color feature determining sub-module, configured to determine the block color feature value of the current image sub-block, the block color feature value of an image sub-block being the mean of the pixel feature values of the pixels of the image sub-block;
a Gaussian mixture model sub-module, configured to determine, according to the block color feature value of the current image sub-block and the constructed Gaussian mixture model, the probability that the current image sub-block is a background block;
a uniform distribution model sub-module, configured to determine, according to the block color feature value of the current image sub-block and the constructed uniform distribution model, the probability that the current image sub-block is a foreground block;
a mixed color judging sub-module, configured to judge whether the product of the probability that the current image sub-block is a foreground block and a foreground prior probability is greater than the product of the probability that the current image sub-block is a background block and a background prior probability; if so, to judge that the current image sub-block is a foreground block, otherwise to judge that it is a background block, the foreground prior probability and the background prior probability being determined from the constructed motion history image.
CN201410079009.6A 2014-03-05 2014-03-05 Foreground detecting method and device Pending CN103839278A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410079009.6A CN103839278A (en) 2014-03-05 2014-03-05 Foreground detecting method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410079009.6A CN103839278A (en) 2014-03-05 2014-03-05 Foreground detecting method and device

Publications (1)

Publication Number Publication Date
CN103839278A true CN103839278A (en) 2014-06-04

Family

ID=50802744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410079009.6A Pending CN103839278A (en) 2014-03-05 2014-03-05 Foreground detecting method and device

Country Status (1)

Country Link
CN (1) CN103839278A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104077788A (en) * 2014-07-10 2014-10-01 中国科学院自动化研究所 Moving object detection method fusing color and texture information for performing block background modeling
CN105046722A (en) * 2015-08-04 2015-11-11 深圳市哈工大交通电子技术有限公司 Suddenly-changed illumination robustness foreground detection algorithm based on GPU platform
CN107682699A (en) * 2017-10-19 2018-02-09 厦门大学 A kind of nearly Lossless Image Compression method
CN112784651A (en) * 2019-11-11 2021-05-11 北京君正集成电路股份有限公司 System for realizing efficient target detection

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120141019A1 (en) * 2010-12-07 2012-06-07 Sony Corporation Region description and modeling for image subscene recognition
CN102915544A (en) * 2012-09-20 2013-02-06 武汉大学 Video image motion target extracting method based on pattern detection and color segmentation
CN103063674A (en) * 2012-12-26 2013-04-24 浙江大学 Detection method for copper grade of copper block, and detection system thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120141019A1 (en) * 2010-12-07 2012-06-07 Sony Corporation Region description and modeling for image subscene recognition
CN102915544A (en) * 2012-09-20 2013-02-06 武汉大学 Video image motion target extracting method based on pattern detection and color segmentation
CN103063674A (en) * 2012-12-26 2013-04-24 浙江大学 Detection method for copper grade of copper block, and detection system thereof

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHIH-YANG LIN 等: "Real-Time Robust Background Modeling Based on Joint Color and Texture Descriptions", 《GENETIC AND EVOLUTIONARY COMPUTING》 *
J.L.LANDABASO 等: "Cooperative background modelling using multiple cameras towards human detection in smart-rooms", 《SIGNAL PROCESSING CONFERENCE》 *
T BOUWMANS 等: "Background Modeling using Mixture of Gaussians for foreground detection-a survey", 《RECENT PATENTS ON COMPUTER SCIENCE》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104077788A (en) * 2014-07-10 2014-10-01 中国科学院自动化研究所 Moving object detection method fusing color and texture information for performing block background modeling
CN104077788B (en) * 2014-07-10 2017-02-15 中国科学院自动化研究所 Moving object detection method fusing color and texture information for performing block background modeling
CN105046722A (en) * 2015-08-04 2015-11-11 深圳市哈工大交通电子技术有限公司 Suddenly-changed illumination robustness foreground detection algorithm based on GPU platform
CN107682699A (en) * 2017-10-19 2018-02-09 厦门大学 A kind of nearly Lossless Image Compression method
CN107682699B (en) * 2017-10-19 2019-07-02 厦门大学 A kind of nearly Lossless Image Compression method
CN112784651A (en) * 2019-11-11 2021-05-11 北京君正集成电路股份有限公司 System for realizing efficient target detection

Similar Documents

Publication Publication Date Title
Sheng et al. Graph-based spatial-temporal convolutional network for vehicle trajectory prediction in autonomous driving
Jana et al. YOLO based Detection and Classification of Objects in video records
CN111428765B (en) Target detection method based on global convolution and local depth convolution fusion
CN110766038B (en) Unsupervised landform classification model training and landform image construction method
Akan et al. Stretchbev: Stretching future instance prediction spatially and temporally
Ren et al. A novel squeeze YOLO-based real-time people counting approach
Niranjan et al. Deep learning based object detection model for autonomous driving research using carla simulator
Bešić et al. Dynamic object removal and spatio-temporal RGB-D inpainting via geometry-aware adversarial learning
CN103839278A (en) Foreground detecting method and device
Charouh et al. Improved background subtraction-based moving vehicle detection by optimizing morphological operations using machine learning
US20230278587A1 (en) Method and apparatus for detecting drivable area, mobile device and storage medium
CN114155303B (en) Parameter stereo matching method and system based on binocular camera
CN103473789A (en) Human body video segmentation method fusing multi-cues
CN112738725A (en) Real-time identification method, device, equipment and medium for target crowd in semi-closed area
CN101567088B (en) Method and device for detecting moving object
Liu et al. Multi-lane detection by combining line anchor and feature shift for urban traffic management
Han et al. Fully Convolutional Neural Networks for Road Detection with Multiple Cues Integration
CN110633641A (en) Intelligent security pedestrian detection method, system and device and storage medium
Guo et al. Research on human-vehicle gesture interaction technology based on computer visionbility
CN115953668A (en) Method and system for detecting camouflage target based on YOLOv5 algorithm
CN114926791A (en) Method and device for detecting abnormal lane change of vehicles at intersection, storage medium and electronic equipment
CN103020987A (en) Quick foreground detection method based on multi-background model
CN114067360A (en) Pedestrian attribute detection method and device
CN113762043A (en) Abnormal track identification method and device
CN108898284B (en) Internet of vehicles management control strategy evaluation method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140604