CN101216942A - An increment type characteristic background modeling algorithm of self-adapting weight selection - Google Patents
Abstract
The invention discloses an incremental eigen-background modeling method with adaptive weight selection. During motion detection, the background model is updated incrementally and in real time according to the motion contained in each video frame, and a weight is assigned to every frame during updating to improve the expressive and descriptive power of the background model. The method comprises the following steps: roughly detect the motion region of the current frame with the not-yet-updated background model; construct a weight for the motion region based on the reconstruction error of the model; update the background model by incremental principal component analysis based on the weighted current frame; and generate the background image. The technique expresses the dynamic changes of complex scenes well while remaining sensitive to foreground objects with significant motion, and therefore has great application value in fields such as video surveillance.
Description
Technical field
The present invention relates to video motion detection, and in particular to an incremental eigen-background modeling method with adaptive weight selection.
Background art
Motion detection and motion tracking are low-level problems in computer vision, and motion detection is a prerequisite for tracking. As one class of methods for motion detection, background modeling has attracted the attention of many researchers in recent years. Typical background modeling methods include the single Gaussian model, the Gaussian mixture model, kernel density estimation, and eigen-background modeling. The first three are pixel-based methods: an independent temporal model is built for each pixel. Pixel-based background modeling has high computational complexity and has difficulty capturing complex background content, such as weather conditions like rain or snow and situations like leaves moving in the wind. In contrast, the eigen-background modeling method proposed by Oliver et al. (IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(8): 831-843) is based on whole video frames and can therefore express complex background information well. Its workflow can be summarized as: compute the covariance matrix of a set of sample background images and extract several eigen-backgrounds (eigenvectors); project the current frame onto these eigenvectors to obtain a set of feature coefficients; and reconstruct the current background frame from the mean of the sample background images and these eigenvectors. The principle is to reconstruct the background frame with principal component analysis (PCA): because a small number of eigenvectors describe only the global features of the image, small foreground objects are ignored relative to the scene, so the reconstructed background image contains only scene information.
However, this eigen-background modeling method still has two major problems. First, it requires a set of sample background images to be prepared in advance; which images are chosen as samples strongly affects the accuracy of the algorithm, and the original eigen-background modeling method does not discuss how to update the model quickly. Second, because the eigen-decomposition is performed on the entire image, larger foreground moving objects tend to be "absorbed" into the background model, so an ideal background image cannot be generated. The first problem can be solved with the incremental principal component analysis method proposed by Weng et al. (IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(8): 1034-1040): an incremental eigenvector update algorithm that requires no covariance matrix and is therefore well suited to real-time model updating in background modeling. For the second problem, the latest improvement to eigen-background modeling is the robust incremental subspace learning method of Yongmin Li (Yongmin Li. On incremental and robust subspace learning. Pattern Recognition, 2004, 37: 1509-1518). That work not only updates the eigen-background model incrementally in real time, but also assigns different weights to different video frames to increase robustness. However, the weights are not explained or given a physical meaning, and no quantitative method for computing reasonable weight values is provided. Determining weights purely by experience is clearly unstable, making it difficult to build an optimized background model and generate an ideal background image. In addition, each weight is applied to the entire image, without considering the different contributions of different image regions to the background model.
Summary of the invention
The purpose of this invention is to provide an incremental eigen-background modeling method with adaptive weight selection, comprising the following steps:
1) build an initial background model from sample background video frames using principal component analysis (PCA);
2) reconstruct the background image of the input video frame with the current background model, and compute the error between the background image and the input video frame;
3) determine the motion region in the input video frame from the error between the background image and the input video frame;
4) adaptively construct a weight for the motion region of the input video frame based on the error between the background image and the input video frame;
5) based on the weighted video frame, update the background model with incremental principal component analysis.
The initial background model is built from the sample background video frames with principal component analysis as follows. Represent the sample background images as vectors X = (x_1, ..., x_n)^T. Compute the mean of this set, μ_x = E(X), and the covariance matrix C_x = E{(x - μ_x)(x - μ_x)^T}. Compute the eigenvalues and eigenvectors of C_x by solving the constraint C_x e_i = λ_i e_i for λ_i and e_i, where i = 1, ..., n. Sort the eigenvectors by eigenvalue in descending order to obtain a set of orthogonal basis vectors A. This orthogonal basis, together with the mean of the sample background video frames, is the initial background model.
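As an illustrative sketch only (not code from the patent), this initialization step can be realized with NumPy. The function name is an assumption, and the SVD of the centered data is used as a numerically equivalent shortcut to the eigen-decomposition of the covariance matrix:

```python
import numpy as np

def build_initial_model(frames, k=30):
    """Build an eigen-background model from sample background frames.

    frames: (n, npxl) array, one flattened background frame per row.
    Returns (mean, basis), where basis holds the k leading eigenvectors
    of the sample covariance matrix as columns, sorted by eigenvalue.
    """
    mean = frames.mean(axis=0)
    centered = frames - mean
    # The right singular vectors of the centered data matrix are the
    # eigenvectors of its covariance, already sorted in descending order,
    # so the (npxl x npxl) covariance matrix never has to be formed.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k].T          # (npxl, k) orthonormal basis A
    return mean, basis
```

The returned pair (mean, basis) corresponds to the initial background model described above.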
The background image of the input video frame is reconstructed with the current background model, and the error between the background image and the input video frame is computed, as follows. Let E = {E_1, ..., E_n} and F̄ be the eigenvectors of the current background model and the mean of the background video frames, respectively, and let F_{m+1} be the current video frame. Subtract the mean from F_{m+1} to obtain F̃_{m+1} = F_{m+1} - F̄, and project F̃_{m+1} onto E:

cof = E^T · F̃_{m+1}

where cof is the vector of projection coefficients used to reconstruct the current video frame. The current frame F_{m+1} can then be reconstructed as:

F'_{m+1} = E · cof + F̄

The error between the current frame F_{m+1} and the reconstructed frame F'_{m+1} is computed as:

er = sqrt( ||F_{m+1} - F'_{m+1}||^2 / npxl )

where npxl is the number of pixels in the current frame and er is the root-mean-square error between the current frame and the background frame.
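The projection, reconstruction, and root-mean-square error above can be sketched as follows, assuming flattened frames as NumPy vectors (the function name is illustrative):

```python
import numpy as np

def reconstruct_and_error(frame, mean, basis):
    """Project a frame onto the eigen-background basis and measure the
    root-mean-square reconstruction error (the `er` of the text)."""
    centered = frame - mean                      # F~ = F - mean
    cof = basis.T @ centered                     # projection coefficients
    recon = basis @ cof + mean                   # reconstructed frame F'
    er = np.sqrt(np.mean((frame - recon) ** 2))  # RMSE over npxl pixels
    return recon, er
```

A frame that lies entirely in the span of the basis reconstructs exactly, giving er = 0; foreground motion outside that span raises er.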
The motion region in the input video frame is determined from the error between the background image and the input video frame as follows. Let F_dif be the difference image between the current frame F_{m+1} and the reconstructed frame F'_{m+1}. First convert F_dif to a grayscale image F̃_dif, then compute the gray-level distribution histograms of F̃_dif along the x direction and the y direction; each bin of a histogram is the accumulated count of nonzero pixels in the corresponding row or column of the image. The position with the largest nonzero-pixel count on the histogram indicates the region of F̃_dif where nonzero pixels are densest; when the nonzero-pixel count exceeds a threshold, that position belongs to the motion region.
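The histogram-based rough detection can be sketched as below. Returning a bounding box of the over-threshold rows and columns is an assumption, since the text only describes locating the densest nonzero region:

```python
import numpy as np

def motion_region(diff, thresh=20):
    """Locate the motion region in a difference image by accumulating
    nonzero-pixel counts along the x and y axes (cf. Fig. 1)."""
    nonzero = np.abs(diff) > 0
    col_hist = nonzero.sum(axis=0)   # per-column counts (x direction)
    row_hist = nonzero.sum(axis=1)   # per-row counts (y direction)
    cols = np.where(col_hist > thresh)[0]
    rows = np.where(row_hist > thresh)[0]
    if cols.size == 0 or rows.size == 0:
        return None                  # no significant motion found
    # Bounding box of the rows/columns whose counts exceed the threshold.
    return int(rows.min()), int(rows.max()), int(cols.min()), int(cols.max())
```

For an 800 × 600 frame this yields exactly the 1 × 800 and 600 × 1 count vectors described in the embodiment below.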
The weight for the motion region of the input video frame is constructed adaptively from the error between the background image and the input video frame as follows. Let MR be the roughly detected motion region on the current frame and M̄R the sub-region of the current mean image at the same position. The weighted motion region is computed as:

MR_w = W · MR + (1 - W) · M̄R

The weight W is selected adaptively as a function of the reconstruction error er, where θ and α are thresholds.
Based on the weighted video frame, the background model is updated with incremental principal component analysis as follows.
The motion region is weighted according to:

MR_w = W · MR + (1 - W) · M̄R

where W is the weight and M̄R is the sub-region of the current mean image at the same position as the motion region. The background mean of the current model is then updated:

F̄' = (m / (m+1)) F̄ + (1 / (m+1)) F_{m+1}

where F_1, ..., F_m are the first m frames, F̄ is their mean, and F_{m+1} is the current frame.
Let {E_1, ..., E_n} be the eigenvectors of the current eigen-background model. When a new frame F_{t+1} arrives, subtract the updated mean to obtain F̃_{t+1}, then update the first eigenvector based on F̃_{t+1}:

E_1' = (t / (t+1)) E_1 + (1 / (t+1)) (F̃_{t+1}^T E_1) F̃_{t+1} / ||E_1||

where E_1' is the updated eigenvector. Reconstruct F̃_{t+1} with E_1; the error between the reconstructed data and F̃_{t+1} is:

R_1 = F̃_{t+1} - (F̃_{t+1}^T E_1) E_1 / ||E_1||^2

This error is orthogonal to the direction of E_1 and is used to further update the second eigenvector E_2. Next, R_2 is computed from R_1 in the same way, and the background model is updated step by step.
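The step-by-step update above can be sketched in the spirit of the covariance-free incremental PCA of Weng et al.; the exact blending coefficients t/(t+1) and 1/(t+1) are assumptions, since the patent's own formulas survive only as images:

```python
import numpy as np

def incremental_update(basis, mean, frame, t):
    """One incremental PCA step: update the running mean, then each
    eigenvector in turn, passing the reconstruction residual (the R_i
    of the text) down to the next eigenvector."""
    new_mean = (t * mean + frame) / (t + 1)      # incremental mean update
    u = frame - new_mean                         # mean-subtracted frame F~
    new_basis = basis.copy()
    for i in range(basis.shape[1]):
        e = new_basis[:, i]
        # Blend the old eigenvector with the new observation (CCIPCA-style).
        e = (t / (t + 1.0)) * e + (u @ e) / ((t + 1.0) * np.linalg.norm(e)) * u
        new_basis[:, i] = e
        # The residual orthogonal to this eigenvector feeds the next one.
        en = e / np.linalg.norm(e)
        u = u - (u @ en) * en
    return new_basis, new_mean
```

Because each eigenvector consumes only the residual left by its predecessors, the update preserves the descending-variance ordering of the basis without ever forming a covariance matrix.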
The present invention not only updates the background model incrementally in real time from each video frame, but also assigns a weight to each frame during updating, strengthening the expressive and descriptive power of the background model; it therefore has great application value in real-time motion detection and moving-object segmentation. The not-yet-updated eigen-background model is used to roughly detect the motion region of the current frame, and a weight is assigned to that motion region. The weight is applied only to the motion region; the rest of the image is unaffected. An adaptive method computes the weight quantitatively, based on the error between the current frame and the background frame reconstructed with the not-yet-updated eigen-background model.
Description of drawings
Fig. 1 is a schematic diagram of rough motion-region detection in the present invention, based on the gray-level distribution histogram of the difference image between the current image and the background image;
Fig. 2 is the West Lake shore scene in embodiment 1 of the invention;
Fig. 3 (a) is the background image produced by the traditional eigen-background modeling method;
Fig. 3 (b) is the foreground region produced by the traditional eigen-background modeling method;
Fig. 3 (c) is the background image obtained by modeling with the method of the invention;
Fig. 3 (d) is the foreground region obtained by modeling with the method of the invention;
Fig. 4 (a) is the video scene containing human motion in embodiment 2;
Fig. 4 (b) is the background image produced by the traditional eigen-background modeling method;
Fig. 4 (c) is the background image obtained by modeling with the method of the invention;
Fig. 5 is a schematic diagram of the background modeling effect of the invention on a large-scale complex scene.
Embodiment
The incremental eigen-background modeling method with adaptive weight selection is implemented as follows:
1) Build the initial background model from 200 sample background video frames of size 800 × 600 with principal component analysis. For each video we provide 200 sample background frames, represented as X = (x_1, ..., x_200)^T. Compute the sample mean μ_x = E(X) and extract the first 30 eigenvectors with the PCA algorithm: first compute the covariance matrix C_x = E{(x - μ_x)(x - μ_x)^T}, then solve the constraint C_x e_i = λ_i e_i (i = 1, ..., 200) for the eigenvalues λ_i and eigenvectors e_i. Sort the eigenvectors by eigenvalue in descending order to obtain a set of orthogonal eigenvectors A, and take the first 30 eigenvectors a. These eigenvectors a, together with the mean of the sample background video frames, constitute the initial background model.
2) Reconstruct the background image of the input video frame with the current background model, and compute the error between the background image and the input video frame. Let E = {E_1, ..., E_30} and F̄ be the eigenvectors of the current background model and the mean of the background video frames, respectively, and let F_{m+1} be the current video frame. Subtract the mean from F_{m+1} to obtain F̃_{m+1} = F_{m+1} - F̄, and project F̃_{m+1} onto E:

cof = E^T · F̃_{m+1}

The current frame F_{m+1} can then be reconstructed as:

F'_{m+1} = E · cof + F̄

The error between the current frame F_{m+1} and the reconstructed frame F'_{m+1} is computed as:

er = sqrt( ||F_{m+1} - F'_{m+1}||^2 / npxl )

where npxl is the number of pixels in the current frame and er is the root-mean-square error between the current frame and the background frame.
3) Determine the motion region in the input video frame from the error between the background image and the input video frame. Let F_dif be the difference image between the current frame F_{m+1} and the reconstructed frame F'_{m+1}. First convert F_dif to a grayscale image F̃_dif, then compute its gray-level distribution histograms along the x and y directions: for the x direction, accumulate the number of nonzero pixels in each column of the image to form a 1 × 800 row vector; for the y direction, accumulate the number of nonzero pixels in each row to form a 600 × 1 column vector. Each element of the two vectors gives the accumulated count of nonzero pixels in that column or row of the image, and the position with the largest nonzero-pixel count indicates the region of F̃_dif where nonzero pixels are densest. When the nonzero-pixel count exceeds a threshold, that position is considered to contain significant motion; here we preset the threshold to 20. The motion-detection effect is shown in Fig. 1.
4) Adaptively construct the weight for the motion region of the input video frame based on the error between the background image and the input video frame. Let MR be the roughly detected motion region on the current frame and M̄R the sub-region of the current mean image at the same position. The weighted motion region is computed as:

MR_w = W · MR + (1 - W) · M̄R

The weight W is selected adaptively as a function of the reconstruction error er, where θ and α are thresholds; in practical application, θ = 3-7 and α = 0.4-0.6 give satisfactory results.
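Since the exact selection formula for W survives only as an image in the source, the rule below is merely a plausible sketch consistent with the stated ranges θ = 3-7 and α = 0.4-0.6; the function names, the piecewise form, and the bounding-box region format are all assumptions:

```python
import numpy as np

def select_weight(er, theta=5.0, alpha=0.5):
    """Hypothetical adaptive weight: grow with the reconstruction error,
    capped at the ceiling alpha once er reaches the threshold theta."""
    return alpha * min(er / theta, 1.0)

def weight_motion_region(frame, mean_img, box, w):
    """Blend the motion region of the frame with the same sub-region of
    the mean image: MR_w = W * MR + (1 - W) * mean-region."""
    r0, r1, c0, c1 = box
    out = frame.copy()
    out[r0:r1 + 1, c0:c1 + 1] = (w * frame[r0:r1 + 1, c0:c1 + 1]
                                 + (1.0 - w) * mean_img[r0:r1 + 1, c0:c1 + 1])
    return out
```

Only the pixels inside the box are altered, matching the statement that the weight applies to the motion region alone.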
5) Based on the weighted video frame, update the background model in real time with incremental principal component analysis. Weight the motion region MR of the current frame:

MR_w = W · MR + (1 - W) · M̄R

where W is the weight and M̄R is the sub-region of the current mean image at the same position as the motion region. Update the background mean of the current model:

F̄' = (m / (m+1)) F̄ + (1 / (m+1)) F_{m+1}

where F_1, ..., F_m are the first m frames, F̄ is their mean, and F_{m+1} is the current frame. Because the initial background model is built from 200 background frames, in this implementation m starts from 201, i.e. F_201 is the first frame of the test video.
Let {E_1, ..., E_n} be the eigenvectors of the current eigen-background model. When a new frame F_{t+1} arrives, subtract the updated mean to obtain F̃_{t+1}, then update the first eigenvector based on F̃_{t+1}:

E_1' = (t / (t+1)) E_1 + (1 / (t+1)) (F̃_{t+1}^T E_1) F̃_{t+1} / ||E_1||

where E_1' is the updated eigenvector. Reconstruct F̃_{t+1} with E_1; the error between the reconstructed data and F̃_{t+1} is:

R_1 = F̃_{t+1} - (F̃_{t+1}^T E_1) E_1 / ||E_1||^2

This error is orthogonal to the direction of E_1 and is used to further update the second eigenvector E_2. R_2 is then computed from R_1 in the same way, and the 30 eigenvectors contained in the background model are updated step by step in this manner.
To verify the method of the present invention, we provide three background modeling examples.
Embodiment 1
Background modeling for a speedboat on the West Lake surface:
The first scene is the West Lake at sunset, with a speedboat speeding across the distant lake surface. Motion detection in this scene is difficult for two reasons. First, the foreground object we wish to detect, the speedboat, occupies too small an area of the whole picture, so its motion is easily treated as background noise and absorbed by the background model. Second, the background in the video varies considerably, including leaves moved by the wind at the top of the frame and sunlight reflected by nearby lake waves, as shown in Fig. 2.
The motion detection results are shown in Fig. 3. (a) and (b) are the background image and detected foreground region produced by the traditional eigen-background modeling method; (c) and (d) are the background image and foreground region obtained by the method of the invention. Although (a) and (c) look almost identical, the detected foregrounds differ. Image (b) shows that the classical eigen-background modeling method detects not only the speedboat but also the swaying background leaves, whose region is almost as large as the speedboat's, making foreground and background indistinguishable. The reason is that the classical method builds the background model only from the 200 sample frames we provide and never updates it, so it lacks the capacity to express scene changes beyond the range of the sample data. Image (d) detects essentially only the moving speedboat, because the method of the invention is based on incremental principal component analysis and continually updates the background model, adapting to the dynamic changes of the scene.
Embodiment 2
Background modeling for a moving person at the West Lake shore:
The second scene is a person walking along the lakeside of the Su Causeway, with the wave-covered West Lake as background, as shown in Fig. 4 (a). The difficulty here is that the background lake surface is large and its wave motion very complex, while nearby willow branches sway in the wind and are easily mistaken for foreground objects. We tested the traditional eigen-background modeling method and the method of the invention on this scene; the generated background images are shown in Fig. 4 (b) and Fig. 4 (c), respectively. Fig. 4 (b) shows a clear "ghost" in the region the person passed through: the traditional method treats the video frame as a whole, without considering the motion of different regions of the image, so the motion of foreground objects is easily absorbed into the background model, a problem that is especially serious for large foreground objects. Fig. 4 (c) shows that the method of the invention largely solves this problem, because it accounts for the influence of the motion region on the background model.
Embodiment 3
Background modeling for a large-scale complex scene:
The last scene is challenging for all motion detection methods based on background modeling. The test video was shot on a lawn of the Yuquan campus of Zhejiang University; a person runs quickly through the whole scene, and the background changes in very complex ways, with a crowd moving erratically in the distance and a grove swaying in the wind. We tested the traditional method and the method of the invention on this scene; Fig. 5 shows the results. In Fig. 5, the first row contains frames 1373, 1410, 1450, 1634, and 1660 of the original video; the second row contains the corresponding background images generated by the classical method; and the third row contains the corresponding background images generated by the method of the invention. In the second-row images, the circled regions show a clear "ghost" effect, again because no measure is taken to prevent the motion region from being incorporated into the background model. In contrast, the method of the invention solves this problem.
Claims (6)
1. An incremental eigen-background modeling method with adaptive weight selection, characterized by comprising the steps of:
1) building an initial background model from sample background video frames with principal component analysis (PCA);
2) reconstructing the background image of the input video frame with the current background model, and computing the error between the background image and the input video frame;
3) determining the motion region in the input video frame from the error between the background image and the input video frame;
4) adaptively constructing a weight for the motion region of the input video frame based on the error between the background image and the input video frame;
5) based on the weighted video frame, updating the background model with incremental principal component analysis.
2. The incremental eigen-background modeling method with adaptive weight selection according to claim 1, characterized in that the initial background model is built from the sample background video frames with principal component analysis as follows: represent the sample background images as vectors X = (x_1, ..., x_n)^T; compute the mean μ_x = E(X) of this set and the covariance matrix C_x = E{(x - μ_x)(x - μ_x)^T}; compute the eigenvalues and eigenvectors of C_x by solving the constraint C_x e_i = λ_i e_i for λ_i and e_i, where i = 1, ..., n; sort the eigenvectors by eigenvalue in descending order to obtain a set of orthogonal basis vectors A; this orthogonal basis, together with the mean of the sample background video frames, is the initial background model.
3. The incremental eigen-background modeling method with adaptive weight selection according to claim 1, characterized in that the background image of the input video frame is reconstructed with the current background model and the error between the background image and the input video frame is computed as follows: let E = {E_1, ..., E_n} and F̄ be the eigenvectors of the current background model and the mean of the background video frames, respectively, and let F_{m+1} be the current video frame; subtract the mean from F_{m+1} to obtain F̃_{m+1} = F_{m+1} - F̄, and project F̃_{m+1} onto E:

cof = E^T · F̃_{m+1}

where cof is the vector of projection coefficients used to reconstruct the current video frame; the current frame F_{m+1} is reconstructed as:

F'_{m+1} = E · cof + F̄

and the error between the current frame F_{m+1} and the reconstructed frame F'_{m+1} is computed as:

er = sqrt( ||F_{m+1} - F'_{m+1}||^2 / npxl )

where npxl is the number of pixels in the current frame and er is the root-mean-square error between the current frame and the background frame.
4. The incremental eigen-background modeling method with adaptive weight selection according to claim 1, characterized in that the motion region in the input video frame is determined from the error between the background image and the input video frame as follows: let F_dif be the difference image between the current frame F_{m+1} and the reconstructed frame F'_{m+1}; first convert F_dif to a grayscale image F̃_dif, then compute the gray-level distribution histograms of F̃_dif along the x and y directions, each histogram bin being the accumulated count of nonzero pixels in the corresponding row or column of the image; the position with the largest nonzero-pixel count on the histogram indicates the region of F̃_dif where nonzero pixels are densest, and when the nonzero-pixel count exceeds a threshold, that position is the motion region.
5. The incremental eigen-background modeling method with adaptive weight selection according to claim 1, characterized in that the weight for the motion region of the input video frame is constructed adaptively from the error between the background image and the input video frame as follows: let MR be the roughly detected motion region on the current frame and M̄R the sub-region of the current mean image at the same position; the weighted motion region is computed as:

MR_w = W · MR + (1 - W) · M̄R

and the weight W is selected adaptively as a function of the reconstruction error er, where θ and α are thresholds.
6. The incremental eigen-background modeling method with adaptive weight selection according to claim 1, characterized in that, based on the weighted video frame, the background model is updated with incremental principal component analysis as follows: weight the motion region according to

MR_w = W · MR + (1 - W) · M̄R

where W is the weight and M̄R is the sub-region of the current mean image at the same position as the motion region; update the background mean of the current model:

F̄' = (m / (m+1)) F̄ + (1 / (m+1)) F_{m+1}

where F_1, ..., F_m are the first m frames, F̄ is their mean, and F_{m+1} is the current frame; let {E_1, ..., E_n} be the eigenvectors of the current eigen-background model; when a new frame F_{t+1} arrives, subtract the updated mean to obtain F̃_{t+1}, then update the first eigenvector based on F̃_{t+1}:

E_1' = (t / (t+1)) E_1 + (1 / (t+1)) (F̃_{t+1}^T E_1) F̃_{t+1} / ||E_1||

where E_1' is the updated eigenvector; reconstruct F̃_{t+1} with E_1, the error between the reconstructed data and F̃_{t+1} being

R_1 = F̃_{t+1} - (F̃_{t+1}^T E_1) E_1 / ||E_1||^2

this error is orthogonal to the direction of E_1 and is used to further update the second eigenvector E_2; R_2 is then computed from R_1 in the same way, updating the background model step by step.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNA2008100591311A CN101216942A (en) | 2008-01-14 | 2008-01-14 | An increment type characteristic background modeling algorithm of self-adapting weight selection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101216942A true CN101216942A (en) | 2008-07-09 |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102063722A (en) * | 2011-01-18 | 2011-05-18 | 上海交通大学 | Image change detecting method based on principle component general inverse transformation |
CN102136148A (en) * | 2011-03-24 | 2011-07-27 | 福州大学 | Adaptive background reconfiguration method based on pixel sequential morphology |
CN101930610B (en) * | 2009-06-26 | 2012-05-02 | 思创影像科技股份有限公司 | Method for detecting moving object by using adaptable background model |
CN102970517A (en) * | 2012-11-28 | 2013-03-13 | 四川长虹电器股份有限公司 | Holder lens autonomous control method based on abnormal condition identification |
CN104299246A (en) * | 2014-10-14 | 2015-01-21 | 江苏湃锐自动化科技有限公司 | Production line object part motion detection and tracking method based on videos |
CN104537693A (en) * | 2015-01-04 | 2015-04-22 | 北京航空航天大学 | Multi-target detection algorithm based on chebyshev pixel estimation |
CN105243355A (en) * | 2015-09-09 | 2016-01-13 | 大连理工大学 | Event-driven remote wireless coalbed methane well station abnormal scene safety monitoring method |
CN105513089A (en) * | 2015-11-03 | 2016-04-20 | 国家电网公司 | Target tracking method based on batch increment discriminant analysis |
US9652694B2 (en) | 2013-08-21 | 2017-05-16 | Canon Kabushiki Kaisha | Object detection method, object detection device, and image pickup device |
CN107067411A (en) * | 2017-01-03 | 2017-08-18 | 江苏慧眼数据科技股份有限公司 | A kind of Mean shift trackings of combination dense feature |
CN107194932A (en) * | 2017-04-24 | 2017-09-22 | 江苏理工学院 | A kind of adaptive background algorithm for reconstructing forgotten based on index |
CN107240116A (en) * | 2016-03-24 | 2017-10-10 | 想象技术有限公司 | Generate sparse sample histogram |
CN107330923A (en) * | 2017-06-07 | 2017-11-07 | 太仓诚泽网络科技有限公司 | A kind of update method of dynamic background image |
CN111145219A (en) * | 2019-12-31 | 2020-05-12 | 神思电子技术股份有限公司 | Efficient video moving target detection method based on Codebook principle |
WO2020192095A1 (en) * | 2019-03-22 | 2020-10-01 | 浙江宇视科技有限公司 | Coding method and apparatus for surveillance video background frames, electronic device and medium |
WO2021208275A1 (en) * | 2020-04-12 | 2021-10-21 | 南京理工大学 | Traffic video background modelling method and system |
CN113536971A (en) * | 2021-06-28 | 2021-10-22 | 中科苏州智能计算技术研究院 | Target detection method based on incremental learning |
CN113614558A (en) * | 2019-03-14 | 2021-11-05 | 皇家飞利浦有限公司 | MR imaging using 3D radial or helical acquisition with soft motion gating |
CN117124587A (en) * | 2023-01-12 | 2023-11-28 | 珠海视熙科技有限公司 | Background modeling method and device based on depth image, medium and computing equipment |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101930610B (en) * | 2009-06-26 | 2012-05-02 | 思创影像科技股份有限公司 | Method for detecting moving object by using adaptable background model |
CN102063722B (en) * | 2011-01-18 | 2012-09-05 | 上海交通大学 | Image change detection method based on principal component generalized inverse transformation |
CN102063722A (en) * | 2011-01-18 | 2011-05-18 | 上海交通大学 | Image change detection method based on principal component generalized inverse transformation |
CN102136148A (en) * | 2011-03-24 | 2011-07-27 | 福州大学 | Adaptive background reconfiguration method based on pixel sequential morphology |
CN102136148B (en) * | 2011-03-24 | 2012-11-21 | 福州大学 | Adaptive background reconfiguration method based on pixel sequential morphology |
CN102970517B (en) * | 2012-11-28 | 2015-08-19 | 四川长虹电器股份有限公司 | Pan-tilt lens autonomous control method based on abnormal-scene identification |
CN102970517A (en) * | 2012-11-28 | 2013-03-13 | 四川长虹电器股份有限公司 | Pan-tilt lens autonomous control method based on abnormal-scene identification |
US9652694B2 (en) | 2013-08-21 | 2017-05-16 | Canon Kabushiki Kaisha | Object detection method, object detection device, and image pickup device |
CN104299246A (en) * | 2014-10-14 | 2015-01-21 | 江苏湃锐自动化科技有限公司 | Production line object part motion detection and tracking method based on videos |
CN104537693A (en) * | 2015-01-04 | 2015-04-22 | 北京航空航天大学 | Multi-target detection algorithm based on Chebyshev pixel estimation |
CN105243355A (en) * | 2015-09-09 | 2016-01-13 | 大连理工大学 | Event-driven remote wireless coalbed methane well station abnormal scene safety monitoring method |
CN105513089A (en) * | 2015-11-03 | 2016-04-20 | 国家电网公司 | Target tracking method based on batch increment discriminant analysis |
CN107240116B (en) * | 2016-03-24 | 2022-06-17 | 想象技术有限公司 | Apparatus, method, manufacturing system and data processing device for sorting input values |
US11616920B2 (en) | 2016-03-24 | 2023-03-28 | Imagination Technologies Limited | Generating sparse sample histograms in image processing |
CN107240116A (en) * | 2016-03-24 | 2017-10-10 | 想象技术有限公司 | Generating sparse sample histograms |
CN107067411A (en) * | 2017-01-03 | 2017-08-18 | 江苏慧眼数据科技股份有限公司 | A mean-shift tracking method combining dense features |
CN107194932A (en) * | 2017-04-24 | 2017-09-22 | 江苏理工学院 | An adaptive background reconstruction algorithm based on exponential forgetting |
CN107330923A (en) * | 2017-06-07 | 2017-11-07 | 太仓诚泽网络科技有限公司 | An update method for dynamic background images |
CN113614558A (en) * | 2019-03-14 | 2021-11-05 | 皇家飞利浦有限公司 | MR imaging using 3D radial or helical acquisition with soft motion gating |
WO2020192095A1 (en) * | 2019-03-22 | 2020-10-01 | 浙江宇视科技有限公司 | Coding method and apparatus for surveillance video background frames, electronic device and medium |
CN111145219A (en) * | 2019-12-31 | 2020-05-12 | 神思电子技术股份有限公司 | Efficient video moving target detection method based on Codebook principle |
CN111145219B (en) * | 2019-12-31 | 2022-06-17 | 神思电子技术股份有限公司 | Efficient video moving target detection method based on Codebook principle |
WO2021208275A1 (en) * | 2020-04-12 | 2021-10-21 | 南京理工大学 | Traffic video background modelling method and system |
CN113536971A (en) * | 2021-06-28 | 2021-10-22 | 中科苏州智能计算技术研究院 | Target detection method based on incremental learning |
CN117124587A (en) * | 2023-01-12 | 2023-11-28 | 珠海视熙科技有限公司 | Background modeling method and device based on depth image, medium and computing equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101216942A (en) | An increment type characteristic background modeling algorithm of self-adapting weight selection | |
CN110111335B (en) | Urban traffic scene semantic segmentation method and system for adaptive countermeasure learning | |
Kwak et al. | Learning occlusion with likelihoods for visual tracking | |
CN109993095B (en) | Frame level feature aggregation method for video target detection | |
CN112150493B (en) | Semantic guidance-based screen area detection method in natural scene | |
CN112132149B (en) | Semantic segmentation method and device for remote sensing image | |
CN103208115B (en) | Salient region detection method based on geodesic distance | |
CN105869178A (en) | Method for unsupervised segmentation of complex targets from dynamic scenes based on multi-scale combined-feature convex optimization | |
CN102006425A (en) | Real-time video stitching method based on multiple cameras | |
CN103049763A (en) | Context-constraint-based target identification method | |
CN106778687A (en) | Viewpoint detection method based on local evaluation and global optimization | |
Agrawal et al. | A comprehensive review on analysis and implementation of recent image dehazing methods | |
CN110245587B (en) | Optical remote sensing image target detection method based on Bayesian transfer learning | |
CN103714556A (en) | Moving target tracking method based on pyramid appearance model | |
CN107506792A (en) | A semi-supervised salient object detection method | |
US11367206B2 (en) | Edge-guided ranking loss for monocular depth prediction | |
US20220335572A1 (en) | Semantically accurate super-resolution generative adversarial networks | |
CN103839244B (en) | Real-time image fusion method and device | |
CN107341449A (en) | A GMS precipitation estimation method based on cloud-cluster change features | |
Singh et al. | Visibility enhancement and dehazing: Research contribution challenges and direction | |
CN103077383B (en) | Human motion recognition method based on partitioned spatio-temporal gradient features | |
CN104282004A (en) | Self-adaptation equalization method based on extensible segmentation histogram | |
Li et al. | Weather-degraded image semantic segmentation with multi-task knowledge distillation | |
Xu et al. | Multi-scale dehazing network via high-frequency feature fusion | |
Wang et al. | A De-raining semantic segmentation network for real-time foreground segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Open date: 2008-07-09 |