CN101290682A - Movement target checking method and apparatus - Google Patents

Movement target checking method and apparatus

Info

Publication number
CN101290682A
CN101290682A, CNA2008101155911A, CN200810115591A
Authority
CN
China
Prior art keywords
image
current input
input image
characteristic
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2008101155911A
Other languages
Chinese (zh)
Inventor
王磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vimicro Corp
Original Assignee
Vimicro Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vimicro Corp filed Critical Vimicro Corp
Priority to CNA2008101155911A priority Critical patent/CN101290682A/en
Publication of CN101290682A publication Critical patent/CN101290682A/en
Pending legal-status Critical Current


Abstract

The invention discloses a method and a device for detecting a moving target, belonging to the field of image processing technology and intended to solve the problems of low detection efficiency and low detection accuracy in existing moving target detection technology. The method for detecting a moving target provided by the invention comprises the following steps: a current input image is projected onto a background model to obtain a characteristic image of the current input image; the characteristic image is back-projected onto the background model to obtain a reconstructed image of the current input image; and, by comparing the current input image with the reconstructed image, a moving target image of the current input image is determined. The method and the device are used for detecting moving targets and improve both the detection efficiency and the detection accuracy of moving target detection.

Description

Moving target detection method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to a moving target detection method and device.
Background technology
Intelligent video surveillance uses computer vision techniques to analyze video images of a monitored scene, extract key information from the scene, and generate corresponding events and alarms. It is widely used in public security, traffic management, and other fields.
An intelligent video surveillance system typically uses a camera or webcam to continuously capture a scene, processes the collected video stream, detects moving targets in it, and then performs subsequent processing such as classification, tracking, and recognition of the moving targets. Moving target detection is the foundation of intelligent video surveillance; it is of great significance to the various subsequent processing steps and has a major impact on the performance of the whole surveillance system.
Commonly used moving target detection methods at present include the temporal difference method (Temporal Difference) and the background subtraction method (Background Subtraction). The temporal difference method, also called the frame difference method, assumes that the pixel values and positions of pixels in the background image remain unchanged, and thereby separates the background image from the foreground image. The temporal difference method has several implementations. In one of them, an absolute difference operation is performed between frames of a continuous video image (also called a video stream) or of an image sequence. The algorithm flow is shown in Fig. 1: two frames f_k and f_{k-1} in the video stream or image sequence undergo an absolute difference operation to obtain a difference image D_k; the difference image is thresholded to obtain a binary image; the binary image is then filtered with mathematical morphology to obtain a foreground image R_k; connectivity analysis is performed on the foreground image, for example filling holes in the foreground image and removing isolated blocks of small area and non-connected blocks; finally, only connected components whose area is greater than a given area threshold are kept, thereby separating the foreground image from the background image. The background subtraction method performs a difference operation between the current frame image f_k and an average background image b_{k-1} to separate the background image from the foreground image. The algorithm flow of the background subtraction method, shown in Fig. 2, is basically the same as that of the temporal difference method.
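As a rough illustration of the prior-art pipeline just described, the following minimal sketch implements frame differencing on two grayscale frames, including thresholding, a simple morphological clean-up, and connectivity analysis. It assumes numpy arrays as frames and uses scipy.ndimage; the function name, the threshold T, and the minimum-area value are illustrative choices, not values given in the patent.

```python
import numpy as np
from scipy import ndimage

def temporal_difference(frame_k, frame_k_minus_1, T=30, min_area=50):
    """Prior-art frame differencing: |f_k - f_{k-1}| -> threshold -> filter -> connectivity."""
    # Absolute difference image D_k between two consecutive frames
    diff = np.abs(frame_k.astype(np.int16) - frame_k_minus_1.astype(np.int16))
    # Thresholding gives a binary image
    binary = diff >= T
    # Simple morphological filtering (one erosion followed by one dilation)
    binary = ndimage.binary_dilation(ndimage.binary_erosion(binary))
    # Connectivity analysis: keep only connected blocks larger than min_area
    labels, num = ndimage.label(binary)
    areas = ndimage.sum(binary, labels, index=np.arange(1, num + 1))
    keep_ids = np.flatnonzero(areas >= min_area) + 1
    return np.isin(labels, keep_ids)   # foreground mask R_k
```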
It can be seen that existing moving target detection techniques are mainly pixel-based. They consider only the information of each individual pixel and ignore the correlation between pixels in the image, losing a great deal of valuable information. Moreover, because they must process every pixel, their computational load is large, which is unfavorable for real-time applications.
In addition, extracting the foreground and background images by simple difference operations on a series of input images is very sensitive to noise. When the scene is static, a certain moving target detection effect can be obtained, but when the environment changes even slightly (for example, illumination changes or swaying leaves), the accuracy of the extracted foreground and background images becomes very poor, and moving blocks cannot be accurately distinguished from non-moving blocks in the image. Therefore, existing moving target detection techniques are prone to detection errors when the background image is relatively complex.
Summary of the invention
Embodiments of the invention provide a moving target detection method and device to solve the problems of low detection efficiency and low detection accuracy in existing moving target detection techniques.
A moving target detection method provided by an embodiment of the invention comprises:
projecting a current input image onto a background model to obtain a characteristic image of the current input image;
back-projecting the characteristic image onto the background model to obtain a reconstructed image of the current input image;
comparing the current input image with the reconstructed image to determine a moving target image of the current input image.
A moving target detection device provided by an embodiment of the invention comprises:
a background modeling unit, configured to establish and store a background model;
a foreground detection unit, configured to project a current input image onto the background model to obtain a characteristic image of the current input image; back-project the characteristic image onto the background model to obtain a reconstructed image of the current input image; and compare the current input image with the reconstructed image to determine a moving target image of the current input image.
In the embodiments of the invention, the current input image is projected onto the background model to obtain its characteristic image; the characteristic image is back-projected onto the background model to obtain a reconstructed image of the current input image; and the moving target image of the current input image is determined by comparing the current input image with the reconstructed image, thereby realizing a moving target detection technique with both higher detection efficiency and higher detection accuracy.
Description of drawings
Fig. 1 is a schematic flowchart of the temporal difference detection algorithm in the prior art;
Fig. 2 is a schematic flowchart of the background subtraction detection algorithm in the prior art;
Fig. 3 is a schematic structural diagram of a moving target detection device provided by an embodiment of the invention;
Fig. 4 is a schematic structural diagram of the foreground detection unit provided by an embodiment of the invention;
Fig. 5 is a schematic structural diagram of the background modeling unit provided by an embodiment of the invention;
Fig. 6 is a schematic flowchart of a moving target detection method provided by an embodiment of the invention.
Embodiment
Embodiments of the invention provide a moving target detection method and device to improve the efficiency and accuracy of moving target detection.
The embodiments of the invention use video stream images obtained by a webcam, camera, or the like as input images.
The embodiments of the invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 3, a moving target detection device provided by an embodiment of the invention comprises a background modeling unit 30, a foreground detection unit 31, a morphological filtering unit 32, and a background update unit 33.
The background modeling unit 30 is configured to establish and store a background model.
The foreground detection unit 31 is configured to project the current input image onto the background model to obtain a characteristic image of the current input image; back-project the characteristic image onto the background model to obtain a reconstructed image of the current input image; and compare the current input image with the reconstructed image to determine a binarized moving target image of the current input image (for example, moving pixels are set to 1 and background pixels to 0).
The morphological filtering unit 32 is configured to perform morphological filtering on the moving target image and output the filtered moving target image as the final detection result.
The background update unit 33 is configured to update the background model.
Preferably, referring to Fig. 4, the foreground detection unit 31 comprises:
a characteristic image unit 311, configured to project the current input image onto the background model to obtain the characteristic image of the current input image;
a reconstructed image unit 312, configured to back-project the characteristic image onto the background model to obtain the reconstructed image of the current input image;
a determination unit 313, configured to calculate the difference between the pixel values of the same pixel in the current input image and in the reconstructed image and, when the difference is greater than or equal to a preset threshold, take that pixel as a moving pixel of the current input image; the moving pixels together form the moving target image of the current input image.
The embodiment of the invention adopts an efficient and robust feature extraction algorithm, the two-dimensional principal component analysis (2D-PCA) algorithm, to extract the features of the background image and to establish the background model based on these features. During detection, an incremental learning method is used to update the background model at any time, so that the background model is continuously updated as the environment changes. The system can therefore update the background model quickly and effectively under various dynamically changing background environments and detect moving targets quickly and accurately. Of course, the background modeling unit 30 may also establish the background model with other known algorithms; or the preceding several frames of the current input image may always be kept as reference images and used as the background model; or a certain still image may simply be used as the background model, and so on.
Preferably, referring to Fig. 5, the background modeling unit 30 comprises:
an image mean unit 301, configured to calculate the image mean of several frames of background images;
an image covariance matrix unit 302, configured to calculate the image covariance matrix of the several frames of background images from their image mean;
an eigenvalue decomposition unit 303, configured to perform eigenvalue decomposition on the image covariance matrix to obtain the eigenvector corresponding to each eigenvalue of the image covariance matrix;
a projection matrix unit 304, configured to select, in descending order of the eigenvalues, the eigenvectors corresponding to certain eigenvalues to construct a projection matrix, and to use this projection matrix as the background model;
a storage unit 305, configured to store the background model.
The background update unit 33 takes the current input image as a newly added background image, reconstructs the projection matrix, and thereby updates the background model.
Each of the constituent units in the device of the embodiment of the invention is described in detail below.
1. The background modeling unit 30:
The background modeling unit 30 uses the two-dimensional principal component analysis algorithm to establish the background model. The two-dimensional principal component analysis algorithm is an image feature extraction algorithm characterized by highly effective feature extraction, fast computation, and good robustness.
At initialization, the background modeling unit 30 performs the following operations:
Collect N background images, where the i-th image is represented as an m-row, n-column matrix I_i, so that the N background images are denoted {I_1, I_2, ..., I_N}, and calculate the image mean of these N background images:
Ī = (1/N) Σ_{i=1}^{N} I_i
Using the image mean Ī, calculate the image covariance matrix of the N background images:
G_t = (1/N) Σ_{j=1}^{N} (I_j − Ī)^T (I_j − Ī)
Perform eigenvalue decomposition on the image covariance matrix G_t:
G_t = U Σ U^T
where the diagonal matrix Σ = diag(λ_1, λ_2, ..., λ_n) is the eigenvalue matrix and the corresponding eigenvector matrix is U = [u_1, u_2, ..., u_n], satisfying G_t u_i = λ_i u_i for i = 1, 2, ..., n, where n is the number of matrix columns (each input image is an m-row, n-column matrix).
Finally, in descending order of the eigenvalues, select the eigenvectors corresponding to the first M eigenvalues (M < n) to construct the projection matrix U_M = [u_1, u_2, ..., u_M]; U_M is the initial background model.
Any one of the N background images can be represented as a weighted sum over the projection matrix U_M = [u_1, u_2, ..., u_M], so U_M can represent the N background images.
It should be noted that, in general, the background images contain no moving targets. Even if the training set includes some foreground moving targets, a moving target does not always appear at the same position, so its information occupies only a very small part in the two-dimensional principal component analysis algorithm, and the feature subspace U_M obtained by the above process can still describe the background image information well.
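The following sketch shows how the initial 2D-PCA background model described above could be built with numpy: the image mean, the image covariance matrix G_t, its eigenvalue decomposition, and the projection matrix U_M formed from the eigenvectors of the M largest eigenvalues. Function and variable names are illustrative assumptions; the patent does not prescribe an implementation.

```python
import numpy as np

def build_background_model(background_images, M):
    """background_images: list of N m-by-n arrays. Returns (image mean, G_t, U_M)."""
    imgs = np.stack([img.astype(np.float64) for img in background_images])  # N x m x n
    mean = imgs.mean(axis=0)                        # image mean (m x n)
    centered = imgs - mean
    # G_t = (1/N) * sum_j (I_j - mean)^T (I_j - mean), an n x n matrix
    G_t = sum(c.T @ c for c in centered) / len(imgs)
    # Eigenvalue decomposition of the symmetric matrix G_t (eigh returns ascending order)
    eigvals, eigvecs = np.linalg.eigh(G_t)
    order = np.argsort(eigvals)[::-1]               # eigenvalues from large to small
    U_M = eigvecs[:, order[:M]]                     # projection matrix, n x M
    return mean, G_t, U_M
```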
2. The foreground detection unit 31:
Suppose the image to be detected is I. Project it onto U_M to obtain the characteristic image V of image I: with V = (v_1, v_2, ..., v_M) and U_M = (u_1, u_2, ..., u_M), V = I U_M. The characteristic image V can be understood as the main information of the current image I and can be used to characterize the current input image.
The characteristic image V of image I is back-projected onto the background model for reconstruction, which gives the reconstructed image of image I: Ĩ = V U_M^T = Σ_{k=1}^{M} v_k u_k^T.
It should be noted that the dimension of the characteristic image V is smaller than that of the current input image I: the input image I is an m-row, n-column matrix, the projection matrix U_M = [u_1, u_2, ..., u_M] is an n-row, M-column matrix, and the characteristic image V is an m-row, M-column matrix.
At each pixel (x, y) of image I, calculate the difference between the pixel values of the reconstructed image Ĩ and the true image I: d(x, y) = |I(x, y) − Ĩ(x, y)|. If d(x, y) > T holds, the pixel (x, y) is a foreground point (that is, a pixel of the moving target image); otherwise the pixel (x, y) is a pixel of the background image (a background point for short). Here T is a preset threshold that can be set as required.
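A minimal sketch of the projection, reconstruction, and thresholding steps just described, assuming a grayscale image I as an m-by-n numpy array and the projection matrix U_M from the background model; the function name and the default threshold are assumptions for illustration.

```python
import numpy as np

def detect_foreground(I, U_M, T=30):
    """Project I onto the background subspace, reconstruct, and threshold the error."""
    I = I.astype(np.float64)
    V = I @ U_M                 # characteristic image V = I * U_M (m x M)
    I_rec = V @ U_M.T           # reconstructed image (m x n)
    d = np.abs(I - I_rec)       # per-pixel difference d(x, y)
    return d > T                # True where d(x, y) > T, i.e. foreground (moving) pixels
```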
3. The morphological filtering unit 32:
The foreground image undergoes 3 × 3 median filtering, an erosion operation, and a dilation operation; connectivity analysis is then performed on the filtered image, and only connected blocks whose area is greater than a preset threshold are kept as image blocks of the moving target.
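One possible implementation of this post-processing step, using scipy.ndimage as an assumed stand-in for the morphological operations; the 3 × 3 structuring element, the function name, and the area threshold are illustrative.

```python
import numpy as np
from scipy import ndimage

def morphological_filter(foreground_mask, min_area=100):
    """3x3 median filtering, erosion and dilation, then keep large connected blocks."""
    mask = ndimage.median_filter(foreground_mask.astype(np.uint8), size=3).astype(bool)
    mask = ndimage.binary_erosion(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_dilation(mask, structure=np.ones((3, 3)))
    labels, num = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, num + 1))
    keep_ids = np.flatnonzero(areas >= min_area) + 1
    return np.isin(labels, keep_ids)
```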
4. The background update unit 33:
On the basis of the two-dimensional principal component analysis algorithm, the background modeling unit 30 obtains the feature subspace U_M that describes the background image. However, a background model U_M obtained only by offline training cannot reflect changes in the background and cannot describe a background image that changes dynamically with the environment. Therefore, the embodiment of the invention also continuously updates the feature subspace U_M of the background image online, so that dynamic backgrounds can be handled and a more accurate detection effect is achieved. The background model update may be performed once per frame, or once every several frames, because in general the change between two adjacent frames of a video is very small.
On the basis of the two-dimensional principal component analysis algorithm, the background update unit 33 uses an incremental learning method to update the background model online, as follows:
Let I_1, I_2, ..., I_m be the existing m frames of background images and I_{m+1} the newly added background image. The image mean and the image covariance matrix are updated by the following formulas, respectively:
Ī_{m+1} = (1 − α) Ī_m + α I_{m+1}    (1)
G_t^{new} = (1 − α) G_t + α² (I_{m+1} − Ī_{m+1})^T (I_{m+1} − Ī_{m+1})    (2)
where Ī_m is the image mean of the existing m frames of background images and 0 < α ≤ 1. For each newly acquired input image, the image mean is updated according to formula (1) and the image covariance matrix is updated according to formula (2).
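A minimal sketch of the incremental update, applying formulas (1) and (2) as written above and then re-decomposing the covariance matrix to refresh the projection matrix. The learning-rate default and the choice to re-decompose on every call are assumptions; as noted further below, the decomposition may instead be performed only once every few frames.

```python
import numpy as np

def update_background(mean, G_t, new_image, M, alpha=0.05):
    """Incremental update of the image mean and covariance, then a fresh U_M."""
    I = new_image.astype(np.float64)
    mean = (1 - alpha) * mean + alpha * I                       # formula (1)
    c = I - mean
    G_t = (1 - alpha) * G_t + (alpha ** 2) * (c.T @ c)          # formula (2)
    eigvals, eigvecs = np.linalg.eigh(G_t)
    order = np.argsort(eigvals)[::-1]
    U_M = eigvecs[:, order[:M]]                                 # refreshed projection matrix
    return mean, G_t, U_M
```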
Preferably, in order to reduce computational complexity, the embodiment of the invention divides each input image frame and the background model into several corresponding regions of equal size and processes the image of each region independently, comparing the image of the same region in the input image and in the background model to determine the moving pixels of each region of the input image. That is, the image of each region of the current input image is projected onto the image of the corresponding region of the background model to obtain the characteristic image of each region of the current input image; the characteristic image of each region of the current input image is back-projected onto the image of the corresponding region of the background model to obtain the reconstructed image of each region of the current input image; the image of each region of the current input image is compared with the reconstructed image of the corresponding region to determine the moving pixels of each region of the current input image; and finally the moving pixels of the regions of the input image together form the moving target image of the input image. For example, an image of size 320 × 240 is decomposed into 100 regions of 32 × 24, moving target detection is performed on the image of each region individually, and the individual detection results are finally stitched together to obtain the detection result of the entire image.
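The block decomposition could be sketched as follows, assuming the image height and width are exact multiples of the block size (as in the 320 × 240 example with 32 × 24 regions, stored here as 24-row by 32-column numpy blocks) and reusing a per-block detector such as the detect_foreground sketch above; all names are illustrative.

```python
import numpy as np

def split_into_blocks(image, block_h=24, block_w=32):
    """Split an image into equal-sized blocks keyed by their top-left corner."""
    h, w = image.shape
    return {(y, x): image[y:y + block_h, x:x + block_w]
            for y in range(0, h, block_h)
            for x in range(0, w, block_w)}

def detect_per_block(image, block_models, detect_fn, block_h=24, block_w=32):
    """block_models maps each block origin to that block's projection matrix U_M."""
    mask = np.zeros(image.shape, dtype=bool)
    for (y, x), block in split_into_blocks(image, block_h, block_w).items():
        # Detect moving pixels in this region against its own background model
        mask[y:y + block_h, x:x + block_w] = detect_fn(block, block_models[(y, x)])
    return mask   # stitched moving target mask for the whole image
```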
Further, in order to improve computation speed, when updating the background image the embodiment of the invention preferably makes no change for regions that contain moving pixels, because images of regions containing moving pixels are not used to update the background image; the original image mean and image covariance matrix of such regions are retained. Regions that contain no moving pixels do need to be updated: first, the image mean and image covariance matrix corresponding to the image of the region are updated; then eigenvalue decomposition is performed on this image covariance matrix, and the eigenvectors corresponding to the M largest eigenvalues form the new projection matrix corresponding to the image of the region, thereby obtaining the new background region image, which is stored. Preferably, since the eigenvalue decomposition of a matrix requires a certain amount of computation, in practical applications the eigenvalue decomposition may be performed only once every several frames (for example, every five or ten frames).
The method provided by the embodiment of the invention is described below with reference to the accompanying drawings.
Referring to Fig. 6, a moving target detection method proposed by an embodiment of the invention comprises:
S601: projecting the current input image onto the background model to obtain a characteristic image of the current input image;
S602: back-projecting the characteristic image of the current input image onto the background model to obtain a reconstructed image of the current input image;
S603: comparing the current input image with its reconstructed image to determine the moving target image of the current input image.
In summary, the technical solution proposed by the embodiments of the invention has the following advantages:
The background model is established by the two-dimensional principal component analysis algorithm, which incorporates the correlation information between pixels in two-dimensional space, avoids modeling every pixel independently, and achieves a better modeling effect. The current input image is projected onto the background model to obtain its characteristic image, the characteristic image is back-projected onto the background model to obtain a reconstructed image of the current input image, and the moving target image of the current input image is determined by comparing the current input image with the reconstructed image, thereby improving the efficiency of moving target detection.
In addition, updating the background model online with an incremental learning algorithm both improves the adaptivity of the background model and effectively reduces the storage space and computation required; it can adapt to continuously changing complex environments, facilitates real-time processing, and further improves the detection effect.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from the spirit and scope of the invention. If these modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is also intended to include them.

Claims (12)

1. A moving target detection method, characterized in that the method comprises:
projecting a current input image onto a background model to obtain a characteristic image of the current input image;
back-projecting the characteristic image onto the background model to obtain a reconstructed image of the current input image;
comparing the current input image with the reconstructed image to determine a moving target image of the current input image.
2. The method according to claim 1, characterized in that the step of comparing the current input image with the reconstructed image to determine the moving target image of the current input image comprises:
calculating the difference between the pixel values of the same pixel in the current input image and in the reconstructed image and, when the difference is greater than or equal to a preset threshold, taking the pixel as a moving pixel of the current input image;
forming the moving target image of the current input image from the moving pixels.
3. The method according to claim 1, characterized in that the method further comprises the step of establishing the background model.
4. The method according to claim 3, characterized in that the step of establishing the background model comprises:
calculating the image mean of several frames of background images;
obtaining the image covariance matrix of the several frames of background images from the image mean;
performing eigenvalue decomposition on the image covariance matrix to obtain the eigenvector corresponding to each eigenvalue of the image covariance matrix;
selecting, in descending order of the eigenvalues, the eigenvectors corresponding to certain eigenvalues to construct a projection matrix, and using the projection matrix as the background model.
5. The method according to claim 4, characterized in that the several frames of background images include the input image of the frame preceding the current input image.
6. The method according to claim 1, characterized in that the method further comprises:
performing morphological filtering on the moving target image, and outputting the morphologically filtered moving target image as the final detection result.
7. The method according to claim 1, characterized in that the background model and the current input image are divided into several corresponding regions;
the image of each region of the current input image is projected onto the image of the corresponding region of the background model to obtain the characteristic image of each region of the current input image;
the characteristic image of each region of the current input image is back-projected onto the image of the corresponding region of the background model to obtain the reconstructed image of each region of the current input image;
the image of each region of the current input image is compared with the reconstructed image of the corresponding region to determine the moving pixels of each region of the current input image;
the moving pixels of the regions of the current input image are stitched together into the moving target image of the current input image.
8. A moving target detection device, characterized in that the device comprises:
a background modeling unit, configured to establish and store a background model;
a foreground detection unit, configured to project a current input image onto the background model to obtain a characteristic image of the current input image; back-project the characteristic image onto the background model to obtain a reconstructed image of the current input image; and compare the current input image with the reconstructed image to determine a moving target image of the current input image.
9. The device according to claim 8, characterized in that the foreground detection unit comprises:
a characteristic image unit, configured to project the current input image onto the background model to obtain the characteristic image of the current input image;
a reconstructed image unit, configured to back-project the characteristic image onto the background model to obtain the reconstructed image of the current input image;
a determination unit, configured to calculate the difference between the pixel values of the same pixel in the current input image and in the reconstructed image and, when the difference is greater than or equal to a preset threshold, take the pixel as a moving pixel of the current input image; the moving pixels form the moving target image of the current input image.
10. The device according to claim 8, characterized in that the background modeling unit comprises:
an image mean unit, configured to calculate the image mean of several frames of background images;
an image covariance matrix unit, configured to calculate the image covariance matrix of the several frames of background images from the image mean;
an eigenvalue decomposition unit, configured to perform eigenvalue decomposition on the image covariance matrix to obtain the eigenvector corresponding to each eigenvalue of the image covariance matrix;
a projection matrix unit, configured to select, in descending order of the eigenvalues, the eigenvectors corresponding to certain eigenvalues to construct a projection matrix, and to use the projection matrix as the background model;
a storage unit, configured to store the background model.
11. The device according to claim 10, characterized in that the device further comprises:
a background update unit, configured to take the current input image as a newly added background image, reconstruct the projection matrix, and update the background model.
12. The device according to claim 8, characterized in that the device further comprises:
a morphological filtering unit, configured to perform morphological filtering on the moving target image and output the morphologically filtered moving target image as the final detection result.
CNA2008101155911A 2008-06-25 2008-06-25 Movement target checking method and apparatus Pending CN101290682A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2008101155911A CN101290682A (en) 2008-06-25 2008-06-25 Movement target checking method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2008101155911A CN101290682A (en) 2008-06-25 2008-06-25 Movement target checking method and apparatus

Publications (1)

Publication Number Publication Date
CN101290682A true CN101290682A (en) 2008-10-22

Family

ID=40034928

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2008101155911A Pending CN101290682A (en) 2008-06-25 2008-06-25 Movement target checking method and apparatus

Country Status (1)

Country Link
CN (1) CN101290682A (en)


Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101807245B (en) * 2010-03-02 2013-01-02 天津大学 Artificial neural network-based multi-source gait feature extraction and identification method
CN101807245A (en) * 2010-03-02 2010-08-18 天津大学 Artificial neural network-based multi-source gait feature extraction and identification method
CN101789128A (en) * 2010-03-09 2010-07-28 成都三泰电子实业股份有限公司 Target detection and tracking method based on DSP and digital image processing system
CN101789128B (en) * 2010-03-09 2012-01-18 成都三泰电子实业股份有限公司 Target detection and tracking method based on DSP and digital image processing system
CN102096931A (en) * 2011-03-04 2011-06-15 中南大学 Moving target real-time detection method based on layering background modeling
CN102096931B (en) * 2011-03-04 2013-01-09 中南大学 Moving target real-time detection method based on layering background modeling
CN102214308A (en) * 2011-05-17 2011-10-12 詹东晖 Pedestrian detecting method, system and device
CN102214308B (en) * 2011-05-17 2013-04-24 詹东晖 Pedestrian detecting method and system
CN102663776A (en) * 2012-03-31 2012-09-12 北京智安邦科技有限公司 Violent movement detection method based on characteristic point analysis and device thereof
CN103778785A (en) * 2012-10-23 2014-05-07 南开大学 Vehicle tracking and detecting method based on parking lot environment video monitoring
CN103581561A (en) * 2013-10-30 2014-02-12 广东欧珀移动通信有限公司 Human and scene image synthesis method and system based on rotary camera lens photographing
CN106713768B (en) * 2013-10-30 2019-07-09 Oppo广东移动通信有限公司 People's scape image composition method, system and computer equipment
CN103581561B (en) * 2013-10-30 2017-03-29 广东欧珀移动通信有限公司 The people's scape image combining method shot based on rotating camera and system
CN106713768A (en) * 2013-10-30 2017-05-24 广东欧珀移动通信有限公司 Person-scenery image synthesis method and system, and computer device
CN104751141A (en) * 2015-03-30 2015-07-01 东南大学 ELM gesture recognition algorithm based on feature image full pixel gray values
CN104951755A (en) * 2015-06-04 2015-09-30 广东工业大学 EMD (empirical mode decomposition)-based intelligent document image block detection method
CN104951755B (en) * 2015-06-04 2018-04-10 广东工业大学 A kind of Intelligent file image block detection method based on EMD
TWI670684B (en) * 2015-06-12 2019-09-01 鴻海精密工業股份有限公司 A method for detecting and tracing a moving target and a target detection device
CN108665509A (en) * 2018-05-10 2018-10-16 广东工业大学 A kind of ultra-resolution ratio reconstructing method, device, equipment and readable storage medium storing program for executing
CN110879951A (en) * 2018-09-06 2020-03-13 华为技术有限公司 Motion foreground detection method and device
CN110879951B (en) * 2018-09-06 2022-10-25 华为技术有限公司 Motion foreground detection method and device
CN110163221A (en) * 2019-05-28 2019-08-23 腾讯科技(深圳)有限公司 Method, apparatus, the vehicle, robot of object detection are carried out in the picture
CN110163221B (en) * 2019-05-28 2022-12-09 腾讯科技(深圳)有限公司 Method and device for detecting object in image, vehicle and robot
CN112489034A (en) * 2020-12-14 2021-03-12 广西科技大学 Modeling method based on time domain information characteristic space background

Similar Documents

Publication Publication Date Title
CN101290682A (en) Movement target checking method and apparatus
CN101266689B (en) A mobile target detection method and device
EP2250624B1 (en) Image processing method and image processing apparatus
CN102378992B (en) Articulated region detection device and method for same
CN101236656B (en) Movement target detection method based on block-dividing image
CN110120064B (en) Depth-related target tracking algorithm based on mutual reinforcement and multi-attention mechanism learning
CN103971386A (en) Method for foreground detection in dynamic background scenario
CN113052029A (en) Abnormal behavior supervision method and device based on action recognition and storage medium
CN101493944A (en) Moving target detecting and tracking method and system
CN103700087A (en) Motion detection method and device
CN102930248A (en) Crowd abnormal behavior detection method based on machine learning
CN112561951B (en) Motion and brightness detection method based on frame difference absolute error and SAD
CN101483001A (en) Video-based intrusion detection method, detection apparatus and chip
CN102457724B (en) Image motion detecting system and method
CN103679745A (en) Moving target detection method and device
CN111079539A (en) Video abnormal behavior detection method based on abnormal tracking
CN104408741A (en) Video global motion estimation method with sequential consistency constraint
CN101908214A (en) Moving object detection method with background reconstruction based on neighborhood correlation
CN102314591B (en) Method and equipment for detecting static foreground object
CN110827320A (en) Target tracking method and device based on time sequence prediction
CN104658009A (en) Moving-target detection method based on video images
CN206411692U (en) clustering system and corresponding device
CN102722720B (en) Video background extraction method based on hue-saturation-value (HSV) space on-line clustering
CN101877135A (en) Moving target detecting method based on background reconstruction
Xie et al. Real-time vehicles tracking based on Kalman filter in a video-based ITS

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Open date: 20081022