CN101615291B - Method for detecting reaction type object - Google Patents


Info

Publication number
CN101615291B
Authority
CN
China
Prior art keywords
pixel
image data
data
probability
detecting
Prior art date
Legal status
Active
Application number
CN200810126130A
Other languages
Chinese (zh)
Other versions
CN101615291A (en)
Inventor
张智豪
杨宗岚
Current Assignee
RUIZHI TECHNOLOGY CO LTD
Original Assignee
RUIZHI TECHNOLOGY CO LTD
Priority date
Filing date
Publication date
Application filed by RUIZHI TECHNOLOGY CO LTD
Priority to CN200810126130A
Publication of CN101615291A
Application granted
Publication of CN101615291B
Status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a feedback-type object detection method in the field of image processing, comprising the following steps: performing an object segmentation procedure, in which foreground objects and segmentation data are determined from first image data and predicted target positions; performing an object extraction procedure so that each foreground object has corresponding first feature data; performing an object tracking procedure to obtain at least one item of second feature data; and performing an object projection procedure that predicts, from the second feature data and second image data, the target positions corresponding to the foreground objects and feeds those target positions back to the object segmentation procedure for segmenting third image data. The invention is suitable for image processing.

Description

A feedback-type object detection method
Technical field
The present invention relates to image processing, and more particularly to an object detection method that predicts the positions of foreground objects in order to perform object segmentation.
Background technology
In an image processing system, if each pixel can be handled with respect to the object it belongs to, the system can obtain more information about the image content and can further process the events occurring in the scene. Using the properties of objects in a frame, for example their appearance, their movement, and so on, prior-art object detection algorithms can detect foreground objects and segment the image into foreground and background objects. Object detection algorithms can be applied in many areas, for example intelligent security surveillance, computer vision applications, human-machine interfaces, and image compression.
For intelligent security surveillance, object detection algorithms can remedy the shortcomings of conventional surveillance systems, saving monitoring manpower and improving the accuracy of special-event detection. If the object detection algorithm can detect objects accurately, it can significantly improve monitoring efficiency and raise alarms more precisely. On the application side, an accurate object detection algorithm can handle not only simple events, such as intruder detection, but also special events, for example abandoned objects (a backpack bomb at an airport), stolen articles (museum security), or a suspicious person tailing someone.
Please refer to Fig. 1, which shows the functional block diagram of a prior-art object detection algorithm. The object segmentation module cuts the foreground objects out of the input image. The object extraction module builds object information from the segmented objects according to their features. By tracking the motion of objects across frames, the object tracking module can derive data such as object speed. Please refer to Fig. 2, which shows a schematic diagram of a prior-art object segmentation method.
Prior-art object segmentation methods mainly fall into the following categories:
1. Frame difference: each pixel of the current frame is subtracted from the corresponding pixel of the previous frame to find moving objects. The advantage of this method is computational simplicity; its disadvantage is that a foreground object that does not move cannot be segmented.
2. Region merge: neighboring pixels are merged according to their similarity, and after a certain number of iterations, objects with consistent characteristics are found. The disadvantages of this method are that it can only find objects with uniform characteristics and that it requires a certain number of iterations; its advantage is that, because only neighboring pixels are combined, no background model needs to be maintained.
3. Background subtraction: a background model is built from historical frames, each pixel is compared against the model, and pixels that differ from the background are marked as objects. The advantage of this method is higher reliability, with better resistance to conditions such as dynamic backgrounds; its disadvantage is that a background model must be maintained.
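As an illustration of the first and third strategies above, the following is a minimal Python sketch of frame differencing and of background subtraction with a running-average background model. The function names, the flat-list-of-grayscale-values image representation, and the threshold and learning-rate values are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch of frame difference and background subtraction.
# Images are flat lists of grayscale values; thresholds are assumed.

def frame_difference(curr, prev, threshold=25):
    """Mark pixels whose change between consecutive frames exceeds the threshold."""
    return [1 if abs(c - p) > threshold else 0 for c, p in zip(curr, prev)]

def update_background(background, frame, learning_rate=0.05):
    """Running-average background model: slowly absorb new observations."""
    return [(1 - learning_rate) * b + learning_rate * f
            for b, f in zip(background, frame)]

def background_subtraction(frame, background, threshold=25):
    """Mark pixels that differ from the background model."""
    return [1 if abs(f - b) > threshold else 0 for f, b in zip(frame, background)]

prev = [10, 10, 10, 10]
curr = [10, 90, 10, 10]   # one pixel changed: a moving object
print(frame_difference(curr, prev))            # [0, 1, 0, 0]
bg = update_background(prev, curr)
print(background_subtraction(curr, bg))        # [0, 1, 0, 0]
```

The sketch also makes the stated trade-offs visible: frame differencing reports nothing once the object stops changing, while background subtraction keeps reporting it until the running average absorbs it.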
However, prior-art segmentation algorithms all detect purely from the pixel as a starting point and do not work from the viewpoint of the "object". They therefore easily produce false alarms, for example mistaking shadow changes or image noise for foreground objects, which increases the rate of misjudgment.
When an existing segmentation algorithm performs object segmentation, it usually sets a threshold to serve as the distinction between foreground and background. Setting this threshold runs into an awkward trade-off. The most common problem is that if the threshold is set too loosely, the noise, reflections, and faint shadow changes produced around objects are treated as foreground; if it is set too tightly, some foreground objects similar to the background cannot be segmented. Related patents include US6999620, US6141433, and US6075875.
Prior-art segmentation algorithms still fail to reach a satisfactory accuracy, which imposes many restrictions on their use, for example:
1. When the color characteristics of an object are quite close to those of the background, existing segmentation algorithms have difficulty segmenting it accurately.
2. Existing segmentation algorithms easily break an object apart through mis-segmentation (for example when part of a body is similar in color to the background), so that a single object is judged to be two objects.
3. When the frame contains changing light reflections and shadows, existing segmentation algorithms have difficulty segmenting accurately and tend to segment shadow changes as new foreground objects, increasing the number of false alarms.
4. The background learning rate also varies the outcome: when the learning rate is fast, an object that does not move for a short while is learned into the background; when it is slow, the background model cannot be updated in real time when the background changes. All of these effects can cause the segmentation algorithm to fail.
In summary, existing segmentation algorithms not only suffer many restrictions but also have serious drawbacks that introduce flaws into the image processing pipeline. Most of these shortcomings arise because existing segmentation algorithms take the pixel as their starting point, and they therefore urgently need improvement.
Summary of the invention
In view of the above problems, the invention provides a feedback-type object detection method. Because the invention performs segmentation with the object as the main subject, it improves on the traditional pixel-based segmentation methods. The invention predicts the positions of foreground objects by an object projection technique and uses them to guide object segmentation, resolving the flaws produced by prior-art segmentation and improving its accuracy.
To achieve the above purpose, the invention proposes a feedback-type object detection method suitable for an image processing system. At time t (i.e. the t-th frame), the second image data (frames t-1, t-2, ..., t-n) were produced before the first image data (frame t). The method comprises the following steps. The method performs an object segmentation procedure: it inputs the first image data and, according to the first image data and the target positions computed by the object projection procedure, segments out the foreground objects and outputs segmentation data (a binary image mask). The method then performs an object extraction procedure: it inputs the segmentation data and, according to the foreground objects and the segmentation data, extracts the first feature data corresponding to each foreground object. Next, the method performs an object tracking procedure: it inputs the first feature data and analyzes the correspondence between the first feature data of the first image data and those of the second image data, to obtain second feature data for each object in the first image data. Thereafter, the method performs the object projection procedure: it inputs the second feature data and analyzes them together with the second feature data of the second image data, to predict the target positions of the foreground objects in the third image data (frame t+1); it then outputs these target positions to the object segmentation procedure, to segment the foreground objects in the third image data (frame t+1).
In the invention, the first image data means the current frame, i.e. frame t. The second image data means the historical frames, i.e. frames t-1, t-2, ..., t-n. The third image data means the next frame, i.e. frame t+1. The first feature data means the object information obtained by the object extraction procedure. The second feature data means the feature information obtained by the object tracking procedure. The first position means the position of an object in the first image data, the second position its position in the second image data, and the third position its position in the third image data. The first probability means the probability, learned during segmentation, that each target position produced by the object projection procedure is foreground. The second probability means the probability obtained by comparison with a Gaussian mixture background model. The third probability means the probability obtained by comparing the target pixel with its neighboring pixels. Combining the first, second, and third probabilities yields the foreground probability that foreground appears at that position.
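As a hedged illustration of the second probability, the sketch below scores a pixel against a set of background modes in the style of a Gaussian mixture background model. The matching rule (a pixel within about 2.5 standard deviations of a mode matches it) and the weight-based score are common conventions assumed here, since the text does not spell them out.

```python
import math

def foreground_probability(pixel, background_modes):
    """Second-probability sketch: how poorly the background modes explain
    the pixel. background_modes is a list of (weight, mean, variance)
    tuples, one per mode (the N models of Fig. 7). Returns a value in
    [0, 1]; 1.0 means no mode matches the pixel at all."""
    best_match = 0.0
    for weight, mean, variance in background_modes:
        # a pixel within ~2.5 standard deviations of a mode "matches" it
        if abs(pixel - mean) <= 2.5 * math.sqrt(variance):
            best_match = max(best_match, weight)
    return 1.0 - best_match
```

A pixel close to a heavily weighted background mode thus receives a low foreground probability, while a pixel that no mode explains receives 1.0.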
According to a preferred embodiment of the invention, the object segmentation procedure comprises the following steps. The method reads one pixel of the first image data as the target pixel. Then, according to the target pixel and the corresponding target position produced by the object projection procedure, it determines the probability that the target pixel is a foreground pixel, taken as the first probability. Thereafter, the method compares the similarity of the target pixel with the Gaussian mixture background model to determine the probability that the target pixel is a foreground pixel, taken as the second probability. Next, the method compares the similarity of the target pixel with its neighboring pixels to determine the probability that the target pixel is a foreground pixel, taken as the third probability. Finally, according to the first, second, and third probabilities, it decides whether the target pixel is a foreground pixel.
According to a preferred embodiment of the invention, the object segmentation procedure further comprises the following steps. Using the Gaussian mixture background model, the method obtains a temporal difference parameter. Then, from the neighboring pixels of the target pixel, it obtains a spatial difference parameter. If the sum of the temporal difference parameter and the spatial difference parameter is greater than a threshold, the method judges the target pixel to be a foreground pixel; if the sum is less than the threshold, it judges the target pixel not to be a foreground pixel.
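The decision rule of this embodiment, foreground when the temporal difference parameter plus the spatial difference parameter exceeds a threshold that the projection feedback may lower, can be sketched as follows. The parameter names and the amount by which the threshold is lowered are assumptions for illustration only.

```python
def is_foreground(temporal_diff, spatial_diff, in_projected_region,
                  base_threshold=30.0, projection_bonus=10.0):
    """Foreground decision sketch: a pixel is foreground when the
    temporal-difference parameter plus the spatial-difference parameter
    exceeds a threshold; the threshold is lowered where the object
    projection predicts a target (the first probability's feedback)."""
    threshold = base_threshold
    if in_projected_region:  # the projection expects an object here
        threshold -= projection_bonus
    return temporal_diff + spatial_diff > threshold
```

The same pixel evidence can therefore be judged background outside the projected region and foreground inside it, which is exactly the feedback effect the embodiment describes.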
According to a preferred embodiment of the invention, if a target position is projected onto a corresponding position, the probability that a foreground pixel appears at that position is raised, or equivalently the threshold for judging whether that position is foreground is lowered.
According to a preferred embodiment of the invention, the object projection procedure comprises the following steps. From the second feature data and the second image data, the object projection procedure learns the target positions (first positions) of all target objects in the first image data (frame t, i.e. the current frame). Then, according to the first positions in the first image data and the second positions in the second image data, the object projection procedure determines the third position of each target object in the third image data at frame t+1 (i.e. the position of that target object in frame t+1). The target positions are computed as follows. From the second image data, the method learns the second positions of the target object (i.e. its positions in frames t-1, t-2, ..., t-n). Thereafter, from the first position and the second positions, it estimates the motion direction and velocity corresponding to that target object. Next, it records the historical motion direction and historical velocity. It then predicts the motion direction and velocity corresponding to frame t+1. Finally, it predicts the target position (i.e. the third position) of the target object in the next image (the third image data).
In summary, the invention proposes a feedback-type object detection method. Because the object tracking function can derive the speed of each object, the invention uses the tracking result in an object projection procedure to predict the positions of the foreground objects in the next frame, which significantly improves the accuracy of object segmentation. The invention has at least the following beneficial effects:
1. To achieve more accurate object detection, the invention uses data from the whole detection system to adjust the threshold intelligently, so that accuracy improves significantly.
2. The invention predicts object positions by the principle of projection, which is not only novel but also inventive in segmentation technology. The purpose of projection is to use the second image data (frames t-1, t-2, ..., t-n) to predict where the objects of the third image data (frame t+1) may appear. The method then feeds this predicted position back to the segmentation module as an aid, for example raising the probability of an object appearing in the projected region and lowering the probability of a foreground object appearing in regions not projected onto. The invention thereby improves segmentation accuracy and reduces false alarms.
3. With projection assisting segmentation, the projection can refill parts of an object that were accidentally cut off, overcoming the prior-art drawback of a single object being mistaken for two objects because of such a break.
4. With projection assisting segmentation, the projection increases the accuracy of detecting object contours and raises the probability of successfully segmenting an object from a similar background.
5. With projection assisting segmentation, the threshold can be adjusted according to the projection result, effectively reducing the harmful effects of using a single fixed threshold, for example by lowering the threshold in the projected region and raising it in non-projected regions.
6. With projection assisting segmentation, a foreground object can remain stationary in the frame for longer without being quickly absorbed into the background and going undetected.
7. With projection assisting segmentation, the projection overcomes the drawback of conventional detection algorithms that segment in units of pixels; it uses the characteristics of the whole object to increase the correctness of segmentation.
From the above, the object projection computes, for each position, the probability that a foreground object may appear there, and adjusts the segmentation strength of the algorithm (for example its threshold) to improve the accuracy of the whole detection system.
Description of drawings
To make the objects, features, and advantages of the invention more readily apparent, preferred embodiments are described in detail below in conjunction with the accompanying drawings:
Fig. 1 is the functional block diagram of a prior-art object detection algorithm;
Fig. 2 is a schematic diagram of a prior-art object segmentation method;
Fig. 3 is the functional block diagram of the feedback-type object detection method according to a preferred embodiment of the invention;
Fig. 4 is the flow chart of the object segmentation procedure according to a preferred embodiment of the invention;
Fig. 5 is the flow chart for determining the probability that a target pixel is a foreground pixel according to a preferred embodiment of the invention;
Fig. 6 is the flow chart of the object projection procedure according to a preferred embodiment of the invention;
Fig. 7 is a schematic diagram of object segmentation according to a preferred embodiment of the invention.
The main reference symbols in the figures are as follows:
302: object segmentation module
304: object extraction module
306: object tracking module
308: object projection module
S402-S430: steps of the flow chart
S602-S626: steps of the flow chart
700: first image data
702: target pixel
704, 706, 708: Gaussian mixture background models
Embodiment
To illustrate the technical solutions of the embodiments of the invention more clearly, the embodiments are described in detail below with reference to the accompanying drawings. The following description covers only some embodiments of the invention; those of ordinary skill in the art can obtain further embodiments from them without creative effort.
Please refer to Fig. 3, which shows the functional block diagram of the feedback-type object detection method according to a preferred embodiment of the invention. The method is suitable for image processing, where at least one item of second image data (frames t-1, t-2, ..., t-n) was produced before the first image data (frame t). The block diagram comprises an object segmentation module 302, an object extraction module 304, an object tracking module 306, and an object projection module 308. The method inputs the first image data (frame t) and the target positions produced from the second image data (frames t-1, t-2, ..., t-n) into the object segmentation module 302. Next, the method performs the object segmentation procedure, so that the object segmentation module 302 outputs the corresponding binary image mask to the object extraction module 304. The method then performs the object extraction procedure, so that the object extraction module 304 outputs the corresponding first feature data to the object tracking module 306. Thereafter, the method performs the object tracking procedure, so that the object tracking module 306 outputs the corresponding second feature data to the object projection module 308. Finally, the method performs the object projection procedure, so that the object projection module 308 outputs the target positions corresponding to the first image data back to the object segmentation module 302, to assist object segmentation of the third image data (frame t+1).
The method comprises the following steps. The method performs the object segmentation procedure: it inputs the first image data and the target positions and, according to them, segments out all foreground objects in the frame together with their corresponding segmentation data. Then, the method performs the object extraction procedure: it inputs the segmentation data, which form a binary image mask, and, according to the foreground objects and the segmentation data, obtains the first feature data corresponding to each foreground object. Thereafter, the method performs the object tracking procedure: it inputs the first feature data and analyzes the correspondence between the first feature data of the first image data and those of the second image data, learning the correspondences by comparison, to obtain second feature data for each object in the first image data. Then, the method performs the object projection procedure: it inputs the second feature data and analyzes them together with the second feature data corresponding to the second image data, to predict the target position (the third position) corresponding to each foreground object. Finally, the method outputs these target positions to the object segmentation procedure, to perform object segmentation of the third image data.
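The data flow of Fig. 3 can be sketched as a loop in which the projection module's output feeds the next frame's segmentation. Every stage below is a deliberately naive stub over one-dimensional "frames"; only the wiring between the four modules, not the stub logic, reflects the method.

```python
# Skeleton of the feedback loop of Fig. 3. All four stages are stubs;
# the predicted positions from projection feed the next segmentation.

def segment(frame, predicted_positions):
    """Object segmentation stub: returns a binary mask."""
    return [1 if p in predicted_positions else 0 for p in range(len(frame))]

def extract_features(mask):
    """Object extraction stub: first feature data, here foreground indices."""
    return [i for i, v in enumerate(mask) if v]

def track(features, history):
    """Object tracking stub: second feature data, here a naive displacement."""
    if history and history[-1] and features:
        return features[0] - history[-1][0]
    return 0

def project(features, velocity, frame_len):
    """Object projection stub: predicted positions for the next frame."""
    return {min(frame_len - 1, max(0, f + velocity)) for f in features}

frames = [[0] * 8 for _ in range(3)]
predicted = {2}            # initial target position, assumed for the demo
history = []
for frame in frames:
    mask = segment(frame, predicted)          # segmentation uses feedback
    features = extract_features(mask)         # extraction
    velocity = track(features, history)       # tracking
    history.append(features)
    predicted = project(features, velocity, len(frame))  # feed back
```

Replacing each stub with the real procedure described in this embodiment, while keeping the loop's wiring unchanged, yields the feedback architecture of Fig. 3.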
Please refer to Fig. 4, which shows the flow chart of the object segmentation procedure according to a preferred embodiment of the invention. The procedure comprises the following steps. Object segmentation begins (S402), and the method reads one pixel of the first image data (frame t) as the target pixel (S404). Next, the method inputs the second image data (frames t-1, t-2, ..., t-n) and determines the target positions corresponding to frame t-1 (S406). The method then reads these target positions (S408). Then, according to the target pixel and the corresponding target position, the method determines the probability that a foreground pixel appears at the target position, which becomes the first probability (S410). In addition, from the Gaussian mixture background model, the corresponding temporal segmentation data are obtained (S412), and the method reads them (S414). Then, the method compares the similarity of the target pixel with the Gaussian mixture background model to determine the probability that the target pixel is a foreground pixel, which becomes the second probability (S416). In addition, the method reads the first image data (S418) and, from the pixels neighboring the target pixel, obtains spatial data (S420). Thereafter, the method compares the similarity of the target pixel with its neighboring pixels to determine the probability that the target pixel is a foreground pixel, which becomes the third probability (S422). Then, according to the first, second, and third probabilities, the method decides whether the target pixel is a foreground pixel (S424) and outputs the target pixel to the binary image mask (S426). The method then judges whether all pixels of the whole frame have been segmented (S428). If the pixels of the whole frame have not all been segmented, the method performs step S404 again; if segmentation of all pixels is complete, the method ends the object segmentation procedure (S430).
Please refer to Fig. 5, which shows the flow chart for determining the probability that a target pixel is a foreground pixel according to a preferred embodiment of the invention. Forming the foreground-pixel probability comprises the following steps. From the first image data and the target position read from the object projection information, the first probability is learned. Through the Gaussian mixture background model, the method obtains the temporal difference parameter, from which the second probability is learned. Then, from the pixels neighboring the target pixel, the method obtains the spatial difference parameter, from which the third probability is learned. According to the first probability, the threshold used to judge the second and third probabilities is adjusted, and by comparison with that threshold the foreground-pixel probability is obtained. This probability then decides whether the pixel is a foreground pixel, completing the object segmentation of that pixel.
Please refer again to Fig. 3. The object extraction procedure can use conventional Connected Component Labeling to analyze the connectivity, positions, and distribution of connected components, to obtain the first feature data. The object tracking procedure can use an object matching algorithm that compares successive frames one-to-one to find similar objects to track, to obtain the second feature data.
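The Connected Component Labeling step mentioned above can be sketched with a flood fill over the binary mask, followed by computation of the kind of first feature data (object mass center, object size) the extraction module produces. The 4-connectivity choice and the helper names are assumptions for illustration.

```python
def label_components(mask, width, height):
    """Flood-fill connected-component labeling (4-connectivity) over a
    binary mask stored row-major as a flat list. Returns per-pixel labels
    (0 = background) and the number of components found."""
    labels = [0] * (width * height)
    next_label = 0
    for start in range(width * height):
        if mask[start] and not labels[start]:
            next_label += 1
            labels[start] = next_label
            stack = [start]
            while stack:
                idx = stack.pop()
                x, y = idx % width, idx // width
                for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                    if 0 <= nx < width and 0 <= ny < height:
                        nidx = ny * width + nx
                        if mask[nidx] and not labels[nidx]:
                            labels[nidx] = next_label
                            stack.append(nidx)
    return labels, next_label

def first_feature_data(labels, width):
    """First-feature sketch: per-object mass center (x, y) and pixel count."""
    sums = {}
    for idx, lab in enumerate(labels):
        if lab:
            sx, sy, n = sums.get(lab, (0.0, 0.0, 0))
            sums[lab] = (sx + idx % width, sy + idx // width, n + 1)
    return {lab: (sx / n, sy / n, n) for lab, (sx, sy, n) in sums.items()}
```

Running this on a mask with two separated foreground blobs yields two labels, each with its own centroid and size, i.e. one first-feature record per foreground object.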
Please refer to Fig. 6, which shows the flow chart of the object projection procedure according to a preferred embodiment of the invention. The procedure comprises the following steps. Object projection begins (S602), and the method reads the target object to be projected (S604). In addition, the method obtains the data of the target object in the second image data (S606) and then reads the positions of the target object in the second image data (frames t-1, t-2, ..., t-n) (S608). The method also obtains the data of the target object in the first image data (the current frame t) (S610) and then, from the first image data, determines the first position of the target object at frame t, that is, it reads the position of the target object in the current frame (S612). Then, from the first position and the second positions, the method estimates the motion direction and velocity (S614) and records the historical motion direction and historical velocity (S616). The method then predicts the corresponding motion direction and velocity for the third image data (frame t+1) (S618). From steps S612 and S618, the method predicts the target position of the target object in the third image data (frame t+1) (S620) and outputs that target position in the frame-t+1 image (S622). The method then judges whether all target objects in the first image data have been projected (S624). If the target objects in the first image data have not all been projected, the method performs step S604 again; if projection of all target objects is complete, the method ends the object projection procedure (S626).
It is worth explaining that the first feature data are object information such as color distribution, object mass center, or object size. The second feature data are movement data obtained by analyzing the object's motion state, for example object velocity, object position, or motion direction. The second feature data may also be classification data indicating the kind of object, for example a person or a car; scene-location data indicating the scene where the object is located, for example a doorway, an uphill slope, or a downhill slope; interaction data obtained by analyzing the interactions between connected components, for example talking or body contact; or scene-depth data indicating the scene depth at the object's location. From the second feature data, the method can predict the target position of a target object in the next frame and feed that position back to the original object segmentation procedure, obtaining the first probability. Combined with the second and third probabilities, this makes the prediction more accurate and completes object segmentation more precisely.
Please refer to Fig. 7, which illustrates a schematic diagram of the object cutting according to a preferred embodiment of the present invention. With reference also to Fig. 5 and Fig. 6, the first image data 700 includes an object pixel 702; according to the neighboring pixels of the object pixel 702, the third probability can be obtained. Moreover, according to the N models such as multiple Gaussian mixture background model 704, multiple Gaussian mixture background model 706, and multiple Gaussian mixture background model 708, the second probability can be obtained. In addition, according to the object movement data, the method can obtain the first probability, whose mathematical form is as follows:
Pos(Obj(k), t): the position of object k at time t
MV(Obj(k), t): the motion vector of object k between time t and time t-1
MV(Obj(k), t) = Pos(Obj(k), t) - Pos(Obj(k), t-1)
MP(Obj(k), t): the motion prediction function
Low_pass_filter(X): a low-pass filter function
MP(Obj(k), t) = Low_pass_filter(MV(Obj(k), t), MV(Obj(k), t-1), MV(Obj(k), t-2), ...)
Proj_pos(Obj(k), t+1): the position at which the method predicts (projects) that object k appears at time t+1
Proj_pos(Obj(k), t+1) = Pos(Obj(k), t) + MP(Obj(k), t)
When performing the object cutting of picture t+1, if a position is the projected target location of an object, the method raises the probability that an object appears at that position; that is, the method lowers the critical value for judging that position to be foreground.
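A minimal sketch of the motion model defined above is given below. The moving average used as Low_pass_filter is an assumption; the patent only requires some low-pass filter over past motion vectors.

```python
# Sketch of the first-probability motion model. The moving-average low-pass
# filter and the `pos` layout (object id -> list of (x, y) indexed by time)
# are assumptions for illustration.

def motion_vector(pos, k, t):
    """MV(Obj(k), t) = Pos(Obj(k), t) - Pos(Obj(k), t-1)."""
    (x1, y1), (x0, y0) = pos[k][t], pos[k][t - 1]
    return (x1 - x0, y1 - y0)

def motion_prediction(pos, k, t, n=3):
    """MP(Obj(k), t): low-pass filter (here: mean) of the last n motion vectors."""
    mvs = [motion_vector(pos, k, t - i) for i in range(n) if t - i >= 1]
    return (sum(v[0] for v in mvs) / len(mvs),
            sum(v[1] for v in mvs) / len(mvs))

def projected_position(pos, k, t):
    """Proj_pos(Obj(k), t+1) = Pos(Obj(k), t) + MP(Obj(k), t)."""
    x, y = pos[k][t]
    mx, my = motion_prediction(pos, k, t)
    return (x + mx, y + my)

# The cutting of picture t+1 can then lower its foreground critical value
# near projected_position(...), which raises the first probability there.
```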
Those of ordinary skill in the art will appreciate that all or part of the steps in the foregoing embodiments can be accomplished by program instructions directing the related hardware. The software corresponding to said embodiments can be stored in a computer-readable storage medium.
The above are merely embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of changes or replacements within the technical scope disclosed by the present invention, and all such changes or replacements shall be encompassed within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (17)

1. A reaction type object detecting method, applicable to image processing, characterized in that at least one second image data is produced at a time before a first image data, the method comprising the following steps:
performing an object cutting step: inputting the first image data and a target location obtained by an object projection step, and, according to the first image data and the target location, cutting out all foreground objects in the picture to obtain corresponding cutting data;
performing an object capturing step: inputting the cutting data, and, according to the foreground objects and the cutting data, obtaining the corresponding first characteristic data of each foreground object;
performing an object tracing step: inputting the first characteristic data, and analyzing the corresponding first characteristic data in the first image data and the corresponding first characteristic data in the second image data to obtain at least one second characteristic datum; and
performing the object projection step: inputting the second characteristic datum, and analyzing the second characteristic datum and the second image data to predict the target location corresponding to the foreground object; then outputting the target location to the object cutting step to assist the cutting of a third image data.
2. The reaction type object detecting method as claimed in claim 1, characterized in that the object cutting step comprises the following steps:
reading one of the pixels in the first image data as an object pixel;
according to the object pixel and the corresponding target location, determining the probability that a foreground pixel appears at the target location, this probability being a first probability;
comparing the similarity of the object pixel with a background model to determine the probability that the object pixel is the foreground pixel, this probability being a second probability;
comparing the similarity of the object pixel with the neighboring pixels corresponding to the object pixel to determine the probability that the object pixel is the foreground pixel, this probability being a third probability; and
according to the first probability, the second probability, and the third probability, determining whether the object pixel is the foreground pixel.
3. The reaction type object detecting method as claimed in claim 2, characterized in that the background model is a multiple Gaussian mixture background model.
4. The reaction type object detecting method as claimed in claim 3, characterized in that the object cutting step comprises the following steps:
utilizing the multiple Gaussian mixture background model to obtain a temporal difference parameter;
obtaining a spatial difference parameter according to the neighboring pixels of the object pixel;
if the sum of the temporal difference parameter and the spatial difference parameter is greater than a critical value, judging that the object pixel is the foreground pixel; and
if the sum of the temporal difference parameter and the spatial difference parameter is less than the critical value, judging that the object pixel is not the foreground pixel.
5. The reaction type object detecting method as claimed in claim 2, characterized in that if the target location is projected to a corresponding position, the probability that the foreground pixel appears at the corresponding position is raised.
6. The reaction type object detecting method as claimed in claim 1, characterized in that the cutting data is a binary image mask.
7. The reaction type object detecting method as claimed in claim 1, characterized in that the first characteristic data comprises color distribution, object centroid, and object size.
8. The reaction type object detecting method as claimed in claim 1, characterized in that the second characteristic datum is movement data, said movement data being obtained by analyzing the moving state of an object.
9. The reaction type object detecting method as claimed in claim 8, characterized in that the movement data comprises at least one of object speed, object position, object size, and motion direction.
10. The reaction type object detecting method as claimed in claim 1, characterized in that the second characteristic datum is classification data, the classification data indicating the kind of an object.
11. The reaction type object detecting method as claimed in claim 10, characterized in that the classification data comprises at least one of a person and a car.
12. The reaction type object detecting method as claimed in claim 1, characterized in that the second characteristic datum is scene location data, the scene location data indicating the scene where an object is located.
13. The reaction type object detecting method as claimed in claim 12, characterized in that the scene location data comprises at least one of a doorway, an uphill slope, and a downhill slope.
14. The reaction type object detecting method as claimed in claim 1, characterized in that the second characteristic datum is interaction data, the interaction data being obtained by analyzing the interactive behavior between at least one connected component.
15. The reaction type object detecting method as claimed in claim 14, characterized in that the interaction data is a conversation behavior and a body contact behavior.
16. The reaction type object detecting method as claimed in claim 1, characterized in that the second characteristic datum is scene depth data, the scene depth data indicating the scene depth of an object.
17. The reaction type object detecting method as claimed in claim 1, characterized in that the object projection step comprises the following steps:
according to the second characteristic datum and the second image data, determining at least one target object;
according to the first image data, determining a first position of the target object in picture t;
according to the second image data, determining a second position of the target object in pictures t-1, t-2, ..., t-n;
according to the first position and the second position, estimating a motion direction and a motion speed;
recording a historical motion direction and a historical motion speed;
predicting, for the third image data, which is picture t+1, the corresponding motion direction and the corresponding motion speed; and
predicting the target location of the target object in the third image data.
CN200810126130A 2008-06-27 2008-06-27 Method for detecting reaction type object Active CN101615291B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810126130A CN101615291B (en) 2008-06-27 2008-06-27 Method for detecting reaction type object


Publications (2)

Publication Number Publication Date
CN101615291A CN101615291A (en) 2009-12-30
CN101615291B true CN101615291B (en) 2012-10-10

Family

ID=41494915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810126130A Active CN101615291B (en) 2008-06-27 2008-06-27 Method for detecting reaction type object

Country Status (1)

Country Link
CN (1) CN101615291B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI486837B (en) * 2012-09-18 2015-06-01 Egalax Empia Technology Inc Prediction-based touch contact tracking
CN109583262B (en) * 2017-09-28 2021-04-20 财团法人成大研究发展基金会 Adaptive system and method for object detection
CN107742171B (en) * 2017-10-31 2020-08-21 浙江工业大学 Photovoltaic power station power generation power prediction method based on mobile shadow image recognition
CN108460791A (en) * 2017-12-29 2018-08-28 百度在线网络技术(北京)有限公司 Method and apparatus for handling point cloud data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6075875A (en) * 1996-09-30 2000-06-13 Microsoft Corporation Segmentation of image features using hierarchical analysis of multi-valued image data and weighted averaging of segmentation results
US6141433A (en) * 1997-06-19 2000-10-31 Ncr Corporation System and method for segmenting image regions from a scene likely to represent particular objects in the scene
CN1423487A (en) * 2001-12-03 2003-06-11 微软公司 Automatic detection and tracing for mutiple people using various clues
US6999620B1 (en) * 2001-12-10 2006-02-14 Hewlett-Packard Development Company, L.P. Segmenting video input using high-level feedback


Also Published As

Publication number Publication date
CN101615291A (en) 2009-12-30

Similar Documents

Publication Publication Date Title
CN101610420B (en) Automatic white balance method
TWI420401B (en) Algorithm for feedback type object detection
Tian et al. Video processing techniques for traffic flow monitoring: A survey
US9373055B2 (en) Hierarchical sudden illumination change detection using radiance consistency within a spatial neighborhood
CN101729872B (en) Video monitoring image based method for automatically distinguishing traffic states of roads
KR101731461B1 (en) Apparatus and method for behavior detection of object
US20150339831A1 (en) Multi-mode video event indexing
CN104063885A (en) Improved movement target detecting and tracking method
CN103049787A (en) People counting method and system based on head and shoulder features
Ling et al. A background modeling and foreground segmentation approach based on the feedback of moving objects in traffic surveillance systems
CN101615291B (en) Method for detecting reaction type object
Zhang et al. Crowd panic state detection using entropy of the distribution of enthalpy
Campos et al. Discrimination of abandoned and stolen object based on active contours
Qianyin et al. A model based method of pedestrian abnormal behavior detection in traffic scene
CN103577832A (en) People flow statistical method based on spatio-temporal context
Salem et al. Detection of suspicious activities of human from surveillance videos
Xia et al. Automatic multi-vehicle tracking using video cameras: An improved CAMShift approach
Wang et al. A new approach for real-time detection of abandoned and stolen objects
CN116524410A (en) Deep learning fusion scene target detection method based on Gaussian mixture model
Agrawal et al. An improved Gaussian Mixture Method based background subtraction model for moving object detection in outdoor scene
Liu et al. A review of traffic visual tracking technology
Vedder et al. Zeroflow: Fast zero label scene flow via distillation
Xun et al. Congestion detection of urban intersections based on surveillance video
CN111126195B (en) Abnormal behavior analysis method based on scene attribute driving and time-space domain significance
Wang et al. Anomaly detection in crowd scene using historical information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant