CN102831384A - Method and device for detecting discards by video - Google Patents

Method and device for detecting discards by video

Info

Publication number
CN102831384A
CN102831384A, CN201110166825A
Authority
CN
China
Prior art keywords
current frame
static
abandoned object
long-term
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201110166825
Other languages
Chinese (zh)
Other versions
CN102831384B (en)
Inventor
刘舟
吴伟国
李亮
宁文鑫
王贵锦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Sony Corp
Original Assignee
Tsinghua University
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University, Sony Corp filed Critical Tsinghua University
Priority to CN201110166825.7A priority Critical patent/CN102831384B/en
Publication of CN102831384A publication Critical patent/CN102831384A/en
Application granted granted Critical
Publication of CN102831384B publication Critical patent/CN102831384B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a method and a device for detecting an abandoned object from video. The method comprises the following steps: extracting a foreground from the current frame according to a background model; extracting a short-term static region and a short-term static edge from the foreground; determining a long-term static region and a long-term static edge based on the short-term static region and the short-term static edge; and determining the abandoned object from the long-term static region according to the long-term static edge. An abandoned object can therefore be detected from video without tracking moving targets, and the influence of factors such as illumination and crowded pedestrians is taken into account during detection, so the amount of computation is greatly reduced while high robustness and a low false-alarm rate are achieved.

Description

Method and apparatus for detecting an abandoned object from video
Technical field
The present invention relates generally to the field of computing, and more particularly to a method and apparatus for detecting an abandoned object from video.
Background art
Abandoned-object detection is important for maintaining safety in public places. Here, abandoned-object detection refers to detecting objects such as knapsacks or briefcases, possibly containing explosives, that are intentionally abandoned or thrown by a person in a public place or at some critical location. Typically, a terrorist places such a package and then detonates the bomb inside it by a timer or by remote control. This means of attack is low in cost, causes great harm, and is difficult to prevent and trace, so it has gradually become one of the main ways in which offenders carry out explosive attacks. Similar cases have emerged continuously, such as the serial bombings in Madrid, Spain in 2004 and the bombing cases in London and Liverpool in 2005, among others.
Detecting an abandoned object here means using video surveillance equipment installed in the field and analyzing the picture content of the video to detect the occurrence of an abandoned-object event.
In one known method for detecting an abandoned object from video, the moving targets in the scene are first detected, all moving targets are then tracked, and the method analyzes whether a moving target separates from another target and remains static for a period of time in order to detect the abandoned object.
There are also prior-art methods that detect an abandoned object from video without tracking moving targets. However, such methods do not consider many of the difficulties that arise in real scenes, for example occlusion by pedestrians, illumination, and the like.
Summary of the invention
A brief overview of the present invention is given below in order to provide a basic understanding of some aspects of the invention. It should be appreciated that this overview is not an exhaustive summary of the invention. It is not intended to identify key or critical parts of the invention, nor to limit the scope of the invention. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that follows.
The present invention aims to provide a method and apparatus for detecting an abandoned object from video that does not need to track all moving targets in the video frames and that takes into account factors such as occlusion by crowded pedestrians and illumination, thereby greatly reducing the amount of computation while achieving high robustness and a low false-alarm rate.
According to one aspect of the present invention, a method is provided, comprising: extracting a foreground from the current frame according to a background model; extracting a short-term static region and a short-term static edge from the foreground; determining a long-term static region and a long-term static edge based on the short-term static region and the short-term static edge; and determining the abandoned object from the long-term static region according to the long-term static edge.
According to another aspect of the present invention, an apparatus for detecting an abandoned object from video is provided, comprising: a foreground extraction unit configured to extract a foreground from the current frame according to a background model; a short-term static region and short-term static edge extraction unit configured to extract a short-term static region and a short-term static edge from the foreground; a long-term static region and long-term static edge determination unit configured to determine a long-term static region and a long-term static edge based on the short-term static region and the short-term static edge; and an abandoned-object determination unit configured to determine the abandoned object from the long-term static region according to the long-term static edge.
According to other aspects of the present invention, corresponding computer program code, a computer-readable storage medium, and a computer program product are also provided.
These and other advantages of the present invention will become more apparent from the following detailed description of the preferred embodiments of the present invention taken in conjunction with the accompanying drawings.
Brief description of the drawings
The present invention may be better understood by referring to the description given below in conjunction with the accompanying drawings, in which the same or similar reference numerals are used throughout to denote the same or similar parts. The accompanying drawings, together with the following detailed description, are included in and form part of this specification, and serve to further illustrate the preferred embodiments of the present invention and to explain the principles and advantages of the present invention. In the drawings:
Fig. 1 is a flowchart of a method for detecting an abandoned object from video according to an embodiment of the present invention;
Fig. 2 is a flowchart of a method for detecting an abandoned object from video that includes linear-noise elimination according to an embodiment of the present invention;
Fig. 3 is a flowchart illustrating the linear-noise elimination processing according to an embodiment of the present invention;
Fig. 4 is a flowchart illustrating the confidence calculation processing according to an embodiment of the present invention;
Fig. 5 is a flowchart illustrating the linear-noise elimination processing according to another embodiment of the present invention;
Fig. 6 is a flowchart of the processing for calculating the confidence between a local part of the current frame and the corresponding part of the current background according to an embodiment of the present invention;
Fig. 7 is a flowchart illustrating the processing for judging whether linear-noise elimination needs to be performed according to an embodiment of the present invention;
Fig. 8 is a flowchart illustrating the processing for determining the long-term static region and the long-term static edge based on the short-term static region and the short-term static edge according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of an apparatus for detecting an abandoned object from video according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of a linear-noise elimination unit according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a confidence calculation unit according to an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of a linear-noise elimination unit according to another embodiment of the present invention;
Fig. 13 is a schematic structural diagram of a local confidence calculation unit according to an embodiment of the present invention;
Fig. 14 is a schematic diagram of a judging unit according to an embodiment of the present invention;
Fig. 15 is a schematic structural diagram of a long-term static region and long-term static edge determination unit according to an embodiment of the present invention;
Fig. 16 is a block diagram of an exemplary configuration of a general-purpose personal computer in which the method and/or apparatus according to embodiments of the present invention may be implemented.
Detailed description of embodiments
Exemplary embodiments of the present invention will be described below in conjunction with the accompanying drawings. For clarity and conciseness, not all features of an actual implementation are described in this specification. It should be appreciated, however, that in developing any such actual implementation, many implementation-specific decisions must be made in order to achieve the developer's specific goals, for example compliance with system-related and business-related constraints, and that these constraints may vary from one implementation to another. Moreover, it should be appreciated that, although such development work may be complex and time-consuming, it is merely a routine undertaking for those skilled in the art having the benefit of this disclosure.
It should also be noted that, in order not to obscure the present invention with unnecessary details, only the apparatus structures and/or processing steps closely related to the solution of the present invention are shown in the drawings, while other details of little relevance to the present invention are omitted.
Fig. 1 is a flowchart of a method for detecting an abandoned object from video according to an embodiment of the present invention.
As shown in Fig. 1, at step S102, a foreground can be extracted from the current frame.
Specifically, the foreground can be extracted from the current frame by using a background model. The processing of extracting the foreground from the current frame by using the background model can be realized by any known method or any method that will appear in the future.
In one embodiment of the present invention, background modeling can be performed by the known GMM (Gaussian mixture model) method, and the foreground can be extracted by subtracting the current background from the current frame. The GMM background modeling method is described in detail in the article by Stauffer C. and Grimson W. E. L. entitled "Adaptive background mixture models for real-time tracking" (IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York: IEEE Computer Society Press, 1999: 246-252), the entire content of which is incorporated herein by reference and is not repeated here for the sake of conciseness.
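As an illustration only, the following Python sketch shows one way the GMM-style background subtraction described above could be realized; it relies on OpenCV's MOG2 background subtractor rather than the exact Stauffer-Grimson formulation, and all parameter values are assumptions rather than values taken from the patent.

```python
import cv2

# Minimal sketch of GMM-style background subtraction (assumed parameters).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)

def extract_foreground(frame_bgr):
    """Return (foreground mask, current background estimate) for one frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    fg_mask = subtractor.apply(gray)              # 0 = background, 255 = foreground
    background = subtractor.getBackgroundImage()  # current background estimate
    return fg_mask, background
```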
Then, at step S104, a short-term static region and a short-term static edge can be extracted from the foreground.
Here, the short-term static region may be a region that remains unchanged in the foreground for a period of time (for example, several tens of frames or a shorter length of time), and the short-term static edge may be an edge that remains unchanged in the foreground during that period of time.
In one embodiment of the present invention, the processing of extracting the short-term static region from the foreground can be realized by inter-frame differencing.
Specifically, the current frame image can be differenced against an adjacent frame image, and whether each pixel in the current frame is short-term static can be judged according to the difference value of that pixel. The short-term static region can thus be extracted from the foreground.
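As an illustrative sketch only, inter-frame differencing combined with the foreground mask might look like the following; the threshold value and the choice of the previous frame as the adjacent frame are assumptions.

```python
import cv2
import numpy as np

def short_term_static_mask(curr_gray, prev_gray, fg_mask, diff_thresh=5):
    """Foreground pixels whose gray value barely changes between adjacent frames."""
    frame_diff = cv2.absdiff(curr_gray, prev_gray)
    static = (frame_diff < diff_thresh).astype(np.uint8) * 255
    return cv2.bitwise_and(static, fg_mask)   # short-term static region
```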
In addition, the Canny algorithm can be used to extract edges from the foreground, and the edges that remain unchanged during the period of time (for example, several tens of frames or a shorter length of time) are taken as the short-term static edges.
The Canny algorithm is a commonly used edge extraction method in the field of image processing. For details of the Canny algorithm, reference may be made to the article by Canny, J. entitled "A Computational Approach to Edge Detection" (IEEE Trans. Pattern Analysis and Machine Intelligence, 8: 679-714, 1986), the entire content of which is incorporated herein by reference and is not repeated here for the sake of conciseness.
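A minimal sketch of how short-term static edges might be accumulated, assuming Canny edge detection on the foreground and a simple per-pixel persistence counter; the window length and Canny thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

class ShortTermStaticEdges:
    """Keep edges that persist unchanged over a short window of frames (assumed window)."""
    def __init__(self, window=30, canny_lo=50, canny_hi=150):
        self.window, self.lo, self.hi = window, canny_lo, canny_hi
        self.persistence = None

    def update(self, gray, fg_mask):
        edges = cv2.Canny(gray, self.lo, self.hi)
        edges = cv2.bitwise_and(edges, fg_mask)          # edges inside the foreground
        if self.persistence is None:
            self.persistence = np.zeros(edges.shape, np.int32)
        self.persistence = np.where(edges > 0, self.persistence + 1, 0)
        return (self.persistence >= self.window).astype(np.uint8) * 255
```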
The order of the processing of extracting the short-term static region and the processing of extracting the short-term static edge is not limited; they can be executed in parallel or one after the other.
The method described in the above embodiment is merely an example, and the present invention is not limited thereto. Various modifications may be made to the above methods of extracting the short-term static region and the short-term static edge, and other methods, known or to appear in the future, for extracting a short-term static region and a short-term static edge can also be used in the present invention to extract the short-term static region and the short-term static edge from the foreground.
In addition, in the above embodiment, the processing of extracting the short-term static region and the processing of extracting the short-term static edge are both performed on the foreground of the entire frame, but the present invention is not limited thereto. For example, in another embodiment of the present invention, the processing of extracting the short-term static edge may be performed only within the short-term static region. For example, the Canny algorithm can be used to extract the short-term static edge within the short-term static region.
In this case, the processing of extracting the short-term static region needs to be performed before the processing of extracting the short-term static edge. By extracting the short-term static edge only within the short-term static region, the amount of computation can be reduced and the efficiency can be improved.
Referring again to Fig. 1, at step S106, the long-term static region and the long-term static edge can be determined based on the short-term static region and the short-term static edge.
Here, the long-term static region may refer to the part of the short-term static region that remains static for a long time, and the long-term static edge may refer to the part of the short-term static edge that remains static for a long time.
Specifically, whether each pixel in the short-term static region belongs to the long-term static region can be judged according to the accumulated time for which that pixel remains in the static state and the accumulated time for which it is in the non-static state, and whether each pixel in the short-term static edge belongs to the long-term static edge can be judged in the same way, whereby the long-term static region and the long-term static edge can be obtained.
In this way, the influence of the abandoned object being frequently occluded by the moving foreground (for example, a crowded stream of people) can be effectively eliminated, greatly improving the accuracy of abandoned-object detection.
Then, referring to Fig. 1, at step S108, the abandoned object can be determined from the long-term static region according to the long-term static edge.
Specifically, the abandoned object can be further determined according to the correspondence between the long-term static region and the long-term static edge (that is, a long-term static region should have a long-term static edge adjacent to it).
More specifically, a long-term static region having an adjacent long-term static edge can be determined to be an abandoned object, while a long-term static region having no long-term static edge is discarded as noise.
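As an illustration only, one simple way to test the adjacency described above is to dilate each candidate long-term static region slightly and check whether it overlaps the long-term static edge map; the dilation size and the overlap ratio are assumed values, not taken from the patent.

```python
import cv2
import numpy as np

def find_abandoned_objects(long_static_region, long_static_edge,
                           dilate_px=3, min_edge_overlap=0.05):
    """Keep connected regions that have long-term static edges adjacent to them."""
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    count, labels = cv2.connectedComponents(long_static_region)
    kept = np.zeros_like(long_static_region)
    for i in range(1, count):
        region = (labels == i).astype(np.uint8) * 255
        ring = cv2.dilate(region, kernel)                 # region plus a thin border
        edge_hits = cv2.countNonZero(cv2.bitwise_and(ring, long_static_edge))
        if edge_hits >= min_edge_overlap * cv2.countNonZero(region):
            kept = cv2.bitwise_or(kept, region)           # candidate abandoned object
        # otherwise the region is discarded as noise
    return kept
```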
It can be seen that, with the method for detecting an abandoned object from video according to the embodiment shown in Fig. 1, an abandoned object can be detected from video without tracking moving targets, while the influence of crowded pedestrians and the like is taken into account during detection. The amount of computation can therefore be significantly reduced while high robustness and a low false-alarm rate are achieved.
The method for detecting an abandoned object from video shown in Fig. 1 is merely an example; the present invention is not limited thereto, and various modifications can be made. For example, in another embodiment of the present invention, the method for detecting an abandoned object from video may further comprise a processing of eliminating linear noise.
Here, the term "linear noise" generally refers to noise that causes an at least approximately linear effect on a major part of the frame. For example, when the scene in the frame is subjected to illumination, the illumination causes a linear change in the gray-scale values of most of the pixels in the frame, so such illumination can be regarded as the "linear noise" referred to herein.
Although illumination is taken as an example of linear noise in the above description, the description is merely an example; in fact, any noise that causes such a linear effect on a major part of the pixels in the frame can be regarded as the "linear noise" herein.
Fig. 2 is a flowchart of a method for detecting an abandoned object from video that includes linear-noise elimination according to an embodiment of the present invention.
In the embodiment shown in Fig. 2, a step S204 of eliminating linear noise is further included after the foreground extraction step; the details of the linear-noise elimination processing will be described below. The step S202 of extracting the foreground, the step S206 of extracting the short-term static region and the short-term static edge, the step S208 of determining the long-term static region and the long-term static edge, and the step S210 of determining the abandoned object in Fig. 2 are similar to steps S102, S104, S106, and S108 in Fig. 1, and are not described again here for the sake of conciseness.
In the method according to the embodiment shown in Fig. 2, since the processing of eliminating linear noise is added, the influence of nearby linear-noise sources (for example, light sources) can be reduced. The accuracy of detecting an abandoned object from video can thereby be further improved and the false-alarm rate reduced.
The method shown in Fig. 2 is also merely an example; the present invention is not limited thereto, and other modifications can be made.
For example, in the embodiment shown in Fig. 2 the processing of eliminating linear noise is performed after the processing of extracting the foreground, but the present invention is not limited thereto; the step of eliminating linear noise may also be performed before the step of extracting the foreground.
Specifically, linear noise may first be eliminated from the current frame image, and the foreground may then be extracted from the processed frame image according to the background model. With this processing order, the influence of linear-noise sources (for example, illumination) can also be eliminated, so the accuracy of detecting an abandoned object from video can be improved and the false-alarm rate reduced.
The processing of eliminating linear noise can be performed on the whole scene of the current frame, or on one or more parts of the current frame.
In one embodiment of the present invention, the influence of linear noise can be eliminated for the entire frame.
Fig. 3 is a flowchart illustrating the linear-noise elimination processing according to this embodiment.
As shown in Fig. 3, at step S302, the linear relationship between the current frame and the current background can be calculated.
As described above, "linear noise" may refer to noise that causes a linear effect in the frame. If such linear noise is present in the current frame, then a linear relationship should exist between the current frame and the current background.
Accordingly, this linear relationship can be calculated based on the current frame and the current background.
There are many possible methods for calculating this linear relationship. In one embodiment of the present invention, the least-squares method can be used to calculate the linear relationship.
Specifically, it can be assumed that a linear relationship exists between each pixel in the current frame and the corresponding pixel in the background, and the linear relationship can then be calculated by linear fitting using the least-squares method.
In this case, the relationship between a pixel in the current frame and the corresponding pixel in the current background can be expressed by the following formula:
A_k = α·B_k + β + μ_k        (1)
In formula (1), A_k denotes the gray-scale value of a pixel in the current background, B_k denotes the gray-scale value of the corresponding pixel in the current frame, α and β are respectively the linear coefficient and the intercept characterizing the linear relationship, μ_k denotes the deviation that may exist for that pixel, and k = 1, …, M, where M denotes the number of pixels in the current frame.
According to the least-squares method, α and β should take the values that minimize the deviation (that is, the values that minimize Σ_k μ_k²). Accordingly, the following formulas can be obtained through further mathematical manipulation:
α = [Σ_{k=1}^{M} (A_k − Ā)(B_k − B̄)] / [Σ_{k=1}^{M} (B_k − B̄)²]        (2)
β = Ā − α·B̄        (3)
where Ā and B̄ denote the mean gray-scale values of the pixels in the current background and in the current frame, respectively.
In this way, the linear coefficient α and the intercept β representing the linear relationship between the current frame and the current background can be obtained.
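The following sketch computes α and β of formulas (2) and (3) over all pixels with NumPy; it is an illustrative implementation of the least-squares fit described above, not code from the patent.

```python
import numpy as np

def fit_linear_relation(background, frame):
    """Least-squares fit A ≈ α·B + β over all pixels (formulas (2) and (3))."""
    A = background.astype(np.float64).ravel()   # gray values of the current background
    B = frame.astype(np.float64).ravel()        # gray values of the current frame
    A_mean, B_mean = A.mean(), B.mean()
    alpha = np.sum((A - A_mean) * (B - B_mean)) / np.sum((B - B_mean) ** 2)
    beta = A_mean - alpha * B_mean
    return alpha, beta
```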
Referring again to Fig. 3, after the linear coefficient α and the intercept β have been calculated, the confidence that the linear relationship exists between the current frame and the current background can be calculated at step S304.
By calculating the confidence, the accuracy of the judgment as to whether a linear relationship exists between the current frame and the current background can be improved.
The processing of calculating the confidence that the linear relationship exists between the current frame and the current background can be realized based on various suitable parameters.
In one embodiment of the present invention, the confidence can be calculated based on the deviation between the linearly compensated current frame and the current background and on the dispersion of the current background.
Fig. 4 is a flowchart of the confidence calculation processing according to this embodiment.
As shown in Fig. 4, at step S402, a linear transformation can be applied to the current frame.
Specifically, the current frame can be linearly transformed using the linear coefficient α and the intercept β according to formula (4):
B′_k = α·B_k + β        (4)
where B′_k denotes, for example, the compensated gray-scale value of the k-th pixel in the current frame.
Then, at step S404, the difference between the linearly transformed current frame and the background of the current frame can be calculated.
For example, the gray-scale value of each pixel in the linearly transformed current frame can be subtracted from the gray-scale value of the corresponding pixel in the current background to obtain a per-pixel difference, and the difference between the linearly transformed current frame and the current background can then be represented according to the per-pixel differences.
For example, the difference can be represented according to formula (5):
SSR = Σ_{k=1}^{M} (A_k − α·B_k − β)²        (5)
In formula (5), SSR denotes the difference between the linearly transformed current frame and the background of the current frame, and the other symbols have the same meanings as in formulas (1)-(4).
Then, at step S406, the dispersion of the current background can be calculated.
For example, the dispersion of the background can be calculated according to formula (6):
SST = Σ_{k=1}^{M} (A_k − Ā)²        (6)
In formula (6), SST denotes the dispersion of the current background, and the other symbols have the same meanings as in formulas (1)-(4).
Then, at step S408, the confidence is calculated according to the difference and the dispersion.
For example, the confidence can be calculated according to formula (7):
R = 1 − SSR / SST        (7)
In formula (7), R denotes the confidence, and the other symbols have the same meanings as in formulas (1)-(6).
In this way, the confidence that the linear relationship exists between the current frame and the current background can be calculated.
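A minimal sketch of the confidence computation of formulas (5)-(7); it simply evaluates the coefficient of determination of the fitted linear relationship and is an illustration of the described calculation, not code from the patent.

```python
import numpy as np

def linear_noise_confidence(background, frame, alpha, beta):
    """R = 1 - SSR/SST for the fitted linear relationship (formulas (5)-(7))."""
    A = background.astype(np.float64).ravel()
    B = frame.astype(np.float64).ravel()
    ssr = np.sum((A - alpha * B - beta) ** 2)   # residual after linear compensation
    sst = np.sum((A - A.mean()) ** 2)           # dispersion of the current background
    return 1.0 - ssr / sst
```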
It should be pointed out that, although in the above embodiment the confidence is calculated based on the deviation between the linearly compensated current frame and the current background and on the dispersion of the current background, the above description is merely an example and the present invention is not limited thereto. The deviation and the dispersion can also be calculated in other ways. The confidence may also be calculated based on only one of the deviation and the dispersion, or based on any combination of one or both of them with other parameters.
Returning to Fig. 3, at step S306, it can be judged whether the calculated confidence is greater than a first threshold.
The first threshold can be a value set in advance according to application needs or empirically.
If the calculated confidence is judged to be less than the first threshold, this indicates that the possibility that a linear relationship exists between the current frame and the current background is small, and thus the possibility that large-area linear noise exists in the frame is not high, so compensation and correction for linear noise are not needed.
If the calculated confidence is judged to be greater than the first threshold, this indicates that the possibility that a linear relationship exists between the current frame and the current background is high, and thus the possibility that large-area linear noise exists in the frame is high, so compensation and correction for linear noise are needed.
As shown in Fig. 3, at step S308, the current frame can be compensated according to the linear coefficient and the intercept calculated at step S302, so as to eliminate the influence of the linear noise in it.
For example, the above formula (4) can be used to compensate each pixel in the current frame for linear noise.
As shown in Fig. 3, at step S310, the foreground is corrected based on the compensated current frame.
Specifically, the compensated current frame and the current background can be subtracted from each other to obtain the corrected foreground, that is, the foreground from which the influence of the linear noise has been eliminated.
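The self-contained sketch below combines the steps of Fig. 3 for whole-frame processing: fit the linear relationship, compensate with formula (4) only when the confidence exceeds a threshold, and re-difference against the background; the threshold values are assumptions.

```python
import numpy as np

def correct_foreground(frame, background, conf_thresh=0.7, fg_thresh=30):
    """Whole-frame linear-noise compensation followed by foreground re-extraction."""
    A = background.astype(np.float64)
    B = frame.astype(np.float64)
    alpha = np.sum((A - A.mean()) * (B - B.mean())) / np.sum((B - B.mean()) ** 2)
    beta = A.mean() - alpha * B.mean()                      # formulas (2) and (3)
    ssr = np.sum((A - alpha * B - beta) ** 2)
    sst = np.sum((A - A.mean()) ** 2)
    confidence = 1.0 - ssr / sst                            # formula (7)
    compensated = alpha * B + beta if confidence > conf_thresh else B   # formula (4)
    diff = np.abs(compensated - A)
    return (diff > fg_thresh).astype(np.uint8) * 255        # corrected foreground mask
```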
In the above embodiment, the processing of eliminating linear noise is performed on the entire frame; this description is also merely an example, and the present invention is not limited thereto. In fact, the processing of eliminating linear noise can also be performed on local parts of the frame.
In such local linear-noise elimination, since the compensation can be performed only on the local parts that are strongly affected by the linear noise, the precision of the linear-noise elimination can be improved on the one hand, and its efficiency can be further improved on the other hand.
The frame can be divided in various suitable ways to obtain the local parts on which the linear-noise elimination is performed.
In one embodiment of the present invention, the frame can be divided into n parts to obtain the local parts on which the linear-noise elimination is performed.
In another embodiment of the present invention, the current frame can also be divided, based on the result of the foreground extraction, into a part corresponding to the foreground and other parts, so as to obtain the local parts on which the linear-noise elimination is performed.
In yet another embodiment of the present invention, the current frame can also be divided according to human bodies using a human-body segmentation method in image processing, so as to obtain the local parts on which the linear-noise elimination is performed.
Fig. 5 is a flowchart illustrating the linear-noise elimination processing according to another embodiment of the present invention. In this embodiment, the linear-noise elimination is performed based on local parts of the frame.
As shown in Fig. 5, at step S502, the linear relationship between a local part of the current frame and the corresponding part of the background can be calculated.
In the concrete computation, the corresponding linear relationships can be calculated for all the local parts into which the frame is divided according to application needs, or the linear relationship can be calculated only for some of the local parts of the frame. This flexible processing allows attention to be focused on the important parts on the one hand, and the amount of computation to be reduced and the efficiency to be improved on the other hand.
For example, in the case where the frame is divided into n parts, the linear relationship between each part and the corresponding part of the background can be calculated.
As another example, in the case where the current frame is divided into a part corresponding to the foreground and other parts, only the linear relationship between the part corresponding to the foreground and the corresponding part of the current background may be calculated.
The method of calculating the linear relationship between the two parts can be similar to the method described above for calculating the linear relationship between the current frame and the current background; the only difference is that the scope of the calculation changes from the whole scene of the frame to a local part of the frame, so the details are not repeated here for the sake of conciseness.
Then, at step S504, the confidence that the linear relationship exists between the local part of the current frame and the corresponding part of the current background can be calculated.
Specifically, the linear relationship calculated at step S502 can be used to further calculate, for each local part for which the linear relationship has been calculated, the confidence that this linear relationship exists between that local part and the corresponding part of the current background.
The method of calculating the confidence that a linear relationship exists between a local part of the current frame and the corresponding part of the current background is similar to the previously described method of calculating the confidence that a linear relationship exists between the current frame and the current background.
Fig. 6 is a flowchart of the processing for calculating the confidence between a local part of the current frame and the corresponding part of the current background according to an embodiment of the present invention.
As shown in Fig. 6, first, at step S602, a linear transformation is applied to the local part of the current frame.
Specifically, the linear relationship calculated at step S502 (for example, the linear coefficient and the intercept) can be used to linearly transform the local part of the current frame.
For the detailed processing of the linear transformation, reference may be made to the details described above in connection with step S402 of Fig. 4; the only difference is that the scope of the calculation changes from the whole scene of the frame to a local part of the frame, so it is not repeated here for the sake of conciseness.
Returning to Fig. 6, at step S604, the difference between the linearly transformed local part of the current frame and the corresponding part of the background can be calculated.
For example, the gray-scale value of each pixel in the linearly transformed local part of the current frame can be subtracted from the gray-scale value of the corresponding pixel in the corresponding part of the current background to obtain a per-pixel difference, and the difference between the linearly transformed local part of the current frame and the corresponding part of the background can then be represented according to the per-pixel differences.
For the detailed processing of calculating the difference between the two parts, reference may be made to the details described above in connection with step S404 of Fig. 4; the only difference is that the scope of the calculation changes from the whole scene of the frame to a local part of the frame, so it is not repeated here for the sake of conciseness.
Referring to Fig. 6, at step S606, the dispersion of the corresponding part of the background can be calculated.
For the detailed processing of calculating the dispersion of the corresponding part of the background, reference may be made to the details described above in connection with step S406 of Fig. 4; the only difference is that the scope of the calculation changes from the whole scene of the frame to a local part of the frame, so it is not repeated here for the sake of conciseness.
Referring to Fig. 6, at step S608, the confidence that a linear relationship exists between the local part of the current frame and the corresponding part of the background can be calculated.
For example, the confidence can be calculated from the difference calculated at step S604 and the dispersion calculated at step S606.
For example, the confidence can be calculated, in the same manner as formula (7), from the difference between the linearly transformed local part of the current frame and the corresponding part of the current background and from the dispersion of the corresponding part of the current background.
For the detailed processing of calculating the confidence, reference may be made to the details described above in connection with step S408 of Fig. 4; the only difference is that the scope of the calculation changes from the whole scene of the frame to a local part of the frame, so it is not repeated here for the sake of conciseness.
In this way, the confidence that a linear relationship exists between the local part of the current frame and the corresponding part of the current background can be calculated.
Referring to Fig. 5, at step S506, it is judged whether the calculated confidence is greater than a second threshold.
Specifically, the second threshold can be a value set in advance for the local part of the current frame according to application needs or empirically. The second threshold may be the same value as the first threshold or a different value. In addition, the same second threshold may be set for all local parts, or a separate second threshold may be set for each local part.
If the calculated confidence is judged to be less than the second threshold, this indicates that the possibility that a linear relationship exists between the local part of the current frame and the corresponding part of the current background is small, so compensation and correction for linear noise are not needed.
If the calculated confidence is judged to be greater than the second threshold, this indicates that the possibility that a linear relationship exists between the local part of the current frame and the corresponding part of the current background is high, so compensation and correction for linear noise need to be performed on this local part of the current frame. Clearly, making this judgment before the compensation and correction can further improve the efficiency.
As shown in Fig. 5, at step S508, the current frame can be compensated according to the linear coefficient and the intercept calculated at step S502, so as to eliminate the influence of the linear noise in it.
For example, the above formula (4) can be used to compensate each pixel in the local part of the current frame for linear noise.
As shown in Fig. 5, at step S510, the foreground is corrected based on the compensated local part of the current frame.
Specifically, the compensated local part of the current frame can be subtracted from the corresponding part of the current background to obtain the corrected foreground, that is, the foreground from which the influence of the linear noise has been eliminated.
Thus, the embodiment shown in Fig. 5 realizes linear-noise elimination based on local parts of the frame.
The above-described method for detecting an abandoned object from video is merely an example; the present invention is not limited thereto, and various modifications can be made. For example, in another embodiment of the present invention, in order to improve the effectiveness and efficiency of the linear-noise elimination, a judgment as to whether the linear-noise elimination needs to be performed may be made before the linear-noise elimination.
Whether the linear-noise elimination needs to be performed can be judged based on various suitable parameters.
In one embodiment of the present invention, considering that linear noise tends to affect a large area and correspondingly causes a large increase in the foreground area, whether the linear-noise elimination needs to be performed can be judged based on the ratio of the foreground area of the current frame to the total scene area of the current frame.
Fig. 7 is a flowchart illustrating the processing for judging whether the linear-noise elimination needs to be performed according to an embodiment of the present invention.
As shown in Fig. 7, at step S702, the ratio of the foreground area of the current frame to the total area of the current frame can be calculated.
Then, at step S704, it can be judged whether this area ratio is greater than a third threshold.
The third threshold can be reasonably set according to the application scenario or an empirical value.
If the area ratio is greater than the third threshold, this indicates that the foreground area in the current frame is excessively large and the possibility that linear noise exists is high, so it can be judged that the linear-noise elimination needs to be performed.
If the area ratio is less than the third threshold, this indicates that the foreground area in the current frame is small and the possibility that linear noise exists is low, so it can be judged that the linear-noise elimination does not need to be performed.
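As an illustration, the area-ratio test of Fig. 7 could be as simple as the following; the threshold of 0.4 is an assumed value, not one given in the patent.

```python
import cv2

def needs_linear_noise_elimination(fg_mask, area_ratio_thresh=0.4):
    """True if the foreground covers an unusually large fraction of the frame."""
    fg_area = cv2.countNonZero(fg_mask)
    total_area = fg_mask.shape[0] * fg_mask.shape[1]
    return fg_area / total_area > area_ratio_thresh
```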
In the above-described method for detecting an abandoned object from video, the long-term static region and the long-term static edge can be determined based on the short-term static region and the short-term static edge.
Specifically, whether each pixel in the short-term static region belongs to the long-term static region can be judged according to the accumulated time for which that pixel remains in the static state and the accumulated time for which it is in the non-static state, and whether each pixel in the short-term static edge belongs to the long-term static edge can be judged in the same way.
Fig. 8 is a flowchart illustrating the processing for determining the long-term static region and the long-term static edge based on the short-term static region and the short-term static edge according to an embodiment of the present invention.
As shown in Fig. 8, at step S802, for each pixel in the short-term static region and the short-term static edge, a weighted accumulated value of the time for which the pixel remains in the static state is calculated.
Then, at step S804, it is judged whether the weighted accumulated value of the pixel exceeds a fourth threshold.
If the weighted accumulated value of the pixel exceeds the fourth threshold, the pixel is correspondingly judged to be a pixel of the long-term static region or of the long-term static edge.
If the weighted accumulated value of the pixel does not exceed the fourth threshold, the pixel is judged not to belong to the long-term static region or the long-term static edge.
In this way, the pixels that are detected as static for most of the time within a period of time can be selected from the short-term static region and the short-term static edge, and these pixels finally constitute the long-term static region and the long-term static edge. The misjudgments caused by the moving foreground (for example, a moving stream of people) are thereby eliminated.
More specifically, the influence of the moving foreground (for example, a moving stream of people) is eliminated by reasonably setting the weights used for calculating the weighted accumulated value.
These weights can, for example, be configured so that, on the one hand, the accumulated value increases while the pixel is in the static state and decreases while the pixel is in the non-static state, and, on the other hand, the accumulated value increases in the static state faster than it decreases in the non-static state (that is, the average weight in the static state is larger than the average weight in the non-static state).
Accordingly, in one embodiment of the present invention, the weight can be set to a larger fixed value while the pixel is in the static state and to a smaller fixed value while the pixel is in the non-static state.
In this way, the rate at which the weighted accumulated value of a pixel increases while the pixel is in the static state is greater than the rate at which it decreases while the pixel is in the non-static state. By accumulating over a period of time, the pixels that remain in the static state for most of that period can thus be selected as pixels of the long-term static region or of the long-term static edge, eliminating the influence of the moving foreground (for example, a moving stream of people).
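A minimal sketch of the fixed-weight variant just described, assuming a per-pixel accumulator updated once per frame; the weight values and the fourth threshold are illustrative assumptions.

```python
import numpy as np

class LongTermStaticAccumulator:
    """Weighted per-pixel time accumulation: larger gain when static, smaller loss when not."""
    def __init__(self, shape, w_static=2.0, w_moving=1.0, fourth_threshold=300.0):
        self.acc = np.zeros(shape, np.float64)
        self.w_static, self.w_moving = w_static, w_moving
        self.threshold = fourth_threshold

    def update(self, short_term_static_mask):
        static = short_term_static_mask > 0
        self.acc[static] += self.w_static     # grows quickly while the pixel stays static
        self.acc[~static] -= self.w_moving    # shrinks more slowly when it does not
        np.clip(self.acc, 0.0, None, out=self.acc)
        return (self.acc > self.threshold).astype(np.uint8) * 255  # long-term static map
```

The same accumulator can be run once on the short-term static region and once on the short-term static edge to obtain the long-term static region and the long-term static edge, respectively.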
In another embodiment of the present invention, the weight can also be set to a varying value while the pixel is in the static state and while it is in the non-static state. In this case, the weight can increase at a faster rate while the pixel is in the static state and decrease at a slower rate while the pixel is in the non-static state, so that the average rate of increase of the weight while the pixel is in the static state is greater than the average rate of decrease while the pixel is in the non-static state.
By setting the weights to have different average rates of change, the rate at which the weighted accumulated value of a pixel increases while the pixel is in the static state can be made greater than the rate at which it decreases while the pixel is in the non-static state. By accumulating over a period of time, the pixels that remain in the static state for most of that period can thus be selected as pixels of the long-term static region or of the long-term static edge, eliminating the influence of the moving foreground (for example, a moving stream of people).
As a concrete example, the weight while the pixel is in the static state can increase in the form of a higher-order exponential function, and the weight while the pixel is in the non-static state can decrease in the form of a lower-order exponential function.
As another concrete example, the weight while the pixel is in the static state can increase in the form of a higher-order parabolic function, and the weight while the pixel is in the non-static state can decrease in the form of a lower-order parabolic function.
The above are merely examples of how the weights may vary (increase or decrease), and the present invention is not limited thereto. The form of increase of the weight while the pixel is in the static state and the form of decrease of the weight while the pixel is in the non-static state can also take any other suitable form or combination, as long as the rate at which the weighted accumulated value increases while the pixel is in the static state is greater than the rate at which it decreases while the pixel is in the non-static state.
With the method for detecting an abandoned object from video according to the above embodiments, an abandoned object can be detected from video without tracking moving targets, while the influence of multiple factors (for example, illumination and crowded pedestrians) is taken into account during detection. The amount of computation can therefore be significantly reduced while high robustness and a low false-alarm rate are achieved.
In one embodiment of the present invention, after the abandoned object has been detected, the region of the detected abandoned object can be further used to adjust the foreground-extraction processing.
Specifically, when updating the background model used for extracting the foreground, the region of the abandoned object can be excluded from the update.
Through such processing, the influence of the region of the abandoned object on the background model can be reduced, and the accuracy of the foreground extraction can in turn be improved.
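A standard mixture-model subtractor typically offers no per-pixel learning-rate control, so one hedged work-around for the selective update described above is to feed the subtractor its own background estimate inside the abandoned-object region, as sketched below; this trick is an assumption of this sketch (and assumes the subtractor has been fed grayscale frames), not a mechanism prescribed by the patent.

```python
def update_background_excluding(subtractor, frame_gray, abandoned_mask):
    """Update the background model while keeping the abandoned-object region frozen."""
    background = subtractor.getBackgroundImage()
    fed = frame_gray.copy()
    if background is not None:
        keep = abandoned_mask > 0
        fed[keep] = background[keep]   # the model sees its old background inside the object
    return subtractor.apply(fed)       # foreground mask from the masked update
```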
Corresponding to the above-described method, the present invention also provides an apparatus for detecting an abandoned object from video.
Fig. 9 is a schematic diagram of an apparatus for detecting an abandoned object from video according to an embodiment of the present invention.
As shown in Fig. 9, the apparatus for detecting an abandoned object from video according to an embodiment of the present invention can comprise a foreground extraction unit 902, a short-term static region and short-term static edge extraction unit 904, a long-term static region and long-term static edge determination unit 906, and an abandoned-object determination unit 908.
The foreground extraction unit 902 can extract the foreground from the current frame according to the background model.
In one embodiment of the present invention, the foreground extraction unit 902 can perform background modeling by the known GMM (Gaussian mixture model) method, and extract the foreground by subtracting the current background from the current frame.
The short-term static region and short-term static edge extraction unit 904 can extract the short-term static region and the short-term static edge from the foreground.
In one embodiment of the present invention, the short-term static region and short-term static edge extraction unit 904 can extract the short-term static region from the foreground by inter-frame differencing, and can use the Canny algorithm to extract edges from the foreground. The order in which the unit 904 extracts the short-term static region and the short-term static edge is not limited; the two extractions can be executed in parallel or one after the other.
The content described in the above embodiment is merely an example, and the present invention is not limited thereto; the short-term static region and short-term static edge extraction unit 904 can also use other methods, known or to appear in the future, for extracting a short-term static region and a short-term static edge from the foreground.
In addition, in the above embodiment, the processing of extracting the short-term static region and the processing of extracting the short-term static edge are both performed on the foreground of the entire frame, but the present invention is not limited thereto. For example, in another embodiment of the present invention, the short-term static region and short-term static edge extraction unit 904 may extract the short-term static edge only within the short-term static region.
In this case, the short-term static region and short-term static edge extraction unit 904 needs to extract the short-term static region before extracting the short-term static edge. By extracting the short-term static edge only within the short-term static region, the amount of computation can be reduced and the efficiency can be improved.
The long-term static region and long-term static edge determination unit 906 can determine the long-term static region and the long-term static edge based on the short-term static region and the short-term static edge.
Specifically, whether each pixel in the short-term static region belongs to the long-term static region can be judged according to the accumulated time for which that pixel remains in the static state and the accumulated time for which it is in the non-static state, and whether each pixel in the short-term static edge belongs to the long-term static edge can be judged in the same way, whereby the long-term static region and the long-term static edge can be obtained.
In this way, the influence of the moving foreground (for example, a moving stream of people) can be effectively eliminated, greatly improving the accuracy of abandoned-object detection.
The abandoned-object determination unit 908 can determine the abandoned object from the long-term static region according to the long-term static edge.
Specifically, the abandoned-object determination unit 908 can further determine the abandoned object according to the correspondence between the long-term static region and the long-term static edge (that is, a long-term static region should have a long-term static edge adjacent to it).
More specifically, the abandoned-object determination unit 908 can determine a long-term static region having an adjacent long-term static edge to be an abandoned object, and discard a long-term static region having no long-term static edge as noise.
It can be seen that the apparatus for detecting an abandoned object from video according to the embodiment shown in Fig. 9 can detect an abandoned object from video without tracking moving targets, while taking into account the influence of crowded pedestrians and the like during detection. The amount of computation can therefore be significantly reduced while high robustness and a low false-alarm rate are achieved.
The apparatus for detecting an abandoned object from video shown in Fig. 9 is merely an example; the present invention is not limited thereto, and various modifications can be made. For example, in another embodiment of the present invention, the apparatus for detecting an abandoned object from video may further comprise a linear-noise elimination unit.
The linear-noise elimination unit may eliminate linear noise before the foreground extraction, or may eliminate the noise after the foreground extraction. Through the processing of the linear-noise elimination unit, the influence of linear-noise sources (for example, illumination) can be eliminated, so the accuracy of detecting an abandoned object from video can be improved and the false-alarm rate reduced.
The linear-noise elimination processing performed by the linear-noise elimination unit can be performed on the whole scene of the current frame, or on one or more parts of the current frame.
In one embodiment of the present invention, the linear-noise elimination unit can eliminate the influence of linear noise on the entire frame.
Fig. 10 is a schematic diagram of the linear-noise elimination unit according to this embodiment.
As shown in Fig. 10, the linear-noise elimination unit can comprise a linear relationship calculation unit 1002, a confidence calculation unit 1004, a compensation unit 1006, and a foreground correction unit 1008.
The linear relationship calculation unit 1002 can calculate the linear relationship between the current frame and the current background.
For example, the linear relationship calculation unit 1002 can use the least-squares method to calculate the linear relationship, whereby the linear coefficient and the intercept representing the linear relationship between the current frame and the current background can be obtained.
The confidence calculation unit 1004 can calculate the confidence that the linear relationship exists between the current frame and the current background.
Through the processing of the confidence calculation unit 1004, the accuracy of the judgment as to whether a linear relationship exists between the current frame and the current background can be improved.
The confidence calculation unit 1004 can calculate the confidence that the linear relationship exists between the current frame and the current background based on various suitable parameters.
In one embodiment of the present invention, the confidence calculation unit can calculate the confidence based on the deviation between the linearly compensated current frame and the current background and on the dispersion of the current background.
Fig. 11 is a schematic structural diagram of the confidence calculation unit according to this embodiment.
As shown in Fig. 11, the confidence calculation unit according to this embodiment can comprise a linear transformation unit 1102, a difference calculation unit 1104, a dispersion calculation unit 1106, and a comprehensive calculation unit 1108.
The linear transformation unit 1102 can linearly transform the current frame using the calculated linear coefficient and intercept.
The difference calculation unit 1104 can calculate the difference between the linearly transformed current frame and the background of the current frame.
For example, the gray-scale value of each pixel in the linearly transformed current frame can be subtracted from the gray-scale value of the corresponding pixel in the current background to obtain a per-pixel difference, and the difference between the linearly transformed current frame and the current background can then be represented according to the per-pixel differences.
For example, the difference between the linearly transformed current frame and the current background can be calculated according to formula (5) described above.
The dispersion calculation unit 1106 can calculate the dispersion of the current background.
For example, the dispersion can be calculated according to formula (6) described above.
The comprehensive calculation unit 1108 can calculate the confidence according to the difference and the dispersion.
For example, the confidence can be calculated according to formula (7) described above.
Returning to Figure 10, if the confidence calculated by the confidence calculation unit 1004 exceeds a first threshold, the possibility that a linear relationship exists between the current frame and the current background is high, that is, the possibility that large-area illumination noise exists in the frame is high, so compensation and correction for the illumination noise are performed by the compensation unit 1006 and the foreground correction unit 1008.
Specifically, the compensation unit 1006 may compensate the current frame based on the calculated linear relationship. For example, each pixel in the current frame may be compensated for illumination noise using formula (4) described above.
The foreground correction unit 1008 may correct the foreground based on the compensated current frame. Specifically, the compensated current frame and the current background may be subtracted from each other, thereby obtaining a corrected foreground, i.e., a foreground from which the influence of the illumination noise has been removed.
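Formula (4) itself appears earlier in the specification and is not repeated here. As a minimal sketch only, assuming the compensation simply inverts the fitted linear change and that the foreground is re-derived by thresholding the frame-background difference, the two steps could look like this; the threshold value is an illustrative assumption.

```python
import numpy as np

def compensate_and_correct(frame, background, a, b, fg_threshold=25):
    """Compensate the current frame for illumination noise and recompute the
    foreground as the thresholded difference to the current background.

    Assumes frame ~ a * background + b, so compensation inverts that mapping;
    `fg_threshold` is an illustrative value, not from the specification.
    """
    compensated = (frame.astype(np.float64) - b) / a            # undo the fitted linear change
    diff = np.abs(compensated - background.astype(np.float64))  # per-pixel deviation
    corrected_foreground = diff > fg_threshold                  # boolean foreground mask
    return compensated, corrected_foreground
```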
In the above description, the illumination noise elimination unit eliminates illumination noise over the entire frame; however, this is merely an example, and the invention is not limited thereto. The illumination noise elimination unit may also eliminate illumination noise on the basis of local parts of the frame.
Figure 12 shows a schematic structural diagram of an illumination noise elimination unit according to an embodiment of the invention that eliminates illumination noise on the basis of local parts of the frame. As shown in Figure 12, the illumination noise elimination unit according to this embodiment may include a local linear relationship calculation unit 1202, a local confidence calculation unit 1204, a local compensation unit 1206, and a foreground correction unit 1208.
The local linear relationship calculation unit 1202 may calculate the linear relationship between a local part of the current frame and the corresponding part of the background. Specifically, the local linear relationship calculation unit 1202 may calculate the linear relationship for all local parts into which the frame is divided, or, depending on the needs of the application, only for some of the local parts; a block-wise sketch is given below. This flexible processing focuses attention on the parts of interest on the one hand, and reduces the amount of computation and improves efficiency on the other.
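By way of illustration only, the block-wise variant could be sketched as follows; the block size, the choice of iterating over all blocks, and the flat-block skip are assumptions, not taken from the specification.

```python
import numpy as np

def fit_local_linear_relations(frame, background, block=32):
    """Fit frame ~ a * background + b independently on each block.

    Returns a dict mapping the (row, col) index of each block's top-left
    corner to the fitted (a, b); nearly flat blocks are skipped because the
    fit is ill-posed there.
    """
    relations = {}
    h, w = frame.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            x = background[r:r + block, c:c + block].astype(np.float64).ravel()
            y = frame[r:r + block, c:c + block].astype(np.float64).ravel()
            if np.std(x) > 1e-6:              # skip flat blocks
                a, b = np.polyfit(x, y, 1)
                relations[(r, c)] = (a, b)
    return relations
```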
The local confidence calculation unit 1204 may calculate the confidence that the linear relationship holds between a local part of the current frame and the corresponding part of the current background. Specifically, for each local part for which the local linear relationship calculation unit 1202 has calculated a linear relationship, the local confidence calculation unit 1204 may use that relationship to further calculate the confidence that the relationship holds between the local part and the corresponding part of the current background.
Figure 13 shows a schematic structural diagram of the local confidence calculation unit according to an embodiment of the invention. As shown in Figure 13, the local confidence calculation unit may include a local linear transformation unit 1302, a local difference calculation unit 1304, a local equilibrium degree calculation unit 1306, and a comprehensive calculation unit 1308.
The local linear transformation unit 1302 may apply a linear transformation to the local part of the current frame using the calculated linear relationship (for example, the linear coefficient and the intercept).
The local difference calculation unit 1304 may calculate the difference between the linearly transformed local part of the current frame and the corresponding part of the background. For example, the gray value of each pixel in the linearly transformed local part of the current frame may be subtracted from the gray value of the corresponding pixel in the corresponding part of the current background, and the resulting per-pixel differences may be used to represent the difference between the linearly transformed local part of the current frame and the corresponding part of the background.
The local equilibrium degree calculation unit 1306 may calculate the equilibrium degree of the corresponding part of the background. The comprehensive calculation unit 1308 may then calculate the confidence based on the calculated difference and the calculated equilibrium degree.
In this way, the confidence that a linear relationship exists between a local part of the current frame and the corresponding part of the current background can be calculated.
Returning to Figure 12, if the confidence calculated by the local confidence calculation unit 1204 exceeds a second threshold, the possibility that a linear relationship exists between the local part of the current frame and the corresponding part of the current background is high, that is, the possibility that large-area illumination noise exists in that local part of the frame is high, so compensation and correction for the illumination noise are performed by the local compensation unit 1206 and the foreground correction unit 1208.
Specifically, the local compensation unit 1206 may compensate the local part of the current frame based on the calculated linear relationship. For example, each pixel in the local part of the current frame may be compensated for illumination noise using formula (4) described above.
The foreground correction unit 1208 may correct the foreground based on the compensated current frame. Specifically, the compensated local part of the current frame and the corresponding part of the current background may be subtracted from each other, thereby obtaining a corrected foreground, i.e., a foreground from which the influence of the illumination noise has been removed.
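Continuing the block-wise sketch above (again as an illustration only), only the blocks whose confidence exceeds the second threshold are compensated and have their foreground recomputed; the block size, foreground threshold, and the dictionary-based bookkeeping are assumptions.

```python
import numpy as np

def compensate_flagged_blocks(frame, background, relations, confidences,
                              second_threshold, block=32, fg_threshold=25):
    """Compensate only the blocks whose confidence exceeds the second
    threshold, then recompute the foreground for those blocks.

    `relations` and `confidences` are dicts keyed by each block's top-left
    corner, as produced by the block-wise fit above.
    """
    compensated = frame.astype(np.float64).copy()
    foreground = np.abs(compensated - background) > fg_threshold
    for (r, c), (a, b) in relations.items():
        if confidences.get((r, c), 0.0) > second_threshold:
            patch = (frame[r:r + block, c:c + block] - b) / a   # undo the linear change locally
            compensated[r:r + block, c:c + block] = patch
            foreground[r:r + block, c:c + block] = (
                np.abs(patch - background[r:r + block, c:c + block]) > fg_threshold
            )
    return compensated, foreground
```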
The above-described device for detecting an abandoned object from a video is merely an example; the invention is not limited thereto, and various modifications and variations are possible. For example, in another embodiment of the invention, in order to improve the effectiveness and efficiency of the illumination noise elimination, the device may further include a judgment unit for judging whether the illumination noise elimination needs to be performed.
The judgment unit may make this judgment based on any suitable parameters. In one embodiment of the invention, considering that illumination noise tends to affect a large area and therefore causes a large increase in the foreground area, the judgment unit may judge whether the illumination noise elimination needs to be performed based on the area ratio between the foreground area of the current frame and the total area of the current frame.
Figure 14 shows a schematic diagram of the judgment unit according to an embodiment of the invention. As shown in Figure 14, the judgment unit may include an area ratio calculation unit 1402 and a judgment execution unit 1404.
The area ratio calculation unit 1402 may calculate the area ratio between the foreground area of the current frame and the total area of the current frame. The judgment execution unit 1404 may judge whether this area ratio is greater than a third threshold, which may be set reasonably according to the application scenario or empirical values.
If the area ratio is greater than the third threshold, the foreground area of the current frame is excessively large and the possibility that illumination noise exists is high, so it is judged that the illumination noise elimination needs to be performed. If the area ratio is not greater than the third threshold, the foreground area of the current frame is small and the possibility that illumination noise exists is low, so it is judged that the illumination noise elimination does not need to be performed.
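A minimal sketch of this judgment follows; the concrete threshold value is an illustrative assumption to be tuned per scenario.

```python
import numpy as np

def needs_illumination_noise_elimination(foreground_mask, third_threshold=0.5):
    """Judge whether illumination noise elimination should run, based on the
    ratio of foreground area to the total frame area.

    `foreground_mask` is a boolean array; `third_threshold` is only an
    illustrative value to be set per application scenario.
    """
    area_ratio = np.count_nonzero(foreground_mask) / foreground_mask.size
    return area_ratio > third_threshold
```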
In the above-described device for detecting an abandoned object from a video, the long-term still area and long-term static edge determination unit may determine the long-term still area and the long-term static edge based on the short-term still area and the short-term static edge.
Specifically, the long-term still area and long-term static edge determination unit may judge whether a pixel belongs to the long-term still area according to the accumulated time for which each pixel in the short-term still area has remained static, and judge whether a pixel belongs to the long-term static edge according to the accumulated time for which each pixel in the short-term static edge has remained static.
Figure 15 shows a schematic structural diagram of the long-term still area and long-term static edge determination unit according to an embodiment of the invention. As shown in Figure 15, this unit may include a weighted accumulation calculation unit 1502 and a pixel determination unit 1504.
The weighted accumulation calculation unit 1502 may calculate, for each pixel in the short-term still area and the short-term static edge, a weighted accumulated value of the time for which the pixel has remained static. The pixel determination unit 1504 may judge whether the weighted accumulated value of a pixel exceeds a fourth threshold. If the weighted accumulated value of a pixel exceeds the fourth threshold, the pixel determination unit 1504 may determine that the pixel belongs to the long-term still area or the long-term static edge, respectively. If the weighted accumulated value of a pixel does not exceed the fourth threshold, the pixel is determined not to belong to the long-term still area or the long-term static edge.
In this way, the long-term still area and long-term static edge determination unit selects, from the short-term still area and the short-term static edge, the pixels that have been detected as static for most of the time within a period, and these pixels finally constitute the long-term still area and the long-term static edge. Misjudgments caused by moving foreground (for example, a moving stream of pedestrians) are thereby eliminated.
More specifically, the influence of moving foreground (for example, a moving stream of pedestrians) is eliminated by reasonably setting the weights that the weighted accumulation calculation unit uses to calculate the weighted accumulated value. The weights may, for example, be configured so that, on the one hand, the accumulated value increases while the pixel is static and decreases while the pixel is non-static, and, on the other hand, the accumulated value grows faster in the static state than it shrinks in the non-static state (that is, the average weight in the static state is larger than the average weight in the non-static state).
Thus, in one embodiment of the invention, the weight may be set to a larger fixed value while the pixel is static and to another, smaller fixed value while the pixel is non-static. In this way, the rate at which the weighted accumulated value of the static time increases is greater than the rate at which it decreases while the pixel is non-static. By accumulating over a period of time, the pixels that have remained static for most of that period can be selected as pixels of the long-term still area or of the long-term static edge, thereby eliminating the influence of moving foreground (for example, a moving stream of pedestrians).
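As an illustration of the fixed-weight embodiment, one per-frame update of the accumulated value could be sketched as follows; the concrete weight values, the non-negativity clamp, and the fourth threshold are assumptions.

```python
import numpy as np

def update_weighted_accumulation(acc, static_mask,
                                 w_static=2.0, w_nonstatic=1.0,
                                 fourth_threshold=150.0):
    """One per-frame update of the weighted accumulated static time.

    `acc` holds the accumulated value per pixel; `static_mask` marks pixels
    currently detected as static (short-term still area / static edge).
    The accumulated value grows by `w_static` for static pixels and shrinks
    by `w_nonstatic` otherwise, so static time outweighs non-static time.
    """
    acc = np.where(static_mask, acc + w_static, acc - w_nonstatic)
    acc = np.maximum(acc, 0.0)                   # keep the accumulation non-negative
    long_term_mask = acc > fourth_threshold      # pixels of the long-term still area/edge
    return acc, long_term_mask
```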
In another embodiment of the invention, the weight may instead be a varying value both while the pixel is static and while it is non-static, where the weight increases relatively quickly while the pixel is static and decreases relatively slowly while the pixel is non-static, so that the average growth rate of the weight in the static state is greater than the average decay rate in the non-static state. With the weights set to have different average rates of change, the weighted accumulated value of the static time again increases faster than it decreases during non-static time. By accumulating over a period of time, the pixels that have remained static for most of that period can be selected as pixels of the long-term still area or of the long-term static edge, thereby eliminating the influence of moving foreground (for example, a moving stream of pedestrians).
As a concrete example, the weight while the pixel is static may increase in the form of a higher-order exponential function, and the weight while the pixel is non-static may decrease in the form of a lower-order exponential function. As another concrete example, the weight while the pixel is static may increase in the form of a higher-order parabolic function, and the weight while the pixel is non-static may decrease in the form of a lower-order parabolic function.
The above are merely examples of how the weight may vary (increase or decrease); the invention is not limited thereto. The increasing form of the weight while the pixel is static and the decreasing form of the weight while the pixel is non-static may take any other suitable form or combination, as long as the weighted accumulated value of the static time increases faster than it decreases during non-static time.
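Purely as a sketch of the varying-weight embodiment, the per-frame increment of the accumulated value could depend on the length of the current static or non-static run; the exponential forms and the rate constants below are placeholders for the higher-/lower-order functions mentioned above, not values from the specification.

```python
import numpy as np

def varying_increment(run_length, is_static, k_static=0.2, k_nonstatic=0.05):
    """Illustrative signed per-frame increment of the accumulated value.

    The increment grows quickly with the length of the current static run
    and decays slowly (negative increment) with the length of the current
    non-static run, so the average weight is larger in the static state.
    """
    if is_static:
        return np.exp(k_static * run_length)        # fast growth while static
    return -np.exp(k_nonstatic * run_length)        # slow decrease while non-static
```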
The device for detecting an abandoned object from a video according to the above embodiments can detect an abandoned object from the video without tracking moving targets, while taking into account the influence of various factors (for example, illumination and crowded pedestrians) during detection; the amount of computation can therefore be greatly reduced while achieving high robustness and a low false alarm rate.
In one embodiment of the invention, after an abandoned object is detected, the region of the detected abandoned object may further be used to adjust the foreground extraction. Specifically, during the update of the background model used for foreground extraction, the region of the abandoned object is not updated. With this processing, the influence of the abandoned object region on the background model is reduced, which in turn improves the accuracy of the foreground extraction.
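A minimal sketch of such a masked background update follows; the running-average form of the update and the learning rate are assumptions, since the specification only requires that the abandoned-object region be excluded from the update.

```python
import numpy as np

def update_background(background, frame, abandoned_mask, alpha=0.05):
    """Running-average background update that skips the abandoned-object
    region, so the abandoned object is not absorbed into the background.
    """
    frame = frame.astype(np.float64)
    updated = (1.0 - alpha) * background + alpha * frame
    return np.where(abandoned_mask, background, updated)  # keep old values inside the mask
```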
In addition, it should be understood that the examples and embodiments described herein are all exemplary, and the invention is not limited thereto. In this specification, expressions such as "first" and "second" are used only to distinguish the described features literally so as to describe the invention clearly; they should not be regarded as having any limiting meaning.
Each of the modules and units in the above-described device may be configured by means of software, firmware, hardware, or a combination thereof. The specific means or manner of such configuration is well known to those skilled in the art and is not repeated here. In the case of implementation by software or firmware, a program constituting the software is installed from a storage medium or a network into a computer having a dedicated hardware structure (for example, the general-purpose computer 1600 shown in Figure 16), and the computer can execute various functions when various programs are installed in it.
In Figure 16, a central processing unit (CPU) 1601 executes various kinds of processing according to a program stored in a read-only memory (ROM) 1602 or a program loaded from a storage section 1608 into a random access memory (RAM) 1603. Data required when the CPU 1601 executes the various kinds of processing is also stored in the RAM 1603 as needed. The CPU 1601, the ROM 1602, and the RAM 1603 are connected to one another via a bus 1604. An input/output interface 1605 is also connected to the bus 1604.
The following components are connected to the input/output interface 1605: an input section 1606 (including a keyboard, a mouse, and the like), an output section 1607 (including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), a speaker, and the like), the storage section 1608 (including a hard disk and the like), and a communication section 1609 (including a network interface card such as a LAN card, a modem, and the like). The communication section 1609 performs communication processing via a network such as the Internet. A drive 1610 may also be connected to the input/output interface 1605 as needed. A removable medium 1611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1610 as needed, so that a computer program read therefrom is installed into the storage section 1608 as needed.
In the case where the above series of processing is implemented by software, the program constituting the software is installed from a network such as the Internet or from a storage medium such as the removable medium 1611.
Those skilled in the art will understand that this storage medium is not limited to the removable medium 1611 shown in Figure 16, in which the program is stored and which is distributed separately from the device so as to provide the program to the user. Examples of the removable medium 1611 include a magnetic disk (including a floppy disk (registered trademark)), an optical disk (including a compact disc read-only memory (CD-ROM) and a digital versatile disc (DVD)), a magneto-optical disk (including a mini-disc (MD) (registered trademark)), and a semiconductor memory. Alternatively, the storage medium may be the ROM 1602, a hard disk contained in the storage section 1608, or the like, in which the program is stored and which is distributed to the user together with the device containing it.
The invention also proposes a program product storing machine-readable instruction codes. When the instruction codes are read and executed by a machine, the above-described method according to the embodiments of the invention can be performed.
Correspondingly, a storage medium for carrying the program product storing the machine-readable instruction codes is also included in the disclosure of the invention. The storage medium includes, but is not limited to, a floppy disk, an optical disk, a magneto-optical disk, a memory card, a memory stick, and the like.
Finally, it should also be noted that the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Furthermore, in the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
Although the embodiments of the invention have been described in detail above with reference to the accompanying drawings, it should be understood that the embodiments described above are merely intended to explain the invention and are not to be construed as limiting it. Those skilled in the art may make various modifications and changes to the above embodiments without departing from the spirit and scope of the invention. Therefore, the scope of the invention is defined only by the appended claims and their equivalents.

Claims (20)

1. A method for detecting an abandoned object from a video, comprising:
extracting a foreground from a current frame according to a background model;
extracting a short-term still area and a short-term static edge from the foreground;
further determining a long-term still area and a long-term static edge based on the short-term still area and the short-term static edge; and
determining the abandoned object from the long-term still area according to the long-term static edge.
2. The method for detecting an abandoned object from a video according to claim 1, further comprising processing of eliminating illumination noise, which comprises:
calculating a linear relationship between the current frame and a current background;
calculating a confidence that the linear relationship exists between the current frame and the current background;
compensating the current frame based on the calculated linear relationship when the confidence is higher than a first threshold; and
correcting the foreground based on the compensated current frame.
3. The method for detecting an abandoned object from a video according to claim 2, wherein calculating the confidence that the linear relationship exists between the current frame and the current background comprises:
performing a linear transformation on the current frame according to the linear relationship;
calculating a difference between the linearly transformed current frame and the current background;
calculating an equilibrium degree of the current background; and
calculating the confidence according to the difference and the equilibrium degree.
4. The method for detecting an abandoned object from a video according to claim 1, further comprising processing of eliminating illumination noise, which comprises:
calculating a linear relationship between a part of the current frame and a corresponding part of a current background;
calculating a confidence that the linear relationship exists between the part of the current frame and the corresponding part of the current background;
compensating, based on the calculated linear relationship, the part of the current frame for which the confidence is higher than a second threshold; and
correcting the foreground based on the compensated current frame.
5. The method for detecting an abandoned object from a video according to claim 4, wherein calculating the confidence that the linear relationship exists between the part of the current frame and the corresponding part of the current background comprises:
performing a linear transformation on the part of the current frame according to the linear relationship;
calculating a difference between the linearly transformed part of the current frame and the corresponding part of the current background;
calculating an equilibrium degree of the corresponding part of the current background; and
calculating the confidence according to the difference and the equilibrium degree.
6. The method for detecting an abandoned object from a video according to any one of claims 2 to 5, further comprising, before the processing of eliminating illumination noise, processing of judging whether the elimination of illumination noise needs to be performed, which comprises:
calculating an area ratio between the foreground area of the current frame and the area of the current frame;
judging that the elimination of illumination noise does not need to be performed if the area ratio does not exceed a third threshold; and
judging that the elimination of illumination noise needs to be performed if the area ratio is greater than the third threshold.
7. The method for detecting an abandoned object from a video according to claim 1, wherein further determining the long-term still area and the long-term static edge based on the short-term still area and the short-term static edge comprises:
calculating, for each pixel in the short-term still area and the short-term static edge, a weighted accumulated value of the time for which the pixel remains static; and
determining the pixel as a pixel of the long-term still area or of the long-term static edge when the weighted accumulated value of the time for which the pixel remains static exceeds a fourth threshold.
8. The method for detecting an abandoned object from a video according to claim 7, wherein calculating the weighted accumulated value of the time for which each pixel in the short-term still area and the short-term static edge remains static comprises:
setting a weight for calculating the weighted accumulated value of the time, the weight causing the weighted accumulated value of the time to increase while the pixel is static and to decrease while the pixel is non-static, with the average weight in the static state being larger than the average weight in the non-static state.
9. The method for detecting an abandoned object from a video according to claim 1, wherein determining the abandoned object from the long-term still area according to the long-term static edge comprises:
determining, as the abandoned object, the long-term still area corresponding to the long-term static edge.
10. The method for detecting an abandoned object from a video according to claim 1, further comprising: not updating the background model used for determining the foreground in the region where the abandoned object is located.
11. A device for detecting an abandoned object from a video, comprising:
a foreground extraction unit configured to extract a foreground from a current frame according to a background model;
a short-term still area and short-term static edge extraction unit configured to extract a short-term still area and a short-term static edge from the foreground;
a long-term still area and long-term static edge determination unit configured to further determine a long-term still area and a long-term static edge based on the short-term still area and the short-term static edge; and
an abandoned object determination unit configured to determine the abandoned object from the long-term still area according to the long-term static edge.
12. The device for detecting an abandoned object from a video according to claim 11, further comprising an illumination noise elimination unit, which comprises:
a linear relationship calculation unit configured to calculate a linear relationship between the current frame and a current background;
a confidence calculation unit configured to calculate a confidence that the linear relationship exists between the current frame and the current background;
a compensation unit configured to compensate the current frame based on the calculated linear relationship when the confidence is higher than a first threshold; and
a foreground correction unit configured to correct the foreground based on the compensated current frame.
13. The device for detecting an abandoned object from a video according to claim 12, wherein the confidence calculation unit comprises:
a linear transformation unit configured to perform a linear transformation on the current frame according to the linear relationship;
a difference calculation unit configured to calculate a difference between the linearly transformed current frame and the current background;
an equilibrium degree calculation unit configured to calculate an equilibrium degree of the current background; and
a comprehensive calculation unit configured to calculate the confidence according to the difference and the equilibrium degree.
14. The device for detecting an abandoned object from a video according to claim 11, further comprising an illumination noise elimination unit, which comprises:
a local linear relationship calculation unit configured to calculate a linear relationship between a part of the current frame and a corresponding part of a current background;
a local confidence calculation unit configured to calculate a confidence that the linear relationship exists between the part of the current frame and the corresponding part of the current background;
a local compensation unit configured to compensate, based on the calculated linear relationship, the part of the current frame for which the confidence is higher than a second threshold; and
a foreground correction unit configured to correct the foreground based on the compensated current frame.
15. The device for detecting an abandoned object from a video according to claim 14, wherein the local confidence calculation unit comprises:
a local linear transformation unit configured to perform a linear transformation on the part of the current frame according to the linear relationship;
a local difference calculation unit configured to calculate a difference between the linearly transformed part of the current frame and the corresponding part of the current background;
a local equilibrium degree calculation unit configured to calculate an equilibrium degree of the corresponding part of the current background; and
a comprehensive calculation unit configured to calculate the confidence according to the difference and the equilibrium degree.
16. The device for detecting an abandoned object from a video according to any one of claims 12 to 15, further comprising a judgment unit for judging whether the elimination of illumination noise needs to be performed, which comprises:
an area ratio calculation unit configured to calculate an area ratio between the foreground area of the current frame and the area of the current frame; and
a judgment execution unit configured to judge that the elimination of illumination noise does not need to be performed if the area ratio does not exceed a third threshold, and to judge that the elimination of illumination noise needs to be performed if the area ratio is greater than the third threshold.
17. The device for detecting an abandoned object from a video according to claim 11, wherein the long-term still area and long-term static edge determination unit comprises:
a weighted accumulation calculation unit configured to calculate, for each pixel in the short-term still area and the short-term static edge, a weighted accumulated value of the time for which the pixel remains static; and
a pixel determination unit configured to determine the pixel as a pixel of the long-term still area or of the long-term static edge when the weighted accumulated value of the time for which the pixel remains static exceeds a fourth threshold.
18. The device for detecting an abandoned object from a video according to claim 17, wherein the weighted accumulation calculation unit comprises:
a weight setting unit configured to set a weight for calculating the weighted accumulated value of the time, the weight causing the weighted accumulated value of the time to increase while the pixel is static and to decrease while the pixel is non-static, with the average weight in the static state being larger than the average weight in the non-static state.
19. The device for detecting an abandoned object from a video according to claim 11, wherein the abandoned object determination unit is further configured to determine, as the abandoned object, the long-term still area corresponding to the long-term static edge.
20. The device for detecting an abandoned object from a video according to claim 11, wherein the background model used for determining the foreground is not updated in the region where the determined abandoned object is located.
CN201110166825.7A 2011-06-13 2011-06-13 The method and apparatus that abandon is detected from video Expired - Fee Related CN102831384B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110166825.7A CN102831384B (en) 2011-06-13 2011-06-13 The method and apparatus that abandon is detected from video

Publications (2)

Publication Number Publication Date
CN102831384A true CN102831384A (en) 2012-12-19
CN102831384B CN102831384B (en) 2018-01-23

Family

ID=47334515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110166825.7A Expired - Fee Related CN102831384B (en) 2011-06-13 2011-06-13 The method and apparatus that abandon is detected from video

Country Status (1)

Country Link
CN (1) CN102831384B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101635026B (en) * 2008-07-23 2012-05-23 中国科学院自动化研究所 Method for detecting derelict without tracking process

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103714325A (en) * 2013-12-30 2014-04-09 中国科学院自动化研究所 Left object and lost object real-time detection method based on embedded system
CN103714325B (en) * 2013-12-30 2017-01-25 中国科学院自动化研究所 Left object and lost object real-time detection method based on embedded system
CN110852253A (en) * 2019-11-08 2020-02-28 杭州宇泛智能科技有限公司 Ladder control scene detection method and device and electronic equipment

Also Published As

Publication number Publication date
CN102831384B (en) 2018-01-23


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180123

Termination date: 20210613