CN202870907U - Foreground detection device and system - Google Patents

Foreground detection device and system

Info

Publication number
CN202870907U
Authority
CN
China
Prior art keywords
foreground
model
utility
image
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201220199715
Other languages
Chinese (zh)
Inventor
Zheng Changchun
Xu Mingjian
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN BELLSENT INTELLIGENT SYSTEM CO Ltd
Original Assignee
SHENZHEN BELLSENT INTELLIGENT SYSTEM CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN BELLSENT INTELLIGENT SYSTEM CO Ltd filed Critical SHENZHEN BELLSENT INTELLIGENT SYSTEM CO Ltd
Priority to CN 201220199715 priority Critical patent/CN202870907U/en
Application granted granted Critical
Publication of CN202870907U publication Critical patent/CN202870907U/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The utility model provides a foreground detection device comprising: a video image acquisition unit for continuously acquiring video images of a monitored region; an edge contour acquisition unit for detecting an edge contour map in each frame of video image and calculating the dwell time of each pixel across multiple frames of edge contour maps; a critical value judgment unit for judging whether the dwell time is less than a prescribed critical value; a background modeling unit for identifying background pixels and modeling them; a foreground modeling unit for identifying foreground pixels and modeling the foreground to form an initial foreground image; and a foreground image division unit for dividing the initial foreground image into multiple activity regions, each corresponding to a different learning rate. The utility model further discloses a foreground detection system. By applying different learning rates to different moving foregrounds, the device and system achieve good adaptability and real-time performance.

Description

Foreground detection device and system
Technical field
The utility model relates to the field of video surveillance, and in particular to a foreground detection device and system.
Background technology
In recent years, with the development of computer vision and artificial intelligence, foreground detection algorithms based on intelligent video analysis have emerged in large numbers. Foreground detection extracts the region corresponding to a target object from a surveillance image sequence; in traffic monitoring, for example, it separates the pedestrians and vehicles in the scene from the monitored image sequence. Current foreground detection algorithms fall broadly into three classes: (1) background subtraction; (2) inter-frame differencing; (3) optical flow. For real-time monitoring, background subtraction is generally used to extract the foreground.
The inter-frame difference method analyzes the motion characteristics of a video image sequence using the absolute value of the luminance difference between frames, thereby determining whether objects are moving in the image. Its detection rests on the principle that the gray value and position of background pixels remain unchanged. In the simplest form, the absolute difference of two adjacent frames is computed, and the difference value at each pixel is compared against a threshold: if it exceeds the threshold, motion is present; otherwise it is not. The inter-frame difference method has low computational complexity, but suffers from several problems. First, the detected target contains information that changed in both frames, producing extra target points, so the detected target is larger than the real target. Second, the overlapping part of the target between two frames is difficult to detect. Third, when part of a foreground object has the same or a similar gray value as the background, that part of the moving target will fail to be detected.
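The simple inter-frame difference described above can be sketched as follows (an illustrative example, not code from the patent; the frame sizes and threshold value are invented):

```python
# Inter-frame difference sketch: subtract two consecutive grayscale frames
# pixel-by-pixel and threshold the absolute difference to flag motion.

def frame_difference(prev_frame, curr_frame, threshold=25):
    """Return a binary motion mask: 1 where |curr - prev| > threshold."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_frame, curr_frame)
    ]

# Two tiny 2x3 "frames": one pixel changes brightness sharply,
# another changes only slightly (below the threshold).
prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 200, 10], [10, 10, 12]]
mask = frame_difference(prev, curr)
# Only the strongly changed pixel is marked as motion.
```

Note how the slight change (10 to 12) stays below the threshold, illustrating the method's insensitivity to small gray-value differences mentioned above.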
Background subtraction is currently the most common method for foreground object segmentation, because it provides the most complete description of the foreground; no other method matches it in this respect. It detects moving regions by differencing the current image against a background image. To use background subtraction, a suitable background model must first be built according to the scene; the foreground is then extracted from the current frame based on the background model, most commonly by subtracting the background from the current image. Research on background subtraction concentrates on two problems: how to build a background model that can represent the scene, and, because background subtraction is very sensitive to scene changes, how to update the model so that detection remains accurate when the scene changes.
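A minimal background-subtraction sketch follows (an illustration of the general technique, not the patent's exact algorithm; the running-average model, blending factor, and threshold are conventional choices invented for the example):

```python
# Background subtraction sketch: maintain a running-average background model
# and subtract it from the current frame to extract a foreground mask.

def update_background(background, frame, alpha=0.05):
    """Blend the current frame into the background with learning rate alpha."""
    return [
        [(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
        for brow, frow in zip(background, frame)
    ]

def foreground_mask(background, frame, threshold=30):
    """Pixels far from the background model are marked as foreground."""
    return [
        [1 if abs(f - b) > threshold else 0 for b, f in zip(brow, frow)]
        for brow, frow in zip(background, frame)
    ]

bg = [[10.0, 10.0], [10.0, 10.0]]
frame = [[10, 10], [10, 240]]
mask = foreground_mask(bg, frame)   # only the bright pixel is foreground
bg = update_background(bg, frame)   # background slowly absorbs the change
```

The `alpha` parameter here plays the role of the update (learning) rate discussed throughout this document: it controls how quickly the background model adapts to scene changes.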
Existing foreground detection, however, applies the same update rate to different scenes, which both wastes system computing resources and lacks good adaptability and real-time performance.
Summary of the utility model
The utility model proposes a foreground detection method, device and system that save system computing resources while providing good adaptability and real-time performance.
The technical solution of the utility model is achieved as follows:
The utility model discloses a foreground detection device, comprising:
a video image acquisition unit for continuously acquiring video images of a monitored region, obtaining multiple frames of video images;
an edge contour map acquisition unit, connected to the video image acquisition unit, for detecting each frame of video image to obtain an edge contour map, accumulating the pixels of the edge contour maps, and calculating the dwell time of each pixel across the multiple frames of edge contour maps;
a critical value judgment unit, connected to the edge contour map acquisition unit, for judging whether the dwell time is less than a prescribed critical value;
a background modeling unit, connected to the critical value judgment unit, for identifying background pixels and modeling them;
a foreground modeling unit, connected to the critical value judgment unit, for identifying foreground pixels and modeling the foreground, forming an initial foreground image;
a foreground image division unit, connected to the foreground modeling unit, for dividing the initial foreground image into multiple activity regions, each activity region corresponding to a different learning rate.
In the foreground detection device of the utility model, the modeling is performed with a Gaussian mixture model.
In the foreground detection device of the utility model, the scene includes: a station, a square, a wharf.
In the foreground detection device of the utility model, an erroneous frame processing unit is further included between the video image acquisition unit and the edge contour map acquisition unit, for discarding erroneous frames; erroneous frames include: blank screens, snow (noise), and displaced frames.
The utility model further discloses a foreground detection system, comprising at least one video camera, a server connected to the camera, a database connected to the server, and a control module and a display connected to the server, the control module comprising the foreground detection device described above.
In the foreground detection system of the utility model, the camera is connected to the server via an Ethernet, 3G, or GPRS network.
Implementing the foreground detection device and system of the utility model yields the following beneficial technical effect:
Different learning rates are used for different moving foregrounds, saving computing resources while providing good adaptability and real-time performance.
Description of the drawings
To explain the embodiments of the utility model or the technical solutions in the prior art more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the utility model; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the foreground detection method of the utility model;
Fig. 2 is a block diagram of the foreground detection device of the utility model;
Fig. 3 is an architecture diagram of the foreground detection system of the utility model.
Embodiments
The technical solutions in the embodiments of the utility model are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the utility model. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the utility model without creative effort fall within the scope of protection of the utility model.
Referring to Fig. 1, in a preferred embodiment of the utility model, a foreground detection method comprises:
S1. continuously acquiring video images of a monitored region, obtaining multiple frames of video images;
S2. detecting each frame of video image to obtain an edge contour map, accumulating the pixels of the edge contour maps, and calculating the dwell time of each pixel across the multiple frames of edge contour maps;
Usually, background pixels dwell for a longer time and foreground pixels for a shorter time. In other words, at a given pixel position, the longer a color appears, the more likely its model represents the background.
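Under the assumption stated above, that longer-dwelling pixels belong to the background, the residence-time classification can be sketched as follows (an illustrative example; the dwell-counting scheme and critical value are invented for the demonstration, not taken from the patent):

```python
# Residence-time sketch: count, per pixel, how many frames it appears on the
# edge-contour map; pixels whose dwell time reaches the critical value are
# treated as background ('B'), the rest as foreground ('F').

def classify_by_dwell(edge_maps, critical_value):
    """edge_maps: list of binary frames (1 = pixel lies on an edge contour).
    Returns a per-pixel label map: 'B' for background, 'F' for foreground."""
    rows, cols = len(edge_maps[0]), len(edge_maps[0][0])
    dwell = [[0] * cols for _ in range(rows)]
    for frame in edge_maps:
        for r in range(rows):
            for c in range(cols):
                dwell[r][c] += frame[r][c]  # accumulate frames the pixel persists
    return [
        ['F' if dwell[r][c] < critical_value else 'B' for c in range(cols)]
        for r in range(rows)
    ]

# One pixel persists across all 4 frames (a static background edge);
# the other appears in only 1 frame (a moving foreground edge).
frames = [[[1, 1]], [[1, 0]], [[1, 0]], [[1, 0]]]
labels = classify_by_dwell(frames, critical_value=3)  # [['B', 'F']]
```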
S3. judging whether the dwell time is less than the prescribed critical value; if so, proceeding to step S4; if not, proceeding to step S5;
S4. identifying the pixel as a background pixel and modeling the background pixel;
S5. identifying the pixel as a foreground pixel and modeling the foreground, forming an initial foreground image;
The modeling is performed with a Gaussian mixture model.
A Gaussian mixture model (GMM) characterizes the value of each pixel in the image with several Gaussian components, so each pixel can be modeled more accurately. In typical motion detection, the GMM is updated with each newly acquired frame: each pixel of the current frame is matched against the mixture, and if the match succeeds the pixel is a background point; otherwise it is a foreground point. In the present technical solution, the foreground model is not updated in real time once established; instead, it is rebuilt after a period of time set according to actual conditions. Usually, for locations with very heavy pedestrian flow, we rebuild the model once per minute.
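The per-pixel mixture-of-Gaussians matching described above can be sketched roughly as follows (a minimal illustration, not the patent's implementation; the 2.5-sigma match test, learning rate, and initial parameters are conventional choices, not taken from the patent):

```python
import math

# Per-pixel Gaussian mixture sketch: each pixel keeps a few Gaussians; an
# observation that matches one of them is background, otherwise foreground.

class PixelGMM:
    def __init__(self, n_gaussians=3, learning_rate=0.05):
        # Each Gaussian is stored as [weight, mean, variance].
        self.gaussians = [[1.0 / n_gaussians, 128.0, 900.0]
                          for _ in range(n_gaussians)]
        self.alpha = learning_rate

    def observe(self, value):
        """Return True if the pixel matches the model (background),
        False otherwise (foreground); update the mixture either way."""
        for g in self.gaussians:
            weight, mean, var = g
            if abs(value - mean) < 2.5 * math.sqrt(var):  # match test
                g[1] = (1 - self.alpha) * mean + self.alpha * value
                g[2] = (1 - self.alpha) * var + self.alpha * (value - mean) ** 2
                g[0] = weight + self.alpha * (1 - weight)
                return True
        # No match: replace the lowest-weight Gaussian with the new value.
        weakest = min(self.gaussians, key=lambda g: g[0])
        weakest[:] = [self.alpha, float(value), 900.0]
        return False

pixel = PixelGMM()
for _ in range(20):
    pixel.observe(100)          # a stable background gray value
is_bg = pixel.observe(100)      # matches the model -> background (True)
is_fg = pixel.observe(250)      # far from all Gaussians -> foreground (False)
```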
S6. dividing the initial foreground image into multiple activity regions, each corresponding to a different learning rate.
Here, the learning rate determines how the foreground model is carried over from old video images. If nothing changes, the original foreground image is simply "learned" and kept: for example, if no people or objects pass through a square for a long time, say 10 minutes, the old image is retained. If something changes, the model is rebuilt and updated on top of the old foreground.
For example, in a square the entrances, exits and ticket office need a large learning rate, while the scenery of the square, such as a fountain, needs a slightly lower learning rate.
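The per-region learning rates can be illustrated as follows (the region names, masks, and rate values are invented for the example; the patent does not prescribe specific numbers):

```python
# Per-region learning rates: the image is split into activity regions, and
# each region's pixels are updated with that region's own learning rate.

REGION_RATES = {
    "entrance": 0.10,       # busy areas: adapt quickly
    "ticket_office": 0.10,
    "fountain": 0.01,       # near-static scenery: adapt slowly
}

def update_region(model, frame, region_mask, rate):
    """Blend the frame into the per-pixel model only inside the region mask."""
    return [
        [(1 - rate) * m + rate * f if inside else m
         for m, f, inside in zip(mrow, frow, maskrow)]
        for mrow, frow, maskrow in zip(model, frame, region_mask)
    ]

model = [[100.0, 100.0]]
frame = [[200, 200]]
entrance_mask = [[True, False]]   # only the left pixel is in the entrance region
model = update_region(model, frame, entrance_mask, REGION_RATES["entrance"])
# The left pixel moves toward the new frame; the right pixel is untouched.
```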
The monitored region includes: a station, a square, a wharf.
Further, a step S11 of discarding erroneous frames is included between steps S1 and S2; erroneous frames include: blank screens, snow (noise), and displaced frames.
We adopt a Gaussian mixture model for background modeling, which has better adaptivity and real-time performance than other algorithms (such as optical flow or frame differencing). At the same time, we introduce the concept of a learning rate into the foreground update process. When the learning rate is small, the system adapts to changes more slowly and needs a longer time to establish the foreground model; when the learning rate is large, the system adapts more readily and updates the foreground model quickly, but a target that rests in the scene for a period of time may be learned into the background. To handle this, we adjust the learning rate, setting different learning rates in different parts of the image to adapt to the needs of scene changes.
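The trade-off described above can be made concrete with a small numeric sketch (the values are invented for the illustration): with a large learning rate a stationary target is absorbed into the background within a couple of dozen frames, while a small rate takes an order of magnitude longer.

```python
# How many update steps does it take for a stationary target to be
# "learned into" the background, as a function of the learning rate?

def frames_to_absorb(rate, start=0.0, target=255.0, threshold=30.0):
    """Steps until |model - target| < threshold under exponential blending."""
    model, steps = start, 0
    while abs(model - target) >= threshold:
        model = (1 - rate) * model + rate * target
        steps += 1
    return steps

fast = frames_to_absorb(rate=0.10)   # large rate: absorbed quickly
slow = frames_to_absorb(rate=0.01)   # small rate: absorbed much more slowly
# fast is roughly an order of magnitude smaller than slow.
```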
Referring to Fig. 2, a foreground detection device for implementing the above method comprises:
a video image acquisition unit 10, an edge contour map acquisition unit 20, a critical value judgment unit 30, a background modeling unit 40, a foreground modeling unit 50, and a foreground image division unit 60.
The video image acquisition unit 10 continuously acquires video images of the monitored region, obtaining multiple frames of video images.
The edge contour map acquisition unit 20, connected to the video image acquisition unit 10, detects each frame of video image to obtain an edge contour map, accumulates the pixels of the edge contour maps, and calculates the dwell time of each pixel across the multiple frames of edge contour maps.
The critical value judgment unit 30, connected to the edge contour map acquisition unit 20, judges whether the dwell time is less than the prescribed critical value.
The background modeling unit 40, connected to the critical value judgment unit 30, identifies background pixels and models them.
The foreground modeling unit 50, connected to the critical value judgment unit 30, identifies foreground pixels and models the foreground, forming an initial foreground image.
The foreground image division unit 60, connected to the foreground modeling unit 50, divides the initial foreground image into multiple activity regions, each corresponding to a different learning rate.
The modeling is performed with a Gaussian mixture model, and the scene includes: a station, a square, a wharf.
Preferably, an erroneous frame processing unit 15 is further included between the video image acquisition unit 10 and the edge contour map acquisition unit 20, for discarding erroneous frames; erroneous frames include: blank screens, snow (noise), and displaced frames.
Referring to Fig. 3, a foreground detection system comprises at least one video camera 100, a server 200 connected to the camera 100, a database 300 connected to the server 200, and a control module 350 and a display 500 connected to the server 200, the control module 350 comprising the foreground detection device described above.
The camera 100 is connected to the server 200 via an Ethernet, 3G, or GPRS network.
The system works as follows: at least one camera 100 captures a series of images and uploads them to the server 200 via the Ethernet, 3G, or GPRS network; the control module 350 connected to the server 200 performs the modeling, stores the data in the database 300, and displays the results on the display 500.
Implementing the foreground detection device and system of the utility model yields the following beneficial technical effect:
Different learning rates are used for different moving foregrounds, saving computing resources while providing good adaptability and real-time performance.
The above are only preferred embodiments of the utility model and are not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the utility model shall fall within its scope of protection.

Claims (2)

1. A foreground detection system, comprising at least one video camera, a server connected to the camera, a database connected to the server, and a control module and a display connected to the server, characterized in that the control module comprises a foreground detection device.
2. The foreground detection system according to claim 1, characterized in that the camera is connected to the server via an Ethernet, 3G, or GPRS network.
CN 201220199715 2012-05-07 2012-05-07 Foreground detection device and system Expired - Fee Related CN202870907U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201220199715 CN202870907U (en) 2012-05-07 2012-05-07 Foreground detection device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201220199715 CN202870907U (en) 2012-05-07 2012-05-07 Foreground detection device and system

Publications (1)

Publication Number Publication Date
CN202870907U true CN202870907U (en) 2013-04-10

Family

ID=48037607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201220199715 Expired - Fee Related CN202870907U (en) 2012-05-07 2012-05-07 Foreground detection device and system

Country Status (1)

Country Link
CN (1) CN202870907U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050665A (en) * 2014-06-10 2014-09-17 华为技术有限公司 Method and device for estimating foreground dwell time in video image


Similar Documents

Publication Publication Date Title
Xia et al. Towards improving quality of video-based vehicle counting method for traffic flow estimation
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
CN102663743B (en) Personage's method for tracing that in a kind of complex scene, many Kameras are collaborative
CN101303727B (en) Intelligent management method based on video human number Stat. and system thereof
CN101141633B (en) Moving object detecting and tracing method in complex scene
CN101751677B (en) Target continuous tracking method based on multi-camera
CN103310444B (en) A kind of method of the monitoring people counting based on overhead camera head
CN104200466B (en) A kind of method for early warning and video camera
CN106204586B (en) A kind of moving target detecting method under complex scene based on tracking
CN103971521A (en) Method and device for detecting road traffic abnormal events in real time
CN101325690A (en) Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow
CN103714325A (en) Left object and lost object real-time detection method based on embedded system
CN103049787A (en) People counting method and system based on head and shoulder features
CN110415268A (en) A kind of moving region foreground image algorithm combined based on background differential technique and frame difference method
CN104063885A (en) Improved movement target detecting and tracking method
CN103729858A (en) Method for detecting article left over in video monitoring system
CN101770648A (en) Video monitoring based loitering system and method thereof
CN101727570A (en) Tracking method, track detection processing unit and monitor system
CN102930719A (en) Video image foreground detection method for traffic intersection scene and based on network physical system
CN101299274B (en) Detecting method and system for moving fixed target
CN111738336A (en) Image detection method based on multi-scale feature fusion
CN104301697A (en) Automatic public place violence incident detection system and method thereof
CN103226701A (en) Modeling method of video semantic event
CN103679690A (en) Object detection method based on segmentation background learning
CN103049748B (en) Behavior monitoring method and device

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
C53 Correction of patent for invention or patent application
CB03 Change of inventor or designer information

Inventor after: Zheng Changchun

Inventor before: Zheng Changchun

Inventor before: Xu Mingjian

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: ZHENG CHANGCHUN XU MINGJIAN TO: ZHENG CHANGCHUN

DD01 Delivery of document by public notice

Addressee: Shenzhen Bellsent Intelligent System Co.,Ltd.

Document name: Notification to Pay the Fees

DD01 Delivery of document by public notice

Addressee: SHENZHEN BELLSENT INTELLIGENT SYSTEM Co.,Ltd.

Document name: Notification of Termination of Patent Right

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130410

Termination date: 20190507