CN103077521B - A region-of-interest extraction method for video surveillance - Google Patents


Publication number
CN103077521B
CN103077521B (application CN201310006815.6A)
Authority
CN
China
Prior art keywords: pixel, area, interest, depth, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310006815.6A
Other languages
Chinese (zh)
Other versions
CN103077521A (en)
Inventor
金志刚
刘晓辉
徐楚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University
Priority to CN201310006815.6A
Publication of CN103077521A
Application granted
Publication of CN103077521B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method for extracting regions of interest from video images, applicable to the field of video surveillance. The method marks regions of interest automatically and comprises the following steps: 1) acquire video with a binocular camera calibrated in advance; 2) rectify the images of the left and right viewpoints and eliminate distortion; 3) select one of the left and right viewpoints, compute its depth map from the binocular disparity, and build a Gaussian-mixture background model from its consecutive images; 4) obtain the depth-salient region using global depth contrast and, at the same time, obtain the motion region using background subtraction, yielding two binary images; 5) XOR the two binary images obtained in step 4, then screen the connected regions to remove noise and dilate the remaining connected regions; 6) finally mark the contours and bounding rectangles to obtain the final regions of interest. The invention effectively overcomes the influence of color and brightness changes on region-of-interest detection and has high accuracy and robustness.

Description

A region-of-interest extraction method for video surveillance
Technical field
The invention belongs to the technical field of image detection and relates to a region-of-interest extraction method that can be used for video surveillance.
Background art
In video and images, users are interested only in certain sub-regions; these regions carry more information and attract attention more strongly. Using a computer to obtain regions of interest automatically and accurately has wide application in fields such as image detection, target recognition, and tracking. Many region-of-interest extraction methods already exist, for example the frame-difference and background-subtraction methods for moving-target detection [1], region extraction based on visual saliency analysis [2], and detection algorithms for specific objects (such as HOG pedestrian detection [3]); these methods can extract regions of interest with a certain discriminative power. However, frame difference and background subtraction can only detect moving foreground objects and lack the ability to recognize static targets; saliency-based region detection is easily affected by color and brightness, and its results are unstable; detection algorithms for specific objects rely on trained classifiers and detect relatively accurately, but they usually work only for a specific target class, have a narrow range of application, and are difficult to run in real time.
List of references
【1】Zivkovic, Z. and F. van der Heijden (2006). Efficient adaptive density estimation per image pixel for the task of background subtraction [J]. Pattern Recognition Letters 27(7): 773-780.
【2】Itti, L., Koch, C. and Niebur, E. A model of saliency-based visual attention for rapid scene analysis [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254-1259.
【3】Dalal, N. and Triggs, B. Histograms of oriented gradients for human detection [C]. In: Computer Vision and Pattern Recognition, San Diego, CA, June 20-25, 2005.
Summary of the invention
The object of the invention is to overcome the above deficiencies of the prior art and provide a new region-of-interest extraction method that can extract both dynamic and static regions of interest in video images simultaneously and can effectively overcome the influence of color and brightness.
The technical scheme of the present invention is as follows:
A region-of-interest extraction method for video surveillance comprises the following steps:
1) use a binocular camera calibrated in advance, connected to a computer, to acquire binocular video images of the left and right viewpoints;
2) according to the intrinsic and extrinsic parameters of the cameras, rectify the binocular video images of the left and right viewpoints and eliminate distortion, so that the two images are strictly aligned;
3) select one of the left and right viewpoints as the reference viewpoint, use the BM (block matching) stereo algorithm to compute the binocular disparity and obtain the depth map, and use the consecutive image sequence of this viewpoint to build the background model of this viewpoint with a Gaussian mixture model (GMM);
4) from the obtained depth map, compute a depth saliency map using the globally weighted sum of squared depth differences, DS_i = Σ_{j=1}^{N} ||d_i − d_j||^2 · ω(p_i, p_j), where DS_i is the depth saliency of pixel i, d_i and d_j are the depths of pixels i and j, p_i = (x_i, y_i) and p_j = (x_j, y_j) are the positions of pixels i and j, ||d_i − d_j||^2 is the squared depth difference, and ω(p_i, p_j) is a weight related to the Euclidean distance between pixels i and j in the image; select the pixels in the saliency map whose saliency exceeds a threshold T_1 (DS_i > T_1) to obtain a binary image I_1;
5) from the obtained background model, use background subtraction to obtain a binary image I_2 representing the targets, i.e. the foreground region of interest;
6) XOR I_1 and I_2 to obtain a new image I_3;
7) screen all connected regions in I_3, remove isolated regions smaller than a certain pixel count, and dilate the remaining regions to obtain the region set R;
8) R is the region-of-interest set (ROI); mark the contours and bounding rectangles, and show the result on the display.
Preferably, in step 5), a pixel is considered to belong to the foreground region of interest when the following formula is satisfied:
|i(t) − bg(t)| ≥ T_2
where i(t) is the pixel value in frame t, bg(t) is the background model at frame t, and T_2 is a threshold.
The present invention extracts depth-salient regions and moving-target regions, fuses the two, screens the result, and marks the final regions. It can extract not only moving regions but also static regions, while avoiding the influence of color and brightness on region-of-interest detection that affects classic methods. The invention can mark various foreground target regions accurately and also better matches the subjective way human eyes select regions of interest.
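The fusion step described above can be sketched in a few lines of numpy. The array size and region positions below are invented for illustration; the XOR keeps pixels flagged by exactly one of the two cues, as the text specifies.

```python
import numpy as np

# Hypothetical binary maps; sizes and regions are invented for illustration.
I1 = np.zeros((8, 8), np.uint8)
I1[2:6, 2:6] = 1                     # depth-salient region
I2 = np.zeros((8, 8), np.uint8)
I2[4:8, 4:8] = 1                     # motion region

# XOR keeps pixels flagged by exactly one cue.
I3 = np.bitwise_xor(I1, I2)
```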
Brief description of the drawings
Fig. 1 is the flow chart of the region-of-interest extraction method for video surveillance of the present invention.
Fig. 2 shows an example of the binocular camera used in the present invention.
Embodiment
To make the object, implementation, and advantages of the present invention clearer, the invention is described in further detail below. The method flow of the present invention is shown in Fig. 1.
Video acquisition can use a binocular camera available on the market or a self-designed binocular camera; the camera baseline can be set to 100 mm to 200 mm, as in Fig. 2. Binocular calibration can use a planar chessboard calibration template; from the correspondences between the template corner points and the image points, the intrinsic parameters (focal length, principal point, distortion coefficients) and extrinsic parameters (rotation matrix and translation vector) of the binocular camera are obtained.
Binocular rectification uses the monocular intrinsic data (focal length, principal point, distortion coefficients) and the relative pose of the two cameras (rotation matrix and translation vector) obtained from calibration to eliminate distortion and row-align the left and right views, so that the imaging origins of the two views coincide, the two camera optical axes are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row-aligned. Rectification uses Bouguet's method; more specifically, the cvStereoRectify, cvInitUndistortRectifyMap, and cvRemap functions in OpenCV are used.
In the experiments of the present invention, the left viewpoint is used as the reference viewpoint (the right viewpoint may also be used), and the BM (block matching) stereo algorithm is used to compute the binocular disparity and obtain the depth map. The sum of absolute differences (SAD) is used as the matching cost, with a SAD window size of 5; the BM matcher computes the disparity between the left and right viewpoints, and the minimum disparity is set to 25 pixels. More specifically, the cvFindStereoCorrespondenceBM function in OpenCV is used.
The Gaussian-mixture background is computed from the consecutive images of the left viewpoint. The Gaussian mixture background model uses 5 Gaussian distributions, an initial variance of 30, a threshold of 0.7 on the sum of the Gaussian weights, and a learning rate of 0.05; the parameters of the Gaussian model are updated with the EM algorithm.
After the depth map and background image based on the left viewpoint are obtained, the depth saliency map based on global depth contrast is computed, and the motion foreground is computed.
The depth saliency map is computed by Formula 1:

DS_i = Σ_{j=1}^{N} ||d_i − d_j||^2 · ω(p_i, p_j)    (Formula 1)

where DS_i is the depth saliency of pixel i, d_i and d_j are the depths of pixels i and j, p_i = (x_i, y_i) and p_j = (x_j, y_j) are the positions of pixels i and j, and ω(p_i, p_j) is a weight related to the Euclidean distance between the two pixels. The Euclidean distance between pixels i and j is defined by Formula 2:

D(i, j) = √((x_i − x_j)^2 + (y_i − y_j)^2)    (Formula 2)
After the depth saliency map is obtained, thresholding is applied: all pixels whose depth saliency is greater than the threshold T_1 are selected, yielding the binary image I_1 that represents the depth-saliency ROI.
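Formula 1 can be sketched directly in numpy. The exponential form of the distance weight ω and the choice of the mean saliency as T_1 are illustrative assumptions; the text only says that ω depends on the pixels' Euclidean distance:

```python
import numpy as np

def depth_saliency(depth, sigma=0.1):
    """Formula 1: DS_i = sum_j (d_i - d_j)^2 * w(p_i, p_j), with an
    assumed distance-decaying weight w = exp(-sigma * D(i, j))."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d = depth.ravel().astype(np.float64)
    p = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    ds = np.empty(d.size)
    for i in range(d.size):                # O(N^2): fine for small maps
        dist = np.linalg.norm(p - p[i], axis=1)   # Formula 2
        ds[i] = np.sum((d[i] - d) ** 2 * np.exp(-sigma * dist))
    return ds.reshape(h, w)

depth = np.zeros((16, 16))
depth[6:10, 6:10] = 1.0                    # a near object on a far background
sal = depth_saliency(depth)
T1 = sal.mean()                            # illustrative threshold T_1
I1 = (sal > T1).astype(np.uint8)           # binary image I_1
```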
The motion foreground pixels are obtained by Formula 3:

|i(t) − bg(t)| ≥ T_2    (Formula 3)

where i(t) is the pixel value in frame t, bg(t) is the background model at frame t, and T_2 is a threshold. All pixels satisfying Formula 3 form the binary image I_2 that represents the moving-target region.
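Formula 3 is a direct array operation; the background, frame, and T_2 below are illustrative placeholders:

```python
import numpy as np

# Formula 3 as a thresholded absolute difference.
T2 = 30
bg = np.full((120, 160), 100, np.uint8)    # background model bg(t)
frame = bg.copy()                          # current frame i(t)
frame[40:80, 60:100] = 200                 # a moving object
diff = np.abs(frame.astype(np.int16) - bg.astype(np.int16))
I2 = (diff >= T2).astype(np.uint8)         # binary motion mask I_2
```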
After obtaining the depth-salient region image I_1 and the motion foreground image I_2, the present invention XORs the two to obtain the fused binary image I_3.
A 5×5 mean filter is used to remove noise pixels, and dilation is used to obtain the final connected-region set R. More specifically, the blur and dilate functions in OpenCV are used; the threshold function in OpenCV is then applied to binarize the filtered result, the findContours function is used to find the contours, and the boundingRect function is used to obtain the bounding rectangles of all connected regions.

Claims (2)

1. A region-of-interest extraction method for video surveillance, comprising the following steps:
1) use a binocular camera calibrated in advance, connected to a computer, to acquire binocular video images of the left and right viewpoints;
2) according to the intrinsic and extrinsic parameters of the cameras, rectify the binocular video images of the left and right viewpoints and eliminate distortion, so that the two images are strictly aligned;
3) select one of the left and right viewpoints as the reference viewpoint, use the BM (block matching) stereo algorithm to compute the binocular disparity and obtain the depth map, and use the consecutive image sequence of this viewpoint to build the background model of this viewpoint with a Gaussian mixture model (GMM);
4) from the obtained depth map, compute a depth saliency map using the globally weighted sum of squared depth differences, DS_i = Σ_{j=1}^{N} ||d_i − d_j||^2 · ω(p_i, p_j), where DS_i is the depth saliency of pixel i, d_i and d_j are the depths of pixels i and j, p_i = (x_i, y_i) and p_j = (x_j, y_j) are the positions of pixels i and j, ||d_i − d_j||^2 is the squared depth difference, and ω(p_i, p_j) is a weight related to the Euclidean distance between pixels i and j in the image; select the pixels in the saliency map whose saliency exceeds a threshold T_1 (DS_i > T_1) to obtain a binary image I_1;
5) from the obtained background model, use background subtraction to obtain a binary image I_2 representing the targets, i.e. the foreground region of interest;
6) XOR I_1 and I_2 to obtain a new image I_3;
7) screen all connected regions in I_3, remove isolated regions smaller than a certain pixel count, and dilate the remaining regions to obtain the region set R;
8) R is the region-of-interest set (ROI); mark the contours and bounding rectangles, and show the result on the display.
2. The region-of-interest extraction method for video surveillance according to claim 1, characterized in that, in step 5), a pixel is considered to belong to the foreground region of interest when the following formula is satisfied:
|i(t) − bg(t)| ≥ T_2
where i(t) is the pixel value in frame t, bg(t) is the background model at frame t, and T_2 is a threshold.
CN201310006815.6A 2013-01-08 2013-01-08 A region-of-interest extraction method for video surveillance Expired - Fee Related CN103077521B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310006815.6A CN103077521B (en) 2013-01-08 2013-01-08 A region-of-interest extraction method for video surveillance


Publications (2)

Publication Number Publication Date
CN103077521A CN103077521A (en) 2013-05-01
CN103077521B true CN103077521B (en) 2015-08-05

Family

ID=48154040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310006815.6A Expired - Fee Related CN103077521B (en) 2013-01-08 2013-01-08 A region-of-interest extraction method for video surveillance

Country Status (1)

Country Link
CN (1) CN103077521B (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750731A (en) * 2012-07-05 2012-10-24 北京大学 Stereoscopic vision significance calculating method based on left and right monocular receptive field and binocular fusion



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150805

Termination date: 20210108
