CN101714256B - Omnibearing vision based method for identifying and positioning dynamic target - Google Patents

Omnibearing vision based method for identifying and positioning dynamic target

Info

Publication number
CN101714256B
CN101714256B · CN2009102285809A · CN200910228580A
Authority
CN
China
Prior art keywords
image
delta
particle
target
sigma
Prior art date: 2009-11-13
Legal status
Expired - Fee Related
Application number
CN2009102285809A
Other languages
Chinese (zh)
Other versions
CN101714256A (en)
Inventor
丁承君
段萍
王南
张明路
Current Assignee
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date: 2009-11-13
Filing date: 2009-11-13
Publication date: 2011-12-14
2009-11-13: Application filed by Hebei University of Technology
2009-11-13: Priority to CN2009102285809A
2010-05-26: Publication of CN101714256A
2011-12-14: Application granted
2011-12-14: Publication of CN101714256B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an omnibearing-vision-based method for identifying and positioning a dynamic target, belonging to the technical field of dynamic image analysis. The method comprises the following steps: 1, acquiring an omnibearing vision sequence image and preprocessing it to obtain a binary image separating the moving target from the background; 2, searching a local area by the optical flow method, matching feature points between adjacent frames, and detecting the moving target of the image sequence; 3, estimating the motion state of the moving target by a particle filter algorithm and predicting the parameters of the moving target in subsequent frames to complete the tracking process. Identifying and positioning a dynamic target by this method markedly reduces the amount of calculation and improves accuracy.

Description

Dynamic target identification and localization method based on omnidirectional vision
Technical field
The invention belongs to the technical field of dynamic image analysis and relates to a target recognition and localization method based on omnidirectional vision.
Background art
The basic task of dynamic image analysis is to detect motion information and to recognize and track moving targets from image sequences. It involves research fields such as image processing, image analysis, artificial intelligence, pattern recognition, and computer vision; it is a very active branch of image processing and computer vision and has found wide application in industrial production, medical treatment and health care, national defense, and other fields, so research on it is of great significance.
To recognize a moving target and track it, the optical flow field method is usually adopted: the optical flow field is extracted from image sequences containing the moving target acquired in real time, the motion target regions with larger optical flow are filtered out, and the velocity of the moving target is computed, thereby achieving tracking of the moving target.
Previous optical-flow-based target detection methods fall mainly into two classes: (1) differential optical flow techniques, which use the basic optical flow equation plus certain added constraints to obtain a dense optical flow field and then extract the moving target; their deficiency is a large amount of computation and weak real-time performance; (2) feature optical flow techniques, which find and match feature points in the image to obtain a sparse optical flow field; the real-time performance of target extraction is improved, but the insufficient amount of information easily causes targets to be missed. As for target tracking, the previous practice usually separates it from detection: after detection, tracking is carried out again based on target features, which increases the complexity of the algorithm and brings complicated processing when targets enter and leave the scene.
Summary of the invention
The object of the invention is to address the above deficiencies of the prior art: the invention proposes an effective method for recognizing and tracking a moving target under omnidirectional vision. The method improves the real-time performance and robustness of recognition and tracking, and gives a mobile robot the combined capability of landmark-based autonomous navigation and moving-target tracking.
The technical solution adopted by the invention is as follows:
A dynamic target identification and localization method based on omnidirectional vision comprises the following steps:
Step 1: Acquire an omnidirectional vision sequence image and preprocess it to obtain a binary image in which the moving target is distinguished from the background.
Step 2: Perform a local-area search with the optical flow method, match feature points between adjacent frames of the image, and detect the moving target of the image sequence.
Step 3: Estimate the motion state of the target with a particle filter algorithm and predict the parameters of the moving target in subsequent frames to complete the tracking process.
In a preferred embodiment of the above dynamic target identification and localization method based on omnidirectional vision, step 2 therein is carried out according to the following method. Let the moving-image function $f(x,y)$ be continuous in the variables $x$ and $y$. At time $t$, a point $a=(x,y)$ on the image has gray value $f_t(x,y)$; at time $t+\Delta t$ the point moves to a new position, its position on the image becoming $(x+\Delta x,y+\Delta y)$, with gray value denoted $f_{t+\Delta t}(x+\Delta x,y+\Delta y)$. The purpose of matching is to find the point corresponding to $a$ such that $f_t(x,y)=f_{t+\Delta t}(x+\Delta x,y+\Delta y)$ and such that, within a chosen $M\times N$ neighborhood of $a=(x,y)$, the mean square error $MSE(\Delta x,\Delta y)$ is minimized; the $(\Delta x,\Delta y)$ that minimizes $MSE(\Delta x,\Delta y)$ is the optimum matching point $opt=(\Delta x,\Delta y)$.
Let $f=f_t(x,y)-f_{t+\Delta t}(x,y)$ and let $\nabla f=\left[\frac{\partial f_{t+\Delta t}}{\partial\Delta x},\frac{\partial f_{t+\Delta t}}{\partial\Delta y}\right]^T$ be the gradient at pixel $(\Delta x,\Delta y)$; then
$$\frac{MN}{2}\left[\frac{\partial MSE(\Delta x,\Delta y)}{\partial(\Delta x,\Delta y)}\right]^T=\sum_{m=1}^{M}\sum_{n=1}^{N}\nabla f^T\cdot(\Delta x,\Delta y)-\sum_{m=1}^{M}\sum_{n=1}^{N}f$$
Let $U=\sum_{m=1}^{M}\sum_{n=1}^{N}\nabla f^T$ and $V=\sum_{m=1}^{M}\sum_{n=1}^{N}f$; the optimum matching point is obtained as $opt=(\Delta x,\Delta y)=U^{-1}V$. Feature points are found and matched in the image in this way, and the moving target of the image sequence is detected.
Step 3 therein is carried out according to the following method:
(1) According to the result of step 2, locate the initial target and obtain its initial motion parameters $P_{init}=(P_{init,x},P_{init,y})$; let each particle represent one possible motion state, take the particle number as $N$ and the initial weight of each particle as $w_i=1$; there are then $N$ possible motion-state parameters $P_i=(P_{i,x},P_{i,y})$, $i\in\{1,\dots,N\}$.
(2) Carry out the particle resampling process: eliminate particles with smaller weights and keep particles with larger weights.
(3) Enter the iterative process of the particle filter algorithm: from the second frame onward, perform a system state transition and a system observation for each particle, compute the particle weights, and take the weighted sum of all particles as the estimated target state, completing the tracking process.
The state transition follows the formulas: for particle $N_i$, $P^i_{x,t}=A_1P^i_{x,t-1}+B_1w^i_{t-1}$ and $P^i_{y,t}=A_2P^i_{y,t-1}+B_2w^i_{t-1}$, where $A_1,A_2,B_1,B_2$ are constants, $A$ is taken as 1, $B$ is the particle propagation radius, and $w$ is a random number in $[-1,1]$.
The system observation is carried out according to the following method:
(1) After each particle's state transition, compute a minimum mean absolute difference $MAD_i$ from the particle's new coordinates.
(2) Take the probability density function as $p(z_k|x_k^i)=\exp\left\{-\frac{1}{2\sigma^2}MAD_i\right\}$, where $\sigma$ is a constant; the weight of each particle is then $w_k^i=w_{k-1}^i\,p(z_k|x_k^i)$.
(3) Normalize the weights of the particles: $w_k^i=w_k^i/\sum_{i=1}^{N}w_k^i$.
(4) For the further optimal estimate, assuming the posterior probability at time $t$ is known, the tracking parameter $P$ is expressed as $p_{x,t}^{opt}=\sum_{i=1}^{N}w_ip_x^i$, $p_{y,t}^{opt}=\sum_{i=1}^{N}w_ip_y^i$; then set $t=t+1$ and return to the resampling step.
The substantive distinguishing feature of the invention is this: first the omnidirectional vision image is preprocessed; then feature points are found and matched in the image with the optical flow method to obtain a sparse optical flow field; finally the particle filter predicts the parameters of the moving target in subsequent frames, a matching matrix between adjacent frames is established, and the matching matrix is analyzed to judge the moving-target state, so that the moving target is tracked effectively. Compared with existing methods, the method proposed by the invention significantly reduces the amount of computation and improves accuracy.
Description of drawings
Fig. 1 is the general flow chart of the combined optical-flow and particle recognition and tracking method of the invention for the omnidirectional vision environment.
Embodiment
Referring to Fig. 1, the dynamic target identification and localization method based on omnidirectional vision of the invention comprises the following steps:
Step 1: Acquire the omnidirectional vision sequence image and preprocess it so that target and background are separated, in preparation for the subsequent optical flow computation. The image is first smoothed with a Gaussian low-pass filter and then gradient-sharpened to find the motion edges of the image target; threshold segmentation is then performed to separate the target object from the background. A threshold is first determined directly from the histogram, and for the sequence images the threshold is adjusted dynamically; each pixel's gray value is then compared with this threshold: if it is greater than the threshold, the pixel's gray value is set to 255 (representing background), otherwise it is set to 0 (object), so that the moving target and the background are distinguished. After threshold segmentation the image becomes a binary image with only the two gray values 0 and 255.
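As a concrete illustration of this preprocessing chain, the sketch below assumes OpenCV and NumPy; the function name, the kernel size, and the use of Otsu's method as the histogram-derived, dynamically adjusted threshold are illustrative choices rather than the patent's specification.

```python
# A minimal sketch of the step-1 preprocessing: Gaussian smoothing,
# gradient sharpening, and threshold segmentation into a 0/255 binary image.
import cv2
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Return a binary image: 255 = background, 0 = object (moving target)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.0)      # Gaussian low-pass filter
    gx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1)
    sharpened = cv2.convertScaleAbs(                           # gradient sharpening:
        smoothed.astype(np.float32) + cv2.magnitude(gx, gy))   # emphasize motion edges
    # Otsu stands in for the histogram-chosen, dynamically adjusted threshold:
    # gray values above the threshold become 255 (background), the rest 0 (object).
    _, binary = cv2.threshold(sharpened, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```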
Step 2: Perform a local-area search with the optical flow method and match feature points between adjacent frames of the image.
For sequence images the inter-frame interval is very small, spatial points move little between two adjacent frames, and the spatial correlation of the object between the previous and the next frame is high.
Let the moving-image function $f(x,y)$ be continuous in the variables $x$ and $y$. At time $t$, a point $a=(x,y)$ on the image has gray value $f_t(x,y)$; at time $t+\Delta t$ the point moves to a new position, its position on the image becoming $(x+\Delta x,y+\Delta y)$, with gray value denoted $f_{t+\Delta t}(x+\Delta x,y+\Delta y)$. The purpose of matching is to find the point corresponding to $a$ and make it equal to $f_t(x,y)$, i.e.
$$f_t(x,y)=f_{t+\Delta t}(x+\Delta x,y+\Delta y)\qquad(1)$$
and such that, within the chosen $M\times N$ neighborhood of point $a=(x,y)$, the mean square error $MSE(\Delta x,\Delta y)$ is minimized:
$$MSE(\Delta x,\Delta y)=\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\left[f_t(x,y)-f_{t+\Delta t}(x+\Delta x,y+\Delta y)\right]^2\qquad(2)$$
The $(\Delta x,\Delta y)$ that minimizes $MSE(\Delta x,\Delta y)$ is the optimum matching point $opt=(\Delta x,\Delta y)$.
Setting the first derivative of $MSE(\Delta x,\Delta y)$ with respect to $(\Delta x,\Delta y)$ to zero,
$$\left.\frac{\partial MSE(\Delta x,\Delta y)}{\partial(\Delta x,\Delta y)}\right|_{(\Delta x,\Delta y)=opt}=(0,0)\qquad(3)$$
from (2) we obtain
$$\frac{\partial MSE(\Delta x,\Delta y)}{\partial(\Delta x,\Delta y)}=-\frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\left[f_t(x,y)-f_{t+\Delta t}(x+\Delta x,y+\Delta y)\right]\cdot\left(\frac{\partial f_{t+\Delta t}}{\partial\Delta x},\frac{\partial f_{t+\Delta t}}{\partial\Delta y}\right)\qquad(4)$$
Expanding with Taylor's formula:
$$\frac{\partial MSE(\Delta x,\Delta y)}{\partial(\Delta x,\Delta y)}=-\frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\left[f_t(x,y)-f_{t+\Delta t}(x,y)-\left(\frac{\partial f_{t+\Delta t}}{\partial\Delta x},\frac{\partial f_{t+\Delta t}}{\partial\Delta y}\right)\cdot(\Delta x,\Delta y)\right]\cdot\left(\frac{\partial f_{t+\Delta t}}{\partial\Delta x},\frac{\partial f_{t+\Delta t}}{\partial\Delta y}\right)\qquad(5)$$
Let $f=f_t(x,y)-f_{t+\Delta t}(x,y)$ and let
$$\nabla f=\left[\frac{\partial f_{t+\Delta t}}{\partial\Delta x},\frac{\partial f_{t+\Delta t}}{\partial\Delta y}\right]^T$$
be the gradient at pixel $(\Delta x,\Delta y)$; (5) then simplifies to
$$\frac{\partial MSE(\Delta x,\Delta y)}{\partial(\Delta x,\Delta y)}=-\frac{2}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\left[f-\nabla f^T\cdot(\Delta x,\Delta y)\right]\cdot\nabla f^T\qquad(6)$$
Since $\nabla f\cdot\nabla f^T=1$, the above further simplifies to
$$\frac{MN}{2}\left[\frac{\partial MSE(\Delta x,\Delta y)}{\partial(\Delta x,\Delta y)}\right]^T=\sum_{m=1}^{M}\sum_{n=1}^{N}\nabla f^T\cdot(\Delta x,\Delta y)-\sum_{m=1}^{M}\sum_{n=1}^{N}f\qquad(7)$$
Let $U=\sum_{m=1}^{M}\sum_{n=1}^{N}\nabla f^T$ and $V=\sum_{m=1}^{M}\sum_{n=1}^{N}f$; the optimum matching point is then obtained as $opt=(\Delta x,\Delta y)=U^{-1}V$.
By finding and matching feature points in the image in this way, the moving target of the image sequence is detected.
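As a practical counterpart to this derivation, the sketch below uses OpenCV's pyramidal Lucas-Kanade tracker, which solves the same per-window least-squares matching problem as the closed-form solution opt = U^{-1}V above; the function name, the corner-detector settings, and the motion threshold are illustrative assumptions, not values from the patent.

```python
# Sketch of step 2: find feature points in the previous frame, match them in
# the next frame, and keep the points whose sparse optical flow indicates motion.
import cv2
import numpy as np

def detect_moving_points(prev_gray, next_gray, motion_thresh=1.0):
    """Return the matched feature points that moved between adjacent frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2), np.float32)
    # Pyramidal Lucas-Kanade: a per-window least-squares match over an M x N window.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None,
                                              winSize=(15, 15), maxLevel=2)
    ok = status.ravel() == 1
    prev_pts = pts[ok].reshape(-1, 2)
    next_pts = nxt[ok].reshape(-1, 2)
    flow = np.linalg.norm(next_pts - prev_pts, axis=1)   # sparse optical flow magnitude
    return next_pts[flow > motion_thresh]                # moving-target candidates
```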
Step 3: Using the effective features of the target, estimate the motion state of the target with the particle filter algorithm and predict the parameters of the moving target in subsequent frames to complete the tracking process.
First the particles are initialized: the initial target block is located and the particle template is obtained (by manual initialization, automatic initialization, and so on), after which the initial state of the target is obtained, namely the state $P_{init}=(P_{init,x},P_{init,y})$ at the moment the target first appears. Take the particle number as $N$ (each particle represents one possible motion state) and set the initial weight of each particle to $w_i=1$; there are then $N$ possible motion-state parameters $P_i=(P_{i,x},P_{i,y})$, $i\in\{1,\dots,N\}$, where the $P_i$ can be chosen as points within a certain range around $P_{init}$.
Then carry out the particle resampling process: eliminate particles with smaller weights and keep particles with larger weights.
Finally, set the number of iterations and enter the iterative process of the particle filter algorithm. From the second frame onward, perform a system state transition and a system observation for each particle, compute the particle weights, and take the weighted sum of all particles as the estimated target state.
State transition: for particle $N_i$,
$$P^i_{x,t}=A_1P^i_{x,t-1}+B_1w^i_{t-1}\qquad(8)$$
$$P^i_{y,t}=A_2P^i_{y,t-1}+B_2w^i_{t-1}\qquad(9)$$
where $A_1,A_2,B_1,B_2$ are constants; $A$ is generally taken as 1, $B$ is the particle propagation radius (the range over which a particle can move during the system state transition), and $w$ is a random number in $[-1,1]$.
System observation: after each particle's state transition, compute $MAD_i$ from the particle's new coordinates and take the probability density function as
$$p(z_k|x_k^i)=\exp\left\{-\frac{1}{2\sigma^2}MAD_i\right\}\qquad(10)$$
where $\sigma$ is a constant and $MAD$ is the minimum mean absolute difference function
$$MAD(i,j)=\frac{1}{M\times N}\sum_{m=1}^{M}\sum_{n=1}^{N}\left|T(m,n)-F(m+i,n+j)\right|$$
The weight of each particle is then
$$w_k^i=w_{k-1}^i\,p(z_k|x_k^i)\qquad(11)$$
Normalization:
$$w_k^i=w_k^i\Big/\sum_{i=1}^{N}w_k^i\qquad(12)$$
For the further optimal estimate, assuming the posterior probability at time $t$ is known, the tracking parameter $P$ can be expressed as
$$p_{x,t}^{opt}=\sum_{i=1}^{N}w_ip_x^i,\qquad p_{y,t}^{opt}=\sum_{i=1}^{N}w_ip_y^i\qquad(13)$$
Then set $t=t+1$ and return to the resampling step.
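A minimal, self-contained sketch of this particle filter iteration under the model above, using NumPy only; the `mad` helper, the default values of B and sigma, and the template-patch comparison are illustrative assumptions rather than the patent's exact implementation.

```python
# One particle-filter iteration: resample, state transition (A = 1),
# systematic observation via MAD, weight normalization, weighted estimate.
import numpy as np

def mad(template, frame, x, y):
    """Mean absolute difference between the template and the frame patch at (x, y)."""
    h, w = template.shape
    if x < 0 or y < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
        return np.inf                                    # particle left the image
    patch = frame[y:y + h, x:x + w].astype(np.float32)
    return float(np.mean(np.abs(patch - template.astype(np.float32))))

def particle_filter_step(particles, weights, template, frame,
                         B=10.0, sigma=8.0, rng=None):
    """particles: (N, 2) array of (x, y) states; weights: (N,) array summing to 1."""
    rng = rng or np.random.default_rng()
    N = len(particles)
    # Resampling: particles with larger weights survive, smaller ones are eliminated.
    idx = rng.choice(N, size=N, p=weights / weights.sum())
    particles = particles[idx]
    # State transition, eqs. (8)-(9): A = 1, B = propagation radius, w ~ U[-1, 1].
    particles = particles + B * rng.uniform(-1.0, 1.0, size=particles.shape)
    # Systematic observation, eqs. (10)-(11): weight by exp(-MAD_i / (2 sigma^2)).
    mads = np.array([mad(template, frame, int(px), int(py)) for px, py in particles])
    weights = np.exp(-mads / (2.0 * sigma ** 2))
    weights = weights / (weights.sum() + 1e-12)          # normalization, eq. (12)
    estimate = (weights[:, None] * particles).sum(axis=0)  # weighted estimate, eq. (13)
    return particles, weights, estimate
```

A tracker would initialize `particles` around P_init (for example `P_init + rng.uniform(-B, B, (N, 2))`) with uniform weights `np.ones(N) / N`, then call `particle_filter_step` once per frame and read the estimated target position from `estimate`.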

Claims (3)

1. A dynamic target identification and localization method based on omnidirectional vision, comprising the following steps:
Step 1: acquire an omnidirectional vision sequence image and preprocess it to obtain a binary image in which object and background are distinguished;
Step 2: perform a local-area search with the optical flow method, match feature points between adjacent frames of the image, and detect the moving target of the image sequence;
Step 3: estimate the motion state of the target with a particle filter algorithm and predict the motion-state parameters of the moving target in subsequent frames to complete the tracking process.
2. The dynamic target identification and localization method based on omnidirectional vision according to claim 1, wherein step 2 therein is carried out according to the following method: let the moving-image function $f(x,y)$ be continuous in the variables $x$ and $y$; at time $t$, a point $a=(x,y)$ on the image has gray value $f_t(x,y)$; at time $t+\Delta t$ the point moves to a new position, its position on the image becoming $(x+\Delta x,y+\Delta y)$, with gray value denoted $f_{t+\Delta t}(x+\Delta x,y+\Delta y)$; the purpose of matching is to find the point corresponding to $a$ such that $f_t(x,y)=f_{t+\Delta t}(x+\Delta x,y+\Delta y)$ and such that, within a chosen $M\times N$ neighborhood of $a=(x,y)$, the mean square error $MSE(\Delta x,\Delta y)$ is minimized; the $(\Delta x,\Delta y)$ that minimizes $MSE(\Delta x,\Delta y)$ is the optimum matching point $opt=(\Delta x,\Delta y)$;
let $f=f_t(x,y)-f_{t+\Delta t}(x,y)$ and let $\nabla f=\left[\frac{\partial f_{t+\Delta t}}{\partial\Delta x},\frac{\partial f_{t+\Delta t}}{\partial\Delta y}\right]^T$ be the gradient at pixel $(\Delta x,\Delta y)$; then
$$\frac{MN}{2}\left[\frac{\partial MSE(\Delta x,\Delta y)}{\partial(\Delta x,\Delta y)}\right]^T=\sum_{m=1}^{M}\sum_{n=1}^{N}\nabla f^T\cdot(\Delta x,\Delta y)-\sum_{m=1}^{M}\sum_{n=1}^{N}f$$
let $U=\sum_{m=1}^{M}\sum_{n=1}^{N}\nabla f^T$ and $V=\sum_{m=1}^{M}\sum_{n=1}^{N}f$;
the optimum matching point is obtained as $opt=(\Delta x,\Delta y)=U^{-1}V$; by finding and matching feature points in the image, the moving target of the image sequence is detected.
3. The dynamic target identification and localization method based on omnidirectional vision according to claim 1, wherein step 3 therein is carried out according to the following method:
(1) according to the result of step 2, locate the initial target and obtain its initial motion parameters $P_{init}=(P_{init,x},P_{init,y})$; let each particle represent one possible motion state, take the particle number as $N$ and the initial weight of each particle as $w_i=1$; there are then $N$ possible motion-state parameters $P_i=(P_{i,x},P_{i,y})$, $i\in\{1,\dots,N\}$;
(2) carry out the particle resampling process: eliminate particles with smaller weights and keep particles with larger weights;
(3) enter the iterative process of the particle filter algorithm: from the second frame onward, perform a system state transition and a system observation for each particle, compute the particle weights, and take the weighted sum of all particles as the estimated target state, completing the tracking process.
CN2009102285809A 2009-11-13 2009-11-13 Omnibearing vision based method for identifying and positioning dynamic target Expired - Fee Related CN101714256B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009102285809A CN101714256B (en) 2009-11-13 2009-11-13 Omnibearing vision based method for identifying and positioning dynamic target


Publications (2)

Publication Number Publication Date
CN101714256A CN101714256A (en) 2010-05-26
CN101714256B true CN101714256B (en) 2011-12-14

Family

ID=42417873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009102285809A Expired - Fee Related CN101714256B (en) 2009-11-13 2009-11-13 Omnibearing vision based method for identifying and positioning dynamic target

Country Status (1)

Country Link
CN (1) CN101714256B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9311708B2 (en) * 2014-04-23 2016-04-12 Microsoft Technology Licensing, Llc Collaborative alignment of images
CN102110297B (en) * 2011-03-02 2012-10-10 无锡慧眼电子科技有限公司 Detection method based on accumulated light stream and double-background filtration
CN103426184B (en) 2013-08-01 2016-08-10 华为技术有限公司 A kind of optical flow tracking method and apparatus
CN104778677B (en) * 2014-01-13 2019-02-05 联想(北京)有限公司 A kind of localization method, device and equipment
CN106483577B (en) * 2015-09-01 2019-03-12 中国航天科工集团第四研究院指挥自动化技术研发与应用中心 A kind of optical detecting gear
CN105975911B (en) * 2016-04-28 2019-04-19 大连民族大学 Energy-aware based on filter moves well-marked target detection method
CN106447696B (en) * 2016-09-29 2017-08-25 郑州轻工业学院 A kind of big displacement target sparse tracking that locomotion evaluation is flowed based on two-way SIFT
CN106950985B (en) * 2017-03-20 2020-07-03 成都通甲优博科技有限责任公司 Automatic delivery method and device
CN107065866A (en) * 2017-03-24 2017-08-18 北京工业大学 A kind of Mobile Robotics Navigation method based on improvement optical flow algorithm
CN107764271B (en) * 2017-11-15 2023-09-26 华南理工大学 Visible light visual dynamic positioning method and system based on optical flow
CN108053446A (en) * 2017-12-11 2018-05-18 北京奇虎科技有限公司 Localization method, device and electronic equipment based on cloud
CN108920997A (en) * 2018-04-10 2018-11-30 国网浙江省电力有限公司信息通信分公司 Judge that non-rigid targets whether there is the tracking blocked based on profile
CN109255329B (en) * 2018-09-07 2020-04-21 百度在线网络技术(北京)有限公司 Method and device for determining head posture, storage medium and terminal equipment
CN111147763B (en) * 2019-12-29 2022-03-01 眸芯科技(上海)有限公司 Image processing method based on gray value and application
CN111951949B (en) * 2020-01-21 2021-11-09 武汉博科国泰信息技术有限公司 Intelligent nursing interaction system for intelligent ward
CN114347030A (en) * 2022-01-13 2022-04-15 中通服创立信息科技有限责任公司 Robot vision following method and vision following robot
CN115962783B (en) * 2023-03-16 2023-06-02 太原理工大学 Positioning method of cutting head of heading machine and heading machine


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111214

Termination date: 20141113

EXPY Termination of patent right or utility model