CN107197199A - An intelligent monitoring device and target tracking method - Google Patents

An intelligent monitoring device and target tracking method

Info

Publication number
CN107197199A
CN107197199A (application CN201710362550.1A)
Authority
CN
China
Prior art keywords
target
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710362550.1A
Other languages
Chinese (zh)
Inventor
管凤旭
车浩
严浙平
张宏瀚
周佳加
刘怀东
周丽萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201710362550.1A priority Critical patent/CN107197199A/en
Publication of CN107197199A publication Critical patent/CN107197199A/en
Pending legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an intelligent monitoring device and a target tracking method. The intelligent monitoring device comprises a pan-tilt unit, a video acquisition unit, a video stream control unit and a server unit. When the kernelized correlation filter tracking method encounters target occlusion or model change, the target tracking method dynamically updates the learning rate according to the situation, effectively remedying the poor tracking performance of kernelized correlation filter tracking under occlusion and model change, and thereby achieving better target tracking. With the present invention, intelligent monitoring and tracking of targets in complex scenes can be realized. The method is more stable in long-term tracking and adapts better to occlusion and fast model change, so the tracking performance is effectively improved.

Description

An intelligent monitoring device and target tracking method
Technical field
The present invention relates to a moving-target tracking device. The present invention also relates to a moving-target tracking method, specifically an intelligent monitoring device and a target tracking method.
Background technology
A video surveillance system consists of a camera section, an image transmission section, a system control section and a display/recording section, and is used in many areas of society. It applies specific processing to the images acquired by the camera to realize its application value. In recent years, with the rapid improvement and development of computer, network, storage and chip technologies, and especially the application of streaming-media service technology in surveillance systems, wired and wireless remote video technology has achieved important breakthroughs. It is widely used in public places and has made outstanding contributions to deterring crime, providing evidence and protecting people's property.
For example, in a residential building, security surveillance is deployed according to the design standards and requirements of digital video surveillance systems, covering issues such as the positions of the control points, the transmission mode, image processing and pan-tilt control. The video from multiple digital surveillance clients is then uploaded to a network server in real time, where subsequent intelligent processing of the video is carried out, thereby realizing an intelligent video surveillance system. Among the many intelligent video and computer-vision methods, moving-target tracking is a key link: it continuously obtains the latest image of the target and is the prerequisite of various other intelligent video processing methods, such as target behavior analysis and activity recognition.
At present, there are many algorithms for moving-target detection and tracking that solve practical problems in surveillance, such as the kernelized correlation filter tracking algorithm. The kernelized correlation filter tracking algorithm is a refinement of the CSK tracking algorithm. CSK exploits the properties of circulant matrices to simplify computation and uses a dense sampling scheme; although it collects all the data of the target, it only uses grey-level features, whose representational power for the target is insufficient. The essential improvement of the kernelized correlation filter tracking algorithm is to describe the target with multi-channel HOG (Histogram of Oriented Gradients) features and to fuse the features with kernel methods, so the resulting classifier has an outstanding ability to represent the target. Most algorithms treat different samples in a binary fashion, using 1 and 0 to mark positive and negative samples respectively. This cannot effectively express the importance of different samples; samples at different distances from the target should be treated differently. The kernelized correlation filter tracking algorithm instead uses values in the interval [0, 1] as regression targets: a larger value means closer to the target and a smaller value means farther away, so samples at different offsets receive different weights, and experiments show that this representation yields better results. The kernelized correlation filter tracking algorithm copes well with various unfavorable factors, such as non-rigid deformation, cluttered background and illumination change, but its performance is poor when there is heavy occlusion or fast model change.
Summary of the invention
The object of the present invention is to provide an intelligent monitoring device that realizes intelligent monitoring and tracking of targets in complex scenes with better adaptability, stability and tracking performance. The present invention also aims to provide a target tracking method based on the intelligent monitoring device.
The intelligent monitoring device of the present invention comprises a pan-tilt unit, a video acquisition unit, a video stream control unit and a server unit. The pan-tilt unit is a six-degree-of-freedom pan-tilt. The video acquisition unit is a camera, fixed on the pan-tilt and connected to an embedded image processing platform for video acquisition. The video stream control unit is an embedded image processing platform: one end is connected to the camera to transmit video images, and the other end is connected to the pan-tilt to receive the control commands of the server unit and control the motion of the pan-tilt. The server unit receives the images, runs the adaptive-learning-rate kernelized correlation filter tracking method, and issues pan-tilt control instructions to the embedded image processing platform.
The target tracking method based on the intelligent monitoring device of the present invention is:
(1) acquire a frame of video and judge whether it is the first frame; if it is, train the classifier, update it, and return to acquire the next frame; if it is not, determine the position of the target in the image with the latest updated classifier and continue to step (2);
(2) compare the maximum of the filter response with the update threshold θ: if the maximum exceeds θ, go to step (3), otherwise go to step (5);
(3) detect target occlusion and model change;
(4) according to the occlusion and model-change situation, adjust the learning rate, combine the situation of all previous frames to form a new classifier update formula, update the classifier and the haar features, and return to step (1);
(5) if the maximum of the filter response is below θ, correct the target position using the haar feature matching method in the SURF algorithm: if the target is matched, take the currently matched position as the target position of the current frame and return to step (1); if the match fails, continue target feature matching until the target is matched.
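The five steps above can be sketched as one dispatch per frame. This is a minimal structural sketch: the callback names and the value of θ are illustrative assumptions, not figures from the patent.

```python
def track_frame(frame_idx, detect, response_max, check_change,
                update_classifier, relocate, theta=0.25):
    """One iteration of the five-step tracking loop.

    theta is the update threshold; its value here is an illustrative
    assumption, not a figure taken from the patent.
    """
    if frame_idx == 0:
        update_classifier()        # step (1), first frame: train the classifier
        return "train"
    detect()                       # step (1): locate the target with the latest classifier
    if response_max() > theta:     # step (2): compare the response maximum with theta
        check_change()             # step (3): detect occlusion / model change
        update_classifier()        # step (4): adaptive update of classifier and haar features
        return "update"
    return relocate()              # step (5): haar/SURF position correction
```

The heavy lifting (detection, response computation, matching) sits behind the injected callbacks, so only the branching structure of the method is shown.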
1. The method of judging the target occlusion situation is as follows.
The response of the kernel correlation filter is $\hat{f}$ and its maximum is $\max(\hat{f})$; the sample has width d and height l. By the occlusion detection algorithm, the position of $\max(\hat{f})$ matched in each frame is $M_{max}$. Given an occlusion threshold $\lambda_1$ ($0<\lambda_1<1$) and an area factor $\lambda_2$ ($0<\lambda_2<1$), collect all positions $M_i$ around $M_{max}$ whose response exceeds $\lambda_1\max(\hat{f})$, and compute the Euclidean distances $dis_i=\|M_{max}-M_i\|$ from every $M_i$ to $M_{max}$; when $\sum_i dis_i>\lambda_2 dl$ the target is judged occluded, otherwise it is not occluded.
In the experiments $\lambda_1$ is taken as 0.8 and $\lambda_2$ as 0.3.
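Under one plausible reading of the criterion above (positions with response above $\lambda_1\max(\hat{f})$ are collected, and occlusion is declared when the sum of their distances to the peak exceeds $\lambda_2 dl$), the occlusion test can be sketched as:

```python
import math

def is_occluded(response, lam1=0.8, lam2=0.3):
    """Judge occlusion from a d-by-l filter response map (list of rows).

    Positions whose response exceeds lam1 * max are collected; occlusion is
    declared when the sum of their Euclidean distances to the peak exceeds
    lam2 * d * l. lam1 = 0.8 and lam2 = 0.3 follow the values in the text.
    """
    d, l = len(response), len(response[0])
    # locate the peak of the response map
    m_max, peak = max(
        ((response[i][j], (i, j)) for i in range(d) for j in range(l)),
        key=lambda t: t[0],
    )
    # distances from every strong secondary position to the peak
    dists = [
        math.hypot(peak[0] - i, peak[1] - j)
        for i in range(d) for j in range(l)
        if (i, j) != peak and response[i][j] > lam1 * m_max
    ]
    return sum(dists) > lam2 * d * l
```

A sharp, unimodal response map (no secondary positions above $\lambda_1\max(\hat{f})$) is judged unoccluded, while a flat response map, typical of an occluded target, is judged occluded.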
2. The method of judging the target model change situation is
$$R=\sum_{i,j}^{d,l}\frac{\left|x_{ij}^{p}-x_{ij}^{p-1}\right|}{m}$$
where, for the d-by-l image block sampled for the target appearance model in each frame, m represents the number of pixels in the block, $x_{ij}$ represents the grey value of each pixel ($0\le i\le d-1$, $0\le j\le l-1$), and p indexes the frame. R represents the difference between two adjacent frames and reflects the change of the target appearance model: the larger R is, the larger the model change and the higher the learning rate needed to obtain a good tracking effect; conversely, a small R corresponds to a smaller η.
The actual value of R lies between 0 and 255.
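The statistic R is simply the mean absolute grey-level difference between the appearance samples of two consecutive frames, which is why it stays in [0, 255] for 8-bit grey values; a minimal sketch:

```python
def appearance_change(prev, curr):
    """Compute R: the mean absolute grey-level difference between the
    appearance samples (d-by-l grey images, as lists of rows) of two
    consecutive frames. With 8-bit grey values, R lies in [0, 255]."""
    d, l = len(curr), len(curr[0])
    m = d * l                      # number of pixels in the image block
    return sum(
        abs(curr[i][j] - prev[i][j]) for i in range(d) for j in range(l)
    ) / m
```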
3. The method of dynamically changing the learning rate of the model update formula is:
Q denotes the influence factor of the learning rate η, i.e. the larger Q is, the larger the learning rate should be adjusted to. Setting η = Q, Q is expressed by the following formula,
$$Q=\begin{cases}0.01, & J<1,\ 0\le R<2.5\\ 0.025, & J<1,\ 2.5\le R<8\\ 0.04, & J<1,\ 8\le R\\ \dfrac{0.01}{J}, & J\ge 1\end{cases}$$
where R is the difference between two adjacent frames and J is the occlusion statistic defined above.
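The piecewise values of Q given in the claims (0.01, 0.025 or 0.04 for J < 1 depending on R, and 0.01/J for J ≥ 1) can be written directly; a minimal sketch:

```python
def learning_rate_factor(J, R):
    """Piecewise influence factor Q (eta = Q): a larger appearance change R
    asks for a larger learning rate, while occlusion (J >= 1) shrinks it."""
    if J >= 1:
        return 0.01 / J           # occluded: learn more slowly
    if R < 2.5:
        return 0.01               # small appearance change
    if R < 8:
        return 0.025              # moderate appearance change
    return 0.04                   # large appearance change
```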
The invention provides a device for intelligent monitoring, and also provides an adaptive-learning-rate kernelized correlation filter tracking method for intelligent monitoring. When the kernelized correlation filter tracking method encounters target occlusion or model change, this method dynamically updates the learning rate according to the situation, effectively remedying the poor tracking performance of kernelized correlation filter tracking under target occlusion and model change, and achieving better target tracking.
The invention has the following advantages:
1. With the present invention, intelligent monitoring and tracking of targets in complex scenes can be realized.
2. The improved adaptive-update kernelized correlation filtering method of the present invention adapts better to occlusion and fast model change, and is more stable in long-term tracking, so the tracking performance is effectively improved.
Brief description of the drawings
Fig. 1 is a block diagram of the monitoring device system.
Fig. 2 is the algorithm processing flow chart.
Fig. 3 is the map of the video field-of-view regions.
Fig. 4(a)-Fig. 4(f) show the tracking results of the original algorithm.
Fig. 5(a)-Fig. 5(f) show the tracking results of the improved algorithm.
Embodiment
The present invention is described in more detail below.
With reference to Fig. 1, the intelligent monitoring device of the invention is implemented as follows:
Step 1: the camera acquires images and transmits the acquired images to the embedded image processing platform.
The image acquisition function of the camera is realized on the V4L2 framework in four steps:
1) initialize the parameters of the USB camera;
2) request frame buffer space from the driver and map it into user space;
3) capture and lightly process the video;
4) close the USB camera device.
Step 2: the embedded image processing platform transmits the images to the server.
The image transmission process is:
1) H.264 software encoding and decoding based on FFmpeg
FFmpeg is a suite that can record and process digital audio and video and convert them into streams; the FFmpeg libraries are chosen here to realize video encoding, decoding and transcoding. For developers this approach is concise: with the third-party FFmpeg libraries, cross-platform program development is completed more easily, and porting and upgrading are more convenient. In the present invention the FFmpeg libraries are used to compress each frame of video data into H.264 format.
2) Video transmission based on the jrtplib library
The information encoded into H.264 format is sent over the RTP protocol, and the jrtplib library provides the user with good RTP protocol support. RTP works together with UDP: video and audio are carried on top of the TCP/IP stack, while the RTCP protocol monitors transmission quality and performs congestion control in real time. The present invention packs the collected H.264 bitstream into packets according to the requirements of the RTP protocol, and then transmits the RTP packets to the server through the API functions provided by the jrtplib library.
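Packing a bitstream into RTP packets, as jrtplib does internally, amounts to prepending a 12-byte RTP header (RFC 3550) to each payload chunk; a minimal sketch, where payload type 96 is the customary dynamic type for H.264 and the SSRC value is an illustrative assumption:

```python
import struct

def rtp_packet(payload, seq, timestamp, ssrc=0x1234, payload_type=96, marker=0):
    """Prepend a minimal 12-byte RTP header (RFC 3550) to a payload chunk.

    payload_type 96 is the customary dynamic type for H.264; ssrc here is
    an arbitrary illustrative stream identifier.
    """
    b0 = 2 << 6                                 # version 2, no padding/extension, 0 CSRCs
    b1 = (marker << 7) | payload_type           # marker bit + payload type
    header = struct.pack("!BBHII", b0, b1,
                         seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc)
    return header + payload
```

The sequence number increments per packet and the timestamp follows the 90 kHz video clock; the receiver uses both to reorder packets and reconstruct frames.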
Step 3: through the API functions provided by the jrtplib library, the server receives the video stream information from the embedded image processing platform, decompresses the video stream with the FFmpeg libraries, then runs the adaptive-learning-rate kernelized correlation filter tracking method and sends the relevant information back to the embedded image processing platform.
Step 4: according to the information sent back by the server, the embedded image processing platform instructs the pan-tilt to perform the corresponding motion.
With reference to Fig. 2, the adaptive-learning-rate kernelized correlation filter tracking method of the present invention is realized in the following steps.
1. Acquire a frame of video and judge whether it is the first frame. If it is, train the classifier, update it, and return to acquire the next frame; if it is not, determine the position of the target in the image with the latest updated classifier and continue to step 2.
2. Compare the maximum of the filter response with the update threshold θ: if the maximum exceeds θ, go to step 3, otherwise go to step 5.
The maximum of the kernel correlation filter response is $\max(\hat{f})$, where $\hat{f}$ is the response of the kernel correlation filter.
3. Detect target occlusion and model change.
The method of judging the target occlusion situation is as follows. The response of the kernel correlation filter is $\hat{f}$ and its maximum is $\max(\hat{f})$; the sample has width d and height l. By the occlusion detection algorithm, the position of $\max(\hat{f})$ matched in each frame is $M_{max}$. Given an occlusion threshold $\lambda_1$ ($0<\lambda_1<1$) and an area factor $\lambda_2$ ($0<\lambda_2<1$), collect all positions $M_i$ around $M_{max}$ whose response exceeds $\lambda_1\max(\hat{f})$, and compute the Euclidean distances $dis_i=\|M_{max}-M_i\|$ from every $M_i$ to $M_{max}$; when $\sum_i dis_i>\lambda_2 dl$ the target is judged occluded, otherwise it is not occluded. In the experiments $\lambda_1$ is taken as 0.8 and $\lambda_2$ as 0.3; $\hat{\alpha}_t$ denotes the conversion coefficient of frame t. J is taken as
$$J=\frac{\sum_i dis_i}{\lambda_2 dl}$$
so that the occlusion criterion above reads J > 1.
The method of detecting the target model change situation is:
$$R=\sum_{i,j}^{d,l}\frac{\left|x_{ij}^{p}-x_{ij}^{p-1}\right|}{m}$$
where, for the d-by-l image block sampled for the target appearance model in each frame, m represents the number of pixels in the block, $x_{ij}$ represents the grey value of each pixel ($0\le i\le d-1$, $0\le j\le l-1$), and p indexes the frame; the actual value of R lies between 0 and 255. R represents the difference between two adjacent frames and reflects the change of the target appearance model: the larger R is, the larger the model change.
4. According to the occlusion and model-change situation, adjust the learning rate, combine the situation of all previous frames to form a new classifier update formula, update the classifier and the haar features, and return to step 1.
The specific method of learning-rate adjustment is as follows. The larger R is, the larger the model change and the higher the learning rate needed to obtain a good tracking effect; conversely, a small R corresponds to a smaller η.
The learning rate η expresses the learning ability of the target appearance model $\hat{x}$ for the current frame: the larger η is, the stronger the learning ability. For situations where the target model changes greatly, such as large non-rigid deformation, pose change and in-plane rotation of the object, a larger η gives a better tracking effect; for situations where the appearance model changes less, such as fast object motion, illumination change, camera view change and changes caused by the surroundings such as occlusion, a smaller η tracks better.
The present invention uses Q to denote the influence factor of the learning rate η, i.e. the larger Q is, the larger the learning rate should be adjusted to. Setting η = Q, Q is expressed by the following formula,
$$Q=\begin{cases}0.01, & J<1,\ 0\le R<2.5\\ 0.025, & J<1,\ 2.5\le R<8\\ 0.04, & J<1,\ 8\le R\\ \dfrac{0.01}{J}, & J\ge 1\end{cases}$$
The specific classifier update method is as follows.
In the original kernelized correlation filter tracking method, the target model consists of two parts: the transformed classifier parameter (conversion coefficient) $\hat{\alpha}$ and the target appearance model x. The model is updated with formulas (4) and (5):
$$\hat{\alpha}_t=(1-\eta)\hat{\alpha}_{t-1}+\eta\hat{\alpha}'_t \qquad (4)$$
$$\hat{x}_t=(1-\eta)\hat{x}_{t-1}+\eta x_t \qquad (5)$$
where η is the learning rate, normally chosen as 0.02; $\hat{\alpha}_{t-1}$ and $\hat{\alpha}_t$ are the updated conversion coefficients of frames t−1 and t respectively, $\hat{x}_{t-1}$ and $\hat{x}_t$ are the updated target appearance models of frames t−1 and t respectively, $x_t$ represents the model of frame t, and $\hat{\alpha}'_t$ represents the conversion coefficient computed from frame t.
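The update of formulas (4) and (5) is a linear interpolation of both model parts with the learning rate η; a minimal sketch over plain lists (the element-wise interpolation form is the standard kernelized-correlation-filter one, assumed here since the formula images are not part of this text):

```python
def update_model(alpha_prev, x_prev, alpha_new, x_new, eta=0.02):
    """Blend the conversion coefficients alpha and the appearance template x
    with learning rate eta (0.02 is the ordinary value quoted in the text).
    Both model parts are plain lists of numbers here for illustration."""
    alpha = [(1 - eta) * a + eta * b for a, b in zip(alpha_prev, alpha_new)]
    x = [(1 - eta) * a + eta * b for a, b in zip(x_prev, x_new)]
    return alpha, x
```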
In the original kernelized correlation filter tracker, the learning update of the target model and the conversion coefficient considers only the target in the current frame and not the previous frames, which substantially reduces the effectiveness of the update.
Following the MOSSE tracker, whose update takes all previous frames into account, the model is updated as follows. Let the target models from the first frame to frame t be $\{x_j: j=1,\dots,t\}$; the objective that the adaptive-learning-rate kernelized correlation filter tracking method solves is then modified accordingly (formula (6)), from which the conversion coefficient $\hat{\alpha}_t$ of frame t is obtained (formula (7)) as the ratio of an accumulated numerator $\hat{A}_t$ and denominator $\hat{B}_t$, where κ(·) is the kernel function of the original algorithm and $y_i$ is the regression value corresponding to $x_i$.
Let θ be the update threshold. When the maximum of the filter response exceeds the threshold, i.e. $\max(\hat{f})>\theta$, the target model is still updated with formulas (4) and (5), while the denominator $\hat{B}_t$ and numerator $\hat{A}_t$ of the current conversion coefficient $\hat{\alpha}_t$ are updated with formulas (8) and (9) respectively.
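The accumulator update behind formulas (8) and (9) follows the MOSSE style of exponentially averaging a numerator and a denominator; a hedged sketch (the exact terms of (8) and (9) are not quoted in the text, so the accumulated quantities here are illustrative):

```python
def accumulator_update(A_prev, B_prev, num_new, den_new, eta):
    """Exponentially average a per-element numerator A and denominator B,
    then form the conversion coefficient as their ratio, so every past
    frame contributes with geometrically decaying weight."""
    A = [eta * n + (1 - eta) * a for n, a in zip(num_new, A_prev)]
    B = [eta * d + (1 - eta) * b for d, b in zip(den_new, B_prev)]
    alpha = [a / b for a, b in zip(A, B)]   # conversion coefficient
    return A, B, alpha
```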
5. If the maximum of the filter response is below θ, correct the target position with the haar feature matching method of the SURF algorithm: if the target is matched, take the currently matched position as the target position of the current frame and return to step 1; if the match fails, continue target feature matching until the match succeeds.
The haar feature matching method of the SURF algorithm corrects the target position in the following steps:
(1) As shown in Fig. 3, when the target region lies entirely within the video display area, i.e. both coordinates fall within the coordinate range of the display area, the detected response maximum is compared with θ. When it exceeds θ, the model is updated with the original formulas (8) and (9) and the haar feature values are updated; when it is below θ, the position is re-established by haar feature matching, and the relocated target position is taken as the newest target position for continued tracking and classifier template updating;
(2) When the target centre coordinate moves beyond the display area, the update of the classifier model is stopped and SURF matching is used to find where the target reappears. Because SURF feature matching cannot be made fully real-time, it is necessary to wait until one frame has been matched before fetching the next; tests show that matching one frame every three or four frames achieves the expected effect. After the target position has been matched with the haar features, the classifier update is restarted, with the matched position taken as the target position coordinate of the latest frame.
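The throttled fallback described in (1) and (2) can be sketched as a per-frame decision; this is our sketch, with the real SURF/haar matcher hidden behind the `match_target` callback and the every-fourth-frame throttle following the three-or-four-frame observation above:

```python
def relocation_step(frame_idx, response_max, theta, match_target, update_classifier):
    """Decide, per frame, between normal tracking and SURF/haar relocation.

    match_target stands in for the real SURF/haar matcher and returns a
    position or None; matching runs only every fourth frame because SURF
    cannot keep up with real time.
    """
    if response_max >= theta:
        update_classifier()            # normal tracking: keep updating the model
        return "track"
    if frame_idx % 4 == 0:             # throttle the expensive matching step
        if match_target() is not None:
            update_classifier()        # restart updates at the recovered position
            return "relocated"
    return "searching"                 # keep trying on later frames
```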
The adaptive-learning-rate kernelized correlation filter tracking method effectively remedies the poor tracking performance of the original kernelized correlation filter tracking method under target occlusion and appearance-model change, and it also achieves good long-term tracking. As shown in Fig. 4(a)-Fig. 4(f): Figs. 4(a), 4(b) and 4(c) show the tracking results of the original kernelized correlation filter tracking method under occlusion, and Fig. 4(c) shows clearly that when the target is occluded the tracking fails and the target is lost. Figs. 4(d), 4(e) and 4(f) show the tracking results of the original kernelized correlation filter tracking method under appearance-model change; Figs. 4(e) and 4(f) show that tracking fails when the appearance model changes greatly. In Fig. 5(a)-Fig. 5(f): Figs. 5(a), 5(b) and 5(c) show the tracking results of the adaptive-learning-rate kernelized correlation filter tracking method under occlusion; Fig. 5(c) shows that even when the target is occluded the tracking remains good and the target is followed accurately, and compared with Fig. 4(c) the tracking ability is greatly improved. Figs. 5(d), 5(e) and 5(f) show the tracking results of the adaptive-learning-rate kernelized correlation filter tracking method under appearance-model change; Figs. 5(e) and 5(f) show that the improved tracking method tracks the target accurately under appearance-model change, and compared with Figs. 4(e) and 4(f) the tracking ability is substantially improved. The comparison of the figures therefore shows that the tracking performance of the improved target tracking method is clearly better than that of the original target tracking method.

Claims (6)

1. An intelligent monitoring device, characterized in that it comprises a pan-tilt unit, a video acquisition unit, a video stream control unit and a server unit; the pan-tilt unit is a six-degree-of-freedom pan-tilt; the video acquisition unit is a camera, fixed on the pan-tilt and connected to an embedded image processing platform for video acquisition; the video stream control unit is an embedded image processing platform, one end of which is connected to the camera to transmit video images, and the other end of which is connected to the pan-tilt to receive the control commands of the server unit and control the motion of the pan-tilt; the server unit receives the images, runs the adaptive-learning-rate kernelized correlation filter target tracking method, and issues pan-tilt control instructions to the embedded image processing platform.
2. A target tracking method based on the intelligent monitoring device of claim 1, characterized in that:
(1) a frame of video is acquired and judged whether it is the first frame; if it is, the classifier is trained and updated, and the method returns to acquire the next frame; if it is not, the position of the target in the image is determined with the latest updated classifier and the method continues to step (2);
(2) the maximum of the filter response is compared with the update threshold θ; if the maximum exceeds θ, step (3) is carried out, otherwise step (5);
(3) target occlusion and model change are detected;
(4) according to the occlusion and model-change situation, the learning rate is adjusted and combined with the situation of all previous frames to form a new classifier update formula; the classifier is updated, the haar features are updated, and the method returns to step (1);
(5) if the maximum of the filter response is below θ, the target position is corrected with the haar feature matching method in the SURF algorithm; if the target is matched, the currently matched position is taken as the target position of the current frame and the method returns to step (1); if the match fails, target feature matching continues until the target is matched.
3. The target tracking method according to claim 2, characterized in that the method of judging the target occlusion situation is:
the response of the kernel correlation filter is $\hat{f}$ and its maximum is $\max(\hat{f})$; the sample has width d and height l; by the occlusion detection algorithm, the position of $\max(\hat{f})$ matched in each frame is $M_{max}$; given an occlusion threshold $\lambda_1$ ($0<\lambda_1<1$) and an area factor $\lambda_2$ ($0<\lambda_2<1$), all positions $M_i$ around $M_{max}$ whose response exceeds $\lambda_1\max(\hat{f})$ are collected, and the Euclidean distances $dis_i=\|M_{max}-M_i\|$ from every $M_i$ to $M_{max}$ are computed; when $\sum_i dis_i>\lambda_2 dl$ the target is judged occluded, otherwise it is not occluded.
4. The target tracking method according to claim 2 or 3, characterized in that the method of judging the target model change situation is,
$$R=\sum_{i,j}^{d,l}\left(\frac{\left|x_{ij}^{p}-x_{ij}^{p-1}\right|}{m}\right)$$
where, for the d-by-l image block sampled for the target appearance model in each frame, m represents the number of pixels in the block, $x_{ij}$ represents the grey value of each pixel ($0\le i\le d-1$, $0\le j\le l-1$), and p indexes the frame; R represents the difference between two adjacent frames and reflects the change of the target appearance model: the larger R is, the larger the model change and the higher the learning rate needed to obtain a good tracking effect; conversely, a small R corresponds to a smaller η.
5. The target tracking method according to claim 2 or 3, characterized in that the method of dynamically changing the learning rate of the model update formula is:
Q denotes the influence factor of the learning rate η, i.e. the larger Q is, the larger the learning rate should be adjusted to; setting η = Q, Q is expressed by the following formula,
$$Q=\begin{cases}0.01, & J<1,\ 0\le R<2.5\\ 0.025, & J<1,\ 2.5\le R<8\\ 0.04, & J<1,\ 8\le R\\ \dfrac{0.01}{J}, & J\ge 1\end{cases}$$
Wherein R represents the difference between two adjacent frames of pictures.
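The piecewise rule of claim 5 can be sketched as follows (a hedged illustration: `learning_rate` is an invented name, and J is the measure defined in the preceding claims, which are not reproduced in this excerpt):

```python
def learning_rate(J, R):
    """Influence factor Q from claim 5; the learning rate is set
    to eta = Q.  R is the frame-difference measure between adjacent
    frames; J comes from the preceding claims (not shown here)."""
    if J >= 1:
        return 0.01 / J   # J >= 1
    if R < 2.5:
        return 0.01       # J < 1, 0 <= R < 2.5
    if R < 8:
        return 0.025      # J < 1, 2.5 <= R < 8
    return 0.04           # J < 1, R >= 8

print(learning_rate(0.5, 10.0))  # → 0.04
```

Each branch mirrors one row of the piecewise definition of Q, so a larger frame difference R (with J below 1) yields a larger learning rate.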
6. The target tracking method according to claim 4, characterized in that the dynamic adjustment of the learning rate in the model update formula is as follows:
Let Q denote the influence factor of the learning rate η: the larger Q is, the larger the learning rate should be adjusted to. Set η = Q, where Q is given by the following expression,
$$Q = \begin{cases} 0.01, & J < 1,\ 0 \le R < 2.5 \\ 0.025, & J < 1,\ 2.5 \le R < 8 \\ 0.04, & J < 1,\ 8 \le R \\ 0.01/J, & J \ge 1 \end{cases}$$
Wherein R represents the difference between two adjacent frames of pictures.
CN201710362550.1A 2017-05-22 2017-05-22 A kind of intelligent monitoring and controlling device and method for tracking target Pending CN107197199A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710362550.1A CN107197199A (en) 2017-05-22 2017-05-22 A kind of intelligent monitoring and controlling device and method for tracking target

Publications (1)

Publication Number Publication Date
CN107197199A true CN107197199A (en) 2017-09-22

Family

ID=59875461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710362550.1A Pending CN107197199A (en) 2017-05-22 2017-05-22 A kind of intelligent monitoring and controlling device and method for tracking target

Country Status (1)

Country Link
CN (1) CN107197199A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100594513B1 (en) * 2005-08-04 2006-06-30 한국전력공사 Image monitoring system connected with close range radar
CN102194234A (en) * 2010-03-03 2011-09-21 中国科学院自动化研究所 Image tracking method based on sequential particle swarm optimization
CN102256109A (en) * 2011-06-07 2011-11-23 上海芯启电子科技有限公司 Automatic tracking camera system for multiple targets and focusing method for system
CN106372590A (en) * 2016-08-29 2017-02-01 江苏科技大学 Sea surface ship intelligent tracking system and method based on machine vision

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YU Liyang et al.: "Improved kernelized correlation filter target tracking algorithm", Journal of Computer Applications *
CHANG Yun et al.: "Detection and tracking of fast-moving crowds based on a pan-tilt camera", Chinese Journal of Liquid Crystals and Displays *
ZHANG Lei: "Research on real-time target tracking algorithms and implementation techniques in complex scenes", China Doctoral Dissertations Full-text Database, Information Science and Technology *
ZHAO Lulu: "Research on target tracking algorithms based on correlation filtering", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697385A (en) * 2017-10-20 2019-04-30 中移(苏州)软件技术有限公司 A kind of method for tracking target and device
CN109753846A (en) * 2017-11-03 2019-05-14 北京深鉴智能科技有限公司 Target following system for implementing hardware and method
US10810746B2 (en) 2017-11-03 2020-10-20 Xilinx Technology Beijing Limited Target tracking hardware implementation system and method
CN108055501A (en) * 2017-11-22 2018-05-18 天津市亚安科技有限公司 A kind of target detection and the video monitoring system and method for tracking
CN108345885A (en) * 2018-01-18 2018-07-31 浙江大华技术股份有限公司 A kind of method and device of target occlusion detection
CN109597431A (en) * 2018-11-05 2019-04-09 视联动力信息技术股份有限公司 A kind of method and device of target following
CN109597431B (en) * 2018-11-05 2020-08-04 视联动力信息技术股份有限公司 Target tracking method and device
CN109660768A (en) * 2019-01-07 2019-04-19 哈尔滨理工大学 One kind being based on Embedded moving object detection intelligent video monitoring system
CN113822297A (en) * 2021-08-30 2021-12-21 北京工业大学 Device and method for identifying target of marine vessel
CN113822297B (en) * 2021-08-30 2024-03-01 北京工业大学 Marine ship target recognition device and method
CN117152258A (en) * 2023-11-01 2023-12-01 中国电建集团山东电力管道工程有限公司 Product positioning method and system for intelligent workshop of pipeline production
CN117152258B (en) * 2023-11-01 2024-01-30 中国电建集团山东电力管道工程有限公司 Product positioning method and system for intelligent workshop of pipeline production

Similar Documents

Publication Publication Date Title
CN107197199A (en) A kind of intelligent monitoring and controlling device and method for tracking target
US10699126B2 (en) Adaptive object detection and recognition
US10628961B2 (en) Object tracking for neural network systems
US11004209B2 (en) Methods and systems for applying complex object detection in a video analytics system
CN110135249B (en) Human behavior identification method based on time attention mechanism and LSTM (least Square TM)
WO2018188453A1 (en) Method for determining human face area, storage medium, and computer device
Wang et al. Enabling edge-cloud video analytics for robotics applications
US20200051250A1 (en) Target tracking method and device oriented to airborne-based monitoring scenarios
US6829391B2 (en) Adaptive resolution system and method for providing efficient low bit rate transmission of image data for distributed applications
US9317762B2 (en) Face recognition using depth based tracking
WO2020228766A1 (en) Target tracking method and system based on real scene modeling and intelligent recognition, and medium
US20170091953A1 (en) Real-time cascaded object recognition
CN112001347B (en) Action recognition method based on human skeleton morphology and detection target
CN108965687A (en) Shooting direction recognition methods, server and monitoring method, system and picture pick-up device
CN113536972B (en) Self-supervision cross-domain crowd counting method based on target domain pseudo label
CN109063581A (en) Enhanced Face datection and face tracking method and system for limited resources embedded vision system
WO2023273628A1 (en) Video loop recognition method and apparatus, computer device, and storage medium
CN114078275A (en) Expression recognition method and system and computer equipment
WO2022052782A1 (en) Image processing method and related device
CN102333221B (en) Panoramic background prediction video coding and decoding method
CN114266952A (en) Real-time semantic segmentation method based on deep supervision
CN105554040A (en) Remote video monitoring method and system
Hou et al. Real-time surveillance video salient object detection using collaborative cloud-edge deep reinforcement learning
CN111160262A (en) Portrait segmentation method fusing human body key point detection
CN116248861A (en) Intelligent video detection method, system and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination