CN105389829A - Low-complexity dynamic object detecting and tracking method based on embedded processor - Google Patents

Low-complexity dynamic object detecting and tracking method based on embedded processor

Info

Publication number
CN105389829A
Authority
CN
China
Prior art keywords
tracking
low
background model
target
embedded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510666653.8A
Other languages
Chinese (zh)
Inventor
陈彩莲
李昕宇
杨博
关新平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201510666653.8A priority Critical patent/CN105389829A/en
Publication of CN105389829A publication Critical patent/CN105389829A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a low-complexity dynamic target detection and tracking method based on an embedded processor. The background model is established by extracting a sample set from the first image frame, which reduces the computation needed for background modelling and, because fewer frames are required, improves the real-time performance of the algorithm. By combining a conservative strategy with a more active strategy when updating the background model, the method improves detection efficiency and reduces the probability of false detection, thereby raising the video target tracking efficiency of the embedded processor and the efficiency of the whole video sensor network. The method further uses cooperative comparison-based detection across multiple nodes: shared information takes part in the computation and the load of each node is balanced. Good tracking performance and high real-time performance are obtained, and the method can run on embedded processors with relatively low computing power, showing good robustness, real-time performance and low complexity.

Description

Low-complexity dynamic target detection and tracking method based on an embedded processor
Technical field
The present invention relates to the field of dynamic target detection and tracking, and in particular to a method for video dynamic target tracking on embedded processors with limited computing power.
Background technology
With the continued penetration of computer vision into embedded applications, video processing has made great progress over the last decade or so. As one of its important branches, video tracking is widely used in intelligent video surveillance, video-based human-computer interaction, medical imaging and other fields. In recent years, embedded image processors have attracted attention for their efficiency and low power consumption, so implementing video processing algorithms on embedded image processors has high practical value. Meanwhile, a surveillance network composed of multiple embedded image processing devices can greatly improve tracking efficiency through information exchange and reduce monitoring cost, laying a foundation for intelligent surveillance and meeting the requirements of low-carbon, environmentally friendly operation.
In recent years, the rapid development of communication, embedded and sensor technologies has brought micro-sensors with sensing, computing and communication capabilities into view. Networks formed by such sensors can cooperatively sense, acquire and process information about the environment or monitored targets within the network coverage and distribute it to the users who need it. Sensor networks merge digital information with the physical world and profoundly change the way people interact with nature. They are widely used in military, industrial and agricultural control, biomedical, environmental monitoring and many other fields. Compared with a single temperature or humidity sensor, a network of many sensors can receive and analyse much richer information and better reflect the actual conditions in a region. Technologies based on video sensor networks are therefore a promising research direction with broad prospects.
Target tracking with video sensors is an important direction. Since 2003, research institutions and related organisations at home and abroad have successively begun research on multimedia sensors, including video sensors. Well-known universities such as the University of California, Portland State University and Carnegie Mellon University have set up video sensor network research groups and launched related research programmes. Chinese scholars have also paid great attention to research on video sensor networks; the Key Laboratory of Intelligent Communication Software and Multimedia at Beijing University of Posts and Telecommunications, Harbin Institute of Technology and the Institute of Computing Technology of the Chinese Academy of Sciences have begun exploring this field.
In recent years, the continuously improving processing power of embedded processors has made it possible to use video sensors, which carry much richer information, as data sources. Compared with traditional sensors, a video sensor carries a large amount of information and covers a wider monitoring range, so it can detect and track many targets that could not be monitored before. Easy deployment is another major advantage: flexible deployment, a wide monitoring range and rich information are its main strengths.
However, the computing power of a node is limited, its energy supply is scarce, and large amounts of information cannot be transmitted directly over the network; a node must perform a certain amount of computation before sending. These factors limit the development of video sensors.
According to the tracking approach, video tracking techniques can roughly be divided into the following four classes:
The first class is region-based tracking. Such algorithms first determine a template containing the target to be tracked, usually rectangular or irregularly shaped, and then track the target with a correlation criterion, most commonly SSD. Combined with prediction algorithms such as Kalman filtering, the target position can be estimated in every video frame. The advantage is good tracking performance when the target is not occluded; the main drawbacks are that it is time-consuming, that the target must deform little, and that severe occlusion cannot be handled.
The second class is feature-based tracking. These algorithms use features such as straight lines, curves and corner points; the objects of correlation are one or several local features of the target. Typically the Canny operator is used to obtain the edge features of the target, or the SUSAN operator to obtain its corner information. Such algorithms are relatively insensitive to occlusion: as long as some local features remain visible, tracking can succeed. The difficulty lies in how to solve the motion initialisation problem during tracking.
The third class is active-contour-based tracking. The active contour model is based on the Snake model; the contour is represented by an energy-minimising parametric curve, and the energy, a functional of the curve, is minimised by dynamic iteration so that the contour updates automatically. The computation of such algorithms is small: if each moving target can be separated and its contour properly initialised at the start, tracking can continue even under partial occlusion, but initialising the contour is often difficult.
The fourth class is model-based tracking. This approach uses prior knowledge of the appearance and shape of a specific object to build an object model, and then matches and updates the model in subsequent video frames. Line drawings, two-dimensional contours and three-dimensional models are the three common ways of modelling the target. In practice, however, it is difficult to obtain accurate geometric models of all moving targets, which limits the use of model-based tracking; moreover, the many parameters to compute and the large amount of computation consume considerable time, making it hard to meet real-time tracking requirements.
As can be seen from the above, in applications that require video tracking, especially those using embedded processors with low computing power, the real-time requirements on the algorithm are high, and the second and third classes of methods meet them. Both, however, suffer from difficult initialisation; if the initialisation problem cannot be solved well, tracking has no starting point. Existing initialisation schemes mostly build the complete model from features extracted over multiple frames, which strongly degrades real-time performance. Processing multi-frame data is unfavourable for embedded processors, especially in video sensor networks that require frequent switching. The prior art therefore lacks a tracking method that combines good performance with real-time operation, and those skilled in the art are devoted to developing such a method for embedded processors with low computing power.
Summary of the invention
In view of the above defects of the prior art, the technical problem to be solved by the present invention is how to obtain, on an embedded processor with low computing power, a tracking method that combines good performance with real-time operation.
To achieve the above object, the invention provides a low-complexity dynamic target detection and tracking method based on an embedded processor, comprising the following steps:
A. video input;
B. video frame pre-processing;
C. establishing a background model;
D. updating the background model;
E. obtaining the target centre coordinates;
F. locating and tracking.
Further, step A comprises:
A1. initialising the video device, i.e. enabling the video input device;
A2. initialising image input on the processor side, ready to acquire images;
A3. the processor sending an instruction to the video input device to start capturing images.
Further, step C comprises:
C1. establishing a background model sample set for each pixel.
Further, step D comprises:
D1. comparing each pixel of the next frame image with the pixels in the background model; if the result is greater than the given threshold, counting one match and incrementing the match count; after every element in the background model has been compared, computing the match count: if it is less than the given threshold the pixel is considered dynamic foreground, and if it is greater than the given threshold it is considered background.
Further, step E comprises:
E1. for the foreground in each frame, taking its centre as the centre coordinates of the target.
Further, step F comprises:
F1. processing the obtained coordinates and sending the result to another node that is observing the target at the same time; after that node completes its computation it returns the relevant values, the final coordinates are computed, and tracking is achieved.
The object of the present invention is to provide a low-complexity video tracking method suitable for embedded processors; the method improves the efficiency with which an embedded processor performs video target tracking and thereby enhances the efficiency of the whole video sensor network.
The low-complexity dynamic target detection and tracking method based on an embedded processor provided by the present invention establishes the background model by extracting a sample set from the first image frame. On the one hand this reduces the computation of background modelling; on the other hand the reduced number of frames improves the real-time performance of the algorithm. Because the method combines a conservative strategy with a more active strategy in updating the background model, it improves detection efficiency and reduces the probability of false detection.
The method of the invention has the following technical effects:
1. By using both a conservative strategy and a more active strategy in background model updating, the invention reduces the error rate of dynamic target detection and improves the real-time performance of detection;
2. Multiple nodes cooperate in a comparative detection mode; shared information takes part in the computation and the load of each node is balanced;
3. Simulation and experimental results show that the invention achieves good tracking performance and high real-time performance, can run on embedded processors with low computing power, and shows good robustness, real-time performance and low complexity.
The concept, concrete structure and technical effects of the present invention are further described below with reference to the accompanying drawings, so that the objects, features and effects of the invention may be fully understood.
Brief description of the drawings
Fig. 1 is the flow chart of a preferred embodiment of the present invention;
Fig. 2 illustrates the video input format of a preferred embodiment of the present invention;
Fig. 3 is a schematic diagram of the environment designed for a typical indoor scenario in a preferred embodiment of the present invention;
Fig. 4 is a schematic diagram of the angle calculation in a preferred embodiment of the present invention.
Detailed description of the preferred embodiment
There are many constraints when designing algorithms for the embedded side, and the biggest problem is achieving the goal with limited computing resources. Although the video tracking algorithms usable on a PC have been simplified considerably, they still cannot be used directly on embedded hardware and must be adapted to the characteristics of the embedded processor. How to track and locate the target with limited computing resources therefore becomes the biggest challenge.
100. Video tracking algorithm
Dynamic target tracking with an embedded processor can be divided into three parts: image input, target detection and dynamic tracking. The overall flow is shown in Fig. 1.
The image input part imports data frame by frame from the camera and pre-processes the raw data, for example with Gaussian smoothing, to reduce image noise and blur that would interfere with subsequent processing.
The target detection part first establishes a background model; the model can be built from the first frame alone, and detection then proceeds frame by frame. From the second input frame onwards the background model is updated, and the dynamic foreground is detected during the update process.
The dynamic tracking part obtains the centre coordinates of the dynamic target from the foreground information and displays them on the screen; importing the next frame after the current one has been processed closes the loop and realises continuous dynamic tracking.
110. Image input
The image input part must provide the signal source needed by the subsequent processing, i.e. a stable, clear image source of suitable size. The video format captured by an analogue video input device is usually YUV422, a colour coding format whose layout is shown in Fig. 2.
Each pixel occupies 16 bits; Y is the luminance component and U, V are the chrominance components. Within each 4-byte word, the first byte stores the luminance Y0 of the first pixel, the second byte stores the chrominance U0, the third byte stores the luminance Y1 of the second pixel, and the last byte stores the chrominance V0, and so on. For target tracking the colour part is not needed; it suffices to extract the luminance of each frame. For the YUV422 encoding, therefore, interval sampling extracts the luminance of each frame, namely the first and third byte of every word.
The extracted image is Gaussian filtered to reduce the interference of noise; this algorithm uses a 7 × 7 Gaussian kernel for the smoothing.
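As an illustration of the input handling just described, the sketch below (Python with NumPy and OpenCV, used here only for convenience; function names are illustrative, not part of the patent) extracts the luminance plane from a packed YUYV buffer by interval sampling and then applies 7 × 7 Gaussian smoothing.

```python
import numpy as np
import cv2


def extract_luminance_yuyv(raw: bytes, width: int, height: int) -> np.ndarray:
    """Interval sampling of a packed YUV422 (YUYV) buffer.

    Each 4-byte word holds Y0 U0 Y1 V0, so the luminance of every pixel
    sits at the even byte offsets (the 1st and 3rd byte of every word).
    """
    buf = np.frombuffer(raw, dtype=np.uint8)
    y = buf[0::2]                        # take bytes 0, 2, 4, ... = Y0, Y1, ...
    return y.reshape(height, width).copy()


def preprocess(frame_y: np.ndarray) -> np.ndarray:
    """7 x 7 Gaussian smoothing to suppress sensor noise before detection."""
    return cv2.GaussianBlur(frame_y, (7, 7), 0)
```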
120. Target detection
Target detection requires building a background model of the monitored environment and then updating it, so that dynamic targets can be detected.
In the background model building stage the ViBe algorithm is improved. The original algorithm establishes a sample set for each pixel of the first frame. In the algorithm described here, each pixel's sample set contains 20 samples, each sample being selected at random from the 8-neighbourhood of the current pixel. It should be noted that OpenCV has a built-in random number generator, which makes ViBe convenient to use, but an embedded processor has no comparable facility, and generating random numbers with the C language's built-in seeded generator would greatly slow the system down. The algorithm here therefore performs random sampling with a random number table: random numbers generated in advance are stored in the embedded processor as an array, and whenever a random sample is needed the elements of the table are read in order and used as indices into the image array sample set, achieving the effect of random selection. The length of the table should be large enough that the selections remain random over long runs, but not so long that it increases the storage pressure on the system; it should be adjusted as needed.
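A minimal sketch of this initialisation step, assuming grayscale frames stored as NumPy arrays; the table of precomputed neighbour indices stands in for the random number table kept on the embedded processor, and all names are illustrative:

```python
import numpy as np

N_SAMPLES = 20                                   # 20 samples per pixel, as above
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1),
              (0, -1),           (0, 1),
              (1, -1),  (1, 0),  (1, 1)]         # the 8-neighbourhood offsets
RANDOM_TABLE = np.random.randint(0, 8, 4096)     # stand-in for a precomputed table


def init_background_model(first_frame: np.ndarray) -> np.ndarray:
    """Build the per-pixel sample set from the first frame only.

    Every sample of a pixel is copied from one of its 8 neighbours, the
    neighbour being picked by stepping through the fixed random table, so
    no run-time random number generator is needed. Plain loops are used
    here for clarity, not speed.
    """
    h, w = first_frame.shape
    padded = np.pad(first_frame, 1, mode='edge')        # replicate the border
    model = np.empty((N_SAMPLES, h, w), dtype=first_frame.dtype)
    idx = 0
    for s in range(N_SAMPLES):
        for y in range(h):
            for x in range(w):
                dy, dx = NEIGHBOURS[RANDOM_TABLE[idx % len(RANDOM_TABLE)]]
                idx += 1
                model[s, y, x] = padded[y + 1 + dy, x + 1 + dx]
    return model
```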
The background model update stage consists of two steps: detecting the model matching state and counting the number of matched samples. Detecting the matching state means comparing the current pixel with its background model, i.e. with the sample set of that pixel, to decide whether it matches; counting means counting the number of matching samples. After initialisation, from the second frame onwards every pixel is scanned and compared with the N sample values in the background sample set of the pixel at the same position. The comparison takes the absolute difference of the grey values and compares the result with a preset matching threshold R. If the result is smaller than R, the pixel has one match and the match count is incremented. The matched count n0(x, y) is then accumulated and compared with a preset minimum match count #min. If the final match count is smaller than #min the pixel is a foreground point; correspondingly, if it is greater than or equal to #min the pixel is a background point, as shown in the following formula:

$$M_0(x,y) = \begin{cases} 1, & n_0(x,y) < \#\mathrm{min} \\ 0, & n_0(x,y) \geq \#\mathrm{min} \end{cases}$$

where M0(x, y) = 1 means that point (x, y) of frame t is considered a foreground point and M0(x, y) = 0 means it is considered a background point. The resulting M0(x, y) is a binarised foreground detection image.
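The matching and counting step can be sketched as follows; the threshold values R and #min below are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

R = 20            # matching threshold on |pixel - sample| (illustrative value)
MIN_MATCHES = 2   # the minimum match count '#min' (illustrative value)


def detect_foreground(frame: np.ndarray, model: np.ndarray) -> np.ndarray:
    """Return the binary mask M0: 1 = dynamic foreground, 0 = background.

    A pixel matches a sample when the absolute grey-level difference is
    below R; it is declared background only if it matches at least
    MIN_MATCHES of its N background samples.
    """
    diff = np.abs(model.astype(np.int16) - frame.astype(np.int16))  # (N, H, W)
    n0 = (diff < R).sum(axis=0)                                     # match counts n0(x, y)
    return (n0 < MIN_MATCHES).astype(np.uint8)                      # M0(x, y)
```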
It should be noted that, when deciding which sample value in the sample set to replace, the ViBe algorithm selects the sample at random, which allows a smooth transition between two different sample values. Because the update is random, the probability that a given sample value of the current pixel is not updated at time t is (N − 1)/N. Assuming time is continuous, the probability that this sample value has still not been updated after a further time dt is

$$P(t, t+dt) = \left(\frac{N-1}{N}\right)^{(t+dt)-t}$$

which, rewritten in exponential form, is

$$P(t, t+dt) = e^{-\ln\left(\frac{N}{N-1}\right)\,dt}$$
130. Dynamic tracking
Once the dynamic target has been detected, the position information of the foreground is available, namely all points with M0(x, y) = 1. Assuming that the centroid of the target lies inside the target, all points with M0(x, y) = 1 are accumulated and the arithmetic means of their pixel abscissae and ordinates are computed:

$$c_x = \frac{\sum_{i=0}^{N} x_i}{N}, \qquad c_y = \frac{\sum_{i=0}^{N} y_i}{N}$$

where the coordinates (x, y) satisfy M0(x, y) = 1.
This completes the detection and tracking of a single target.
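A short sketch of this centroid computation, assuming the binary mask produced by the detection step above:

```python
import numpy as np


def target_centre(mask: np.ndarray):
    """Arithmetic mean of all foreground pixel coordinates, i.e. (cx, cy)."""
    ys, xs = np.nonzero(mask)          # all points with M0(x, y) = 1
    if xs.size == 0:
        return None                    # no moving target detected in this frame
    return float(xs.mean()), float(ys.mean())
```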
300. Dual-sensor localisation algorithm
For an indoor scene a video sensor network can be built as shown in Fig. 3; for simplicity two video sensors are considered. The figure shows the top view of a room: two identical image sensors are placed in corners of the room, sensor 1 at point B and sensor 2 at point C. The fields of view of the two cameras overlap, and the isosceles trapezoid BCHG is the observed area. When the target appears in the region covered by both camera fields of view at the same time, only the target depth needs to be computed to locate the target. The concrete steps are: calibrating the parameters, computing the initial data, transferring the parameters, continuing the computation, and verification.
310. Calibrating the parameters
To allow the embedded processors to compute correct results, the placement of the sensors and the field-of-view data must be calibrated so that the system error is minimal. The data to be calibrated in the whole system are the camera field-of-view angles ∠GBC and ∠HCB, the distance i between the two sensors, and the mapping between the coordinates at which a real-world target projects onto the camera CCD plane and the image coordinates in the video sensor (i.e. the real-world position corresponding to each pixel).
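The calibration data can be summarised in a small structure like the following sketch (field names are illustrative):

```python
from dataclasses import dataclass


@dataclass
class Calibration:
    """Quantities fixed when the two sensors are installed."""
    fov_b: float       # angle GBC: horizontal field of view of sensor 1 (radians)
    fov_c: float       # angle HCB: horizontal field of view of sensor 2 (radians)
    baseline: float    # i: distance between the two sensors
    image_width: int   # pixels along the long edge MN of each camera image
```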
320. Computing the initial data
To obtain the target's coordinates in the real world, the target depth must be computed, which concretely means computing the length l of line segment BF. This requires solving triangle BFC. Among the existing data, the camera field-of-view angles ∠GBC and ∠HCB and the distance i between the two sensors are known quantities; in triangle BFC the three angles α1, α2 and β and the side lengths l and k still need to be solved.
Sensor 1 can compute α1 quickly, while α2 must be computed by sensor 2 and then transmitted, as shown in Fig. 4.
As shown in Fig. 4, the sector BCG is the field of view of camera 1, segment MN is the long edge of the camera CCD plane, i.e. the long edge of the whole image, point F is the target position, L is the intersection of line BF with arc MN, and K is the intersection of line BF with segment MN, i.e. the image of the target on the CCD. From geometry:

$$\frac{\alpha_1}{\angle GBC} = \frac{\overset{\frown}{ML}}{\overset{\frown}{MN}}$$

that is, the ratio of arc ML to arc MN equals the ratio of α1 to ∠GBC in radians. Because in practice the CCD is very small, we have

$$\frac{\overset{\frown}{ML}}{\overset{\frown}{MN}} \approx \frac{MK}{MN}$$

so α1 can be obtained approximately from the position of the target in the image:

$$\alpha_1 = \frac{MK}{MN} \cdot \angle GBC$$

In the same way, sensor 2 can compute α2.
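In code the approximation above reduces to a single proportion; the sketch below assumes the target's column in the image has already been obtained from the detected centre:

```python
def bearing_angle(target_column: float, image_width: int, fov: float) -> float:
    """Approximate the angle alpha from the target's position in the image.

    MK/MN is taken as the fraction of the image width at which the target
    appears, so alpha ~= (MK / MN) * angle GBC.
    """
    return (target_column / image_width) * fov
```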
330. Transferring the parameters
The parameters α1 and α2 are exchanged between the two video sensors so that each side can continue the computation.
340. Computing the result
The computation continues with the transferred parameters and the final result is obtained. For sensor 1, the law of sines in triangle BFC gives:

$$l = \frac{i}{\sin\beta} \cdot \sin\alpha_2$$

from which the position of the target in the world coordinate system can be computed, and the absolute position of the target is uniquely determined. In the same way the result for sensor 2 is obtained:

$$k = \frac{i}{\sin\beta} \cdot \sin\alpha_1$$

so the position of the target with respect to sensor 2 is also determined.
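A sketch of this final triangulation step, assuming the two angles have already been exchanged between the nodes; it simply applies the law of sines to triangle BFC:

```python
import math


def localise(alpha1: float, alpha2: float, baseline: float):
    """Solve triangle BFC for the two target distances.

    alpha1, alpha2: bearing angles measured by sensors 1 and 2 (radians);
    baseline: the distance i = BC between the sensors.
    Returns (l, k), the distances from B and from C to the target F.
    """
    beta = math.pi - alpha1 - alpha2                    # remaining angle at F
    l = baseline / math.sin(beta) * math.sin(alpha2)    # BF, depth seen by sensor 1
    k = baseline / math.sin(beta) * math.sin(alpha1)    # CF, depth seen by sensor 2
    return l, k
```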
The preferred embodiments of the present invention have been described in detail above. It should be understood that a person of ordinary skill in the art can make many modifications and variations according to the concept of the present invention without creative work. Therefore, all technical solutions that can be obtained by those skilled in the art through logical analysis, reasoning or limited experiments on the basis of the prior art and under the concept of the present invention shall fall within the protection scope determined by the claims.

Claims (6)

1. A low-complexity dynamic target detection and tracking method based on an embedded processor, characterized in that it comprises the following steps:
A. video input;
B. video frame pre-processing;
C. establishing a background model;
D. updating the background model;
E. obtaining the target centre coordinates;
F. locating and tracking.
2. The low-complexity dynamic target detection and tracking method based on an embedded processor according to claim 1, characterized in that step A comprises:
A1. initialising the video device, i.e. enabling the video input device;
A2. initialising image input on the processor side, ready to acquire images;
A3. the processor sending an instruction to the video input device to start capturing images.
3. The low-complexity dynamic target detection and tracking method based on an embedded processor according to claim 1, characterized in that step C comprises:
C1. establishing a background model sample set for each pixel.
4. The low-complexity dynamic target detection and tracking method based on an embedded processor according to claim 1, characterized in that step D comprises:
D1. comparing each pixel of the next frame image with the pixels in the background model; if the result is greater than the given threshold, counting one match and incrementing the match count; after every element in the background model has been compared, computing its match count: if the count is less than the given threshold the pixel is considered dynamic foreground, and if it is greater than the given threshold it is considered background.
5. The low-complexity dynamic target detection and tracking method based on an embedded processor according to claim 1, characterized in that step E comprises:
E1. for the foreground in each frame, taking its centre as the centre coordinates of the target.
6. The low-complexity dynamic target detection and tracking method based on an embedded processor according to claim 1, characterized in that step F comprises:
F1. processing the obtained coordinates and sending the result to another node that is observing the target at the same time; after that node completes its computation it returns the relevant values, the final coordinates are computed, and tracking is achieved.
CN201510666653.8A 2015-10-15 2015-10-15 Low-complexity dynamic object detecting and tracking method based on embedded processor Pending CN105389829A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510666653.8A CN105389829A (en) 2015-10-15 2015-10-15 Low-complexity dynamic object detecting and tracking method based on embedded processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510666653.8A CN105389829A (en) 2015-10-15 2015-10-15 Low-complexity dynamic object detecting and tracking method based on embedded processor

Publications (1)

Publication Number Publication Date
CN105389829A true CN105389829A (en) 2016-03-09

Family

ID=55422078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510666653.8A Pending CN105389829A (en) 2015-10-15 2015-10-15 Low-complexity dynamic object detecting and tracking method based on embedded processor

Country Status (1)

Country Link
CN (1) CN105389829A (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100208998A1 (en) * 2007-07-08 2010-08-19 Marc Van Droogenbroeck Visual background extractor
CN103150730A (en) * 2013-03-07 2013-06-12 南京航空航天大学 Round small target accurate detection method based on image
CN103179401A (en) * 2013-03-19 2013-06-26 燕山大学 Processing method and device for multi-agent cooperative video capturing and image stitching
CN103839279A (en) * 2014-03-18 2014-06-04 湖州师范学院 Adhesion object segmentation method based on VIBE in object detection
CN104063885A (en) * 2014-07-23 2014-09-24 山东建筑大学 Improved movement target detecting and tracking method
CN104535047A (en) * 2014-09-19 2015-04-22 燕山大学 Multi-agent target tracking global positioning system and method based on video stitching

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SEBASTIAN BRUTZER 等: "Evaluation of Background Subtraction Techniques for Video Surveillance", 《COMPUTER VISION AND PATTERN RECOGNITION》 *
单志龙 et al.: "A strongly adaptive mobile node localization algorithm using a grey prediction model", Journal of Electronics & Information Technology *
吴剑舞 et al.: "A moving object detection method based on improved ViBe", Computer and Modernization *
孙水发 et al.: "Morphologically improved ViBe algorithm for foreground detection in outdoor video", Computer Engineering and Applications *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110610514A (en) * 2018-06-15 2019-12-24 株式会社日立制作所 Method, device and electronic equipment for realizing multi-target tracking
CN110610514B (en) * 2018-06-15 2023-09-19 株式会社日立制作所 Method, device and electronic equipment for realizing multi-target tracking
CN109558831A (en) * 2018-11-27 2019-04-02 成都索贝数码科技股份有限公司 Cross-camera pedestrian localization method fusing a spatio-temporal model
CN109558831B (en) * 2018-11-27 2023-04-07 成都索贝数码科技股份有限公司 Cross-camera pedestrian positioning method fused with space-time model

Similar Documents

Publication Publication Date Title
Sun et al. Gesture recognition based on kinect and sEMG signal fusion
CN111444828B (en) Model training method, target detection method, device and storage medium
Huang et al. Retracted: Jointly network image processing: Multi‐task image semantic segmentation of indoor scene based on CNN
Zhang et al. A new writing experience: Finger writing in the air using a kinect sensor
CN103389699B (en) Based on the supervisory control of robot of distributed intelligence Monitoring and Controlling node and the operation method of autonomous system
Marin-Jimenez et al. 3D human pose estimation from depth maps using a deep combination of poses
CN107943837A (en) A kind of video abstraction generating method of foreground target key frame
CN102270348B (en) Method for tracking deformable hand gesture based on video streaming
CN103968845B (en) A kind of DSP Yu FPGA parallel multi-mode star image processing method for star sensor
CN103150020A (en) Three-dimensional finger control operation method and system
CN103105924B (en) Man-machine interaction method and device
CN104317391A (en) Stereoscopic vision-based three-dimensional palm posture recognition interactive method and system
CN102184551A (en) Automatic target tracking method and system by combining multi-characteristic matching and particle filtering
CN107885871A (en) Synchronous superposition method, system, interactive system based on cloud computing
Kong et al. A hybrid framework for automatic joint detection of human poses in depth frames
CN104217428A (en) Video monitoring multi-target tracking method for fusion feature matching and data association
CN103065131A (en) Method and system of automatic target recognition tracking under complex scene
CN102853830A (en) Robot vision navigation method based on general object recognition
CN104167006A (en) Gesture tracking method of any hand shape
CN103577792A (en) Device and method for estimating body posture
CN111862030A (en) Face synthetic image detection method and device, electronic equipment and storage medium
Zhang et al. 3D human pose estimation from range images with depth difference and geodesic distance
CN105389829A (en) Low-complexity dynamic object detecting and tracking method based on embedded processor
CN109189219A (en) The implementation method of contactless virtual mouse based on gesture identification
Bao et al. A new approach to hand tracking and gesture recognition by a new feature type and HMM

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160309