CN108898057A - Tracking-target detection method and apparatus, computer device, and storage medium - Google Patents


Info

Publication number
CN108898057A
CN108898057A
Authority
CN
China
Prior art keywords
frame image
pixel
current frame
tracking target
threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810516188.3A
Other languages
Chinese (zh)
Other versions
CN108898057B (en)
Inventor
林凡
成杰
张秋镇
唐昌宇
杨峰
李盛阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GCI Science and Technology Co Ltd
Original Assignee
GCI Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GCI Science and Technology Co Ltd
Priority to CN201810516188.3A
Publication of CN108898057A
Application granted
Publication of CN108898057B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a tracking-target detection method and apparatus, a computer device, and a storage medium, applied in the field of unmanned (autonomous) driving. The method includes: obtaining a current frame image of a tracking target; determining a background model for the tracking target in the current frame image, and extracting the tracking target from the current frame image according to the background model, as a first target detection result; obtaining the edge pixels in the current frame image, and determining the tracking target in the current frame image according to the edge pixels, as a second target detection result; and obtaining the detection result of the tracking target according to a comparison of the first target detection result and the second target detection result. Embodiments of the present invention address the high complexity of existing tracking-target detection and improve its accuracy.

Description

Tracking-target detection method and apparatus, computer device, and storage medium
Technical field
The present invention relates to the field of unmanned (autonomous) driving, and in particular to a tracking-target detection method and apparatus, a computer device, and a storage medium.
Background technique
With the development of science and technology and the progress of society, unmanned driving will be an important research direction for future automobiles, aircraft, and the like.
While an automobile or aircraft is moving, recognition of the tracking target is the foundation of unmanned-driving technology, and the recognition accuracy for the tracking target directly affects how human-friendly and safe unmanned driving is. At present, the main method for recognizing a tracking target is to track the vehicle with image-pyramid optical flow: an image pyramid is built, and the vehicle's trajectory is generated from the exact positions of the vehicle feature points in each layer of the image pyramid. In implementing the present invention, the inventors found the following problem in the prior art: this method is computationally intensive and its detection of the tracking target lags, which is unfavorable for handling sudden driving situations in real time.
Summary of the invention
In view of the detection lag of existing tracking-target detection approaches, it is therefore necessary to provide a tracking-target detection method and apparatus, a computer device, and a storage medium.
According to a first aspect of the present invention, a tracking-target detection method is provided, including:
obtaining a current frame image of a tracking target;
determining a background model for the tracking target in the current frame image, and extracting the tracking target from the current frame image according to the background model, as a first target detection result;
obtaining the edge pixels in the current frame image, and determining the tracking target in the current frame image according to the edge pixels, as a second target detection result;
obtaining a detection result of the tracking target according to a comparison of the first target detection result and the second target detection result.
In one embodiment, determining the background model for the tracking target in the current frame image includes:
determining the pixels of the current frame image that have changed relative to the previous frame image;
updating the background model corresponding to the previous frame image according to the changed pixels, the updated background model serving as the background model for the tracking target in the current frame image.
In one embodiment, before updating the background model corresponding to the previous frame image according to the changed pixels, the method further includes:
obtaining a history video of the tracking target, and obtaining consecutive historical frame images from the history video;
clustering the pixels in each historical frame image, and determining an initial background model according to the clustering results of the multiple historical frame images, as the background model corresponding to the first frame image.
In one embodiment, clustering the pixels in each historical frame image includes:
for each pixel in a historical frame image, computing the minimum distance D_min between its pixel value and the existing cluster centers;
if D_min is greater than a distance threshold T_a, creating a new cluster for the pixel, with the pixel's value as the center of the new cluster;
if D_min < T_a, assigning the pixel to the corresponding existing cluster, incrementing that cluster's pixel count by 1, and updating the center of that cluster.
In one embodiment, determining the initial background model according to the clustering results of the multiple historical frame images includes:
from the clustering results of the multiple historical frame images, selecting the clusters whose pixel count is greater than or equal to a set threshold, and determining the initial background model from the selected clusters.
In one embodiment, the background model corresponding to the previous frame image is updated using the following formula:

B_k(x, y) = α·I_k(x, y) + (1 − α)·B_{k−1}(x, y)

where B_k(x, y) is a pixel of the updated background model, B_{k−1}(x, y) is the corresponding pixel of the background model of the previous frame image, I_k(x, y) is a pixel of the current frame image that has changed relative to the previous frame image, and the coefficient 0 < α < 1 is the preset update rate of the background model.
In one embodiment, before the step of obtaining the edge pixels in the current frame image, the method further includes:
determining the gradient direction and gradient magnitude of each pixel in the current frame image, and detecting, according to its gradient direction and gradient magnitude, whether a pixel is an edge pixel or a non-edge pixel.
In one embodiment, before the step of obtaining the edge pixels in the current frame image, the method further includes:
filtering the current frame image to obtain a smoothed image of the current frame image.
In one embodiment, the step of determining the tracking target in the current frame image according to the edge pixels includes:
obtaining a first edge-point set and a second edge-point set detected from the current frame image using a first threshold and a second threshold, respectively, where the first threshold is greater than the second threshold;
connecting the edge points of the first edge-point set and, upon reaching an endpoint, finding edge points from the second edge-point set to continue the connection; when the connection is complete, the image of the tracking target in the current frame image is obtained.
In one embodiment, before obtaining the first and second edge-point sets detected from the current frame image using the first and second thresholds, the method further includes:
determining the first threshold and the second threshold according to a predetermined optimal gray-level segmentation threshold, where the first threshold is greater than the optimal gray-level segmentation threshold and the second threshold is less than it;
the optimal gray-level segmentation threshold satisfies the condition that, when the pixels of the current frame image are classified using it, the variance between the two resulting classification sets is maximal.
In one embodiment, the method further includes:
determining the gray levels of the pixels in the current frame image;
dividing the pixels of the current frame image into two classification sets according to a gray-level segmentation threshold, corresponding respectively to a tracking-target classification set and a background classification set;
computing the variance between the tracking-target classification set and the background classification set;
adjusting the gray-level segmentation threshold within the range 0 to L, computing the corresponding variance, and determining the maximum of the variances, where L denotes the maximum gray level of the pixels in the current frame image;
taking the gray-level segmentation threshold corresponding to the maximum variance as the optimal gray-level segmentation threshold.
In one embodiment, the step of comparing the first target detection result and the second target detection result to obtain the detection result of the tracking target includes:
determining the intersection of the first target detection result and the second target detection result, and obtaining the detection result of the tracking target from that intersection.
According to a second aspect of the present invention, a tracking-target detection apparatus is provided, including:
an image acquisition module for obtaining a current frame image of a tracking target;
a first detection module for determining a background model for the tracking target in the current frame image and extracting the tracking target from the current frame image according to the background model, as a first target detection result;
a second detection module for obtaining the edge pixels in the current frame image and determining the tracking target in the current frame image according to the edge pixels, as a second target detection result; and
a comparison detection module for obtaining a detection result of the tracking target according to a comparison of the first target detection result and the second target detection result.
According to a third aspect of the present invention, a computer device is provided, including a memory and a processor; the memory stores a computer program, and when the computer program is executed by the processor, the processor implements the tracking-target detection method described above.
According to a fourth aspect of the present invention, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the processor implements the tracking-target detection method described above.
With the embodiments provided by the present invention, when the tracking target needs to be recognized, the current frame image of the tracking target is obtained in real time. On one hand, a background model for the tracking target in the current frame image is determined and the tracking target is extracted from the current frame image according to the background model, as the first target detection result. On the other hand, the edge pixels in the current frame image are obtained and the tracking target in the current frame image is determined from them, as the second target detection result. On the basis of these two aspects, the detection result of the tracking target is obtained by comparing the first target detection result and the second target detection result. The tracking target can thus be identified accurately; moreover, the two detections can be performed simultaneously, and since both have low complexity, this helps overcome the lag of target-tracking detection.
Brief description of the drawings
Fig. 1 is a diagram of the system architecture to which the tracking-target detection method of one embodiment is applicable;
Fig. 2 is a schematic flowchart of the tracking-target detection method of an embodiment;
Fig. 3 is a schematic flowchart of determining the first target detection result in an embodiment;
Fig. 4 is a schematic flowchart of determining the second target detection result in an embodiment;
Fig. 5 is a schematic flowchart of determining the optimal gray-level segmentation threshold in an embodiment;
Fig. 6 is a schematic diagram of the tracking-target detection apparatus of an embodiment;
Fig. 7 is an internal structure diagram of a computer device in one embodiment.
Detailed description of embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to illustrate the present invention, not to limit it.
As used herein, "embodiment" means that a particular feature, structure, or characteristic described in conjunction with the embodiment may be included in at least one embodiment of the application. Occurrences of the phrase at various places in the description do not necessarily all refer to the same embodiment, nor to separate or alternative embodiments mutually exclusive of the others. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The tracking-target detection method provided by this application is applicable to the system architecture shown in Fig. 1, where the mobile device may be a smart locomotive, an autonomous vehicle, an unmanned aerial vehicle, or the like. The mobile device is provided with a camera mechanism, a console, and a driving mechanism. The mobile device moves by means of the driving mechanism, and while it moves the camera mechanism can capture in real time the video or images of the tracking target in front of the mobile device. The console receives the images from the camera mechanism, detects the image of the current tracking target, and identifies the state of the current tracking target; it can also issue corresponding control instructions to the driving mechanism based on that state, so as to adjust the moving state of the mobile device, including but not limited to its moving direction and speed.
It should be noted that the console may be an individual processor or a set of multiple processors. For example, on an autonomous vehicle the console may be the set of an image processor and an in-vehicle controller; on an unmanned aerial vehicle, the console may be the set of an image processor and a flight-control processor.
In one embodiment, as shown in Fig. 2, a tracking-target detection method is provided. Taking its application to the above mobile device as an example, the method includes the following steps:
S101: obtain a current frame image of the tracking target.
In this embodiment of the present invention, the video of the tracking target may be obtained in real time, yielding real-time frame images; alternatively, the current frame image of the tracking target may be captured at a set shooting interval.
It should be noted that, in the embodiments of the present invention, the tracking target is any target in front of the mobile device along its moving direction; it may be a pedestrian ahead or a vehicle ahead. The embodiments of the present invention are described taking an autonomous vehicle as the mobile device and a vehicle ahead as the corresponding tracking target.
S102: determine a background model for the tracking target in the current frame image, and extract the tracking target from the current frame image according to the background model, as the first target detection result.
In this embodiment, the background model refers to the external environment in which the tracking target is located; for an autonomous vehicle, for example, the image information in the current frame other than the vehicle ahead can be understood as the background model.
S103: obtain the edge pixels in the current frame image, and determine the tracking target in the current frame image according to the edge pixels, as the second target detection result.
In this embodiment, the edge pixels in the current frame image may be obtained with existing image-detection techniques, which are not limited here. By connecting the edge pixels, the image information of the tracking target in the current frame image can be obtained.
S104: obtain the detection result of the tracking target according to the comparison of the first target detection result and the second target detection result.
Here the two detection results can compensate for and correct each other, yielding the image of the tracking target in the current frame image; compared with the traditional way of detecting the tracking target in a frame image, this improves detection accuracy. Moreover, the first and second target detection results can be determined simultaneously, and neither algorithm is very complex, which overcomes the lag of target-tracking detection as a whole.
In one embodiment, determining the background model for the tracking target in the current frame image includes: determining the pixels of the current frame image that have changed relative to the previous frame image; and updating the background model corresponding to the previous frame image according to the changed pixels, the updated background model serving as the background model for the tracking target in the current frame image. For example, the background model corresponding to the previous frame image can be updated according to the following formula:

B_k(x, y) = α·I_k(x, y) + (1 − α)·B_{k−1}(x, y)

where B_k(x, y) is a pixel of the updated background model, B_{k−1}(x, y) is the corresponding pixel of the background model of the previous frame image, I_k(x, y) is a pixel of the current frame image that has changed relative to the previous frame image, and the coefficient 0 < α < 1 is the preset update rate of the background model.
On the basis of the background model corresponding to the previous frame image, and because the pixel change between two consecutive frames is small, the background model can be updated from the changed pixels alone. The background model for the tracking target in the current frame image is thus obtained quickly, improving the efficiency of determining the background model while reducing the computational complexity of determining it.
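As an illustrative sketch (not the patented implementation), the running-average background update described above, applied only at pixels flagged as changed, can be written in plain Python. The function name, the boolean change mask, and the default update rate are assumptions made for the example; a real system would typically use NumPy or OpenCV arrays rather than nested lists:

```python
def update_background(prev_bg, cur_frame, changed, alpha=0.05):
    """Running-average update: where a pixel changed,
    new = alpha * current + (1 - alpha) * previous; elsewhere the
    previous background value is kept.

    prev_bg, cur_frame: 2-D lists of gray values; changed: 2-D list of bools.
    """
    rows, cols = len(prev_bg), len(prev_bg[0])
    new_bg = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            if changed[y][x]:
                new_bg[y][x] = alpha * cur_frame[y][x] + (1 - alpha) * prev_bg[y][x]
            else:
                new_bg[y][x] = prev_bg[y][x]
    return new_bg
```

A small alpha makes the background adapt slowly, which suppresses transient foreground objects; the 0.05 default here is purely illustrative.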
In one embodiment, after the background model corresponding to the current frame image is obtained, the tracking target can be extracted from the current frame image according to the difference between the current frame image and the background model. This approach has a low algorithmic complexity, which helps reduce the lag of tracking-target recognition.
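The difference-based extraction just described amounts to simple background subtraction. A minimal sketch, in which the function name and the threshold value of 25 are illustrative assumptions not taken from the patent:

```python
def extract_target(frame, background, diff_threshold=25):
    """Foreground mask by background subtraction: a pixel is marked as
    belonging to the tracked target when |I_k - B_k| exceeds the threshold."""
    return [[abs(f - b) > diff_threshold for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]
```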
The background-model determination of the above embodiment builds on the existing background model, so the accuracy of the constructed initial background model affects the accuracy of the subsequently updated models. Specifically, in one embodiment, the initial background model is constructed as follows: obtain a history video of the tracking target and obtain consecutive historical frame images from it; cluster the pixels in each historical frame image and determine the initial background model according to the clustering results of the multiple historical frame images, as the background model corresponding to the first frame image. A detailed process, shown for example in Fig. 3, includes the following steps:
S301: obtain the history video of the tracking target and obtain consecutive historical frame images from it; assume there are M historical frame images.
S302: cluster the pixels in a historical frame image to obtain the clustering result corresponding to that image.
In one embodiment, this step can be implemented as follows: for each pixel in the historical frame image, compute the minimum distance D_min between its pixel value and the existing cluster centers; if D_min is greater than the distance threshold T_a, create a new cluster for the pixel, with the pixel's value as the center of the new cluster; if D_min < T_a, assign the pixel to the corresponding existing cluster, increment that cluster's pixel count by 1, and update the center of that cluster.
It should be noted that, for a historical frame image, initially K = 1 is the number of clusters, and one pixel is chosen at random as the initial cluster, its pixel value being the center of that cluster. For each pixel position, the minimum distance D_min between the pixel's value and the centers of the already existing clusters is determined. If D_min is greater than the prescribed threshold T_a, a new cluster is added for this pixel, K = K + 1, and the pixel's value becomes the center of the new cluster. Conversely, if D_min < T_a, the pixel's value belongs to an existing cluster, say the k-th cluster; the pixel count of the k-th cluster is then incremented by 1, and the center of the k-th cluster is updated as the mean of its pixel values.
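A minimal sketch of this nearest-center clustering rule, under the assumption that the pixel "distance" is the absolute difference of scalar gray values (the text does not fix the distance metric) and that the equal-distance case is treated like the assign case:

```python
def cluster_pixels(pixel_values, ta):
    """Cluster scalar pixel values: assign each value to its nearest
    existing cluster center unless the distance exceeds ta, in which case
    a new cluster is created. Centers are running means of assigned values.

    Returns a list of (center, count) pairs in creation order.
    """
    clusters = []  # each entry: [center, count]
    for v in pixel_values:
        if not clusters:
            clusters.append([float(v), 1])
            continue
        # minimum distance D_min to an existing center
        idx = min(range(len(clusters)), key=lambda i: abs(v - clusters[i][0]))
        d_min = abs(v - clusters[idx][0])
        if d_min > ta:
            clusters.append([float(v), 1])  # new cluster centered at v
        else:
            c, n = clusters[idx]
            clusters[idx] = [(c * n + v) / (n + 1), n + 1]  # mean update
    return [(c, n) for c, n in clusters]
```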
In one embodiment, for each historical frame image, after its pixels have been clustered, the clusters may additionally be sorted by the number of pixels in each cluster.
S303: check whether any historical frame images remain unclustered; if so, return to step S302; if not, execute the next step.
S304: from the clustering results of the M historical frame images, select the clusters whose pixel count is greater than or equal to the set threshold, and determine the initial background model from the selected clusters.
It should be understood that it is also possible to select, from the clustering results of the M historical frame images, the Q clusters with the most pixels, and to determine the initial background model from those Q clusters, where Q is greater than or equal to 1 and Q is less than M.
The above embodiments determine the initial background model from a segment of video of the tracking target in order to guarantee the accuracy of its construction. Because a video segment contains multiple consecutive frame images, the initial background model determined in this way is more accurate, which in turn reduces the influence of factors such as the environment on the tracking result.
On the other hand, referring to Fig. 4, the process of detecting the second detection image of the tracking target may include the following steps:
S401: filter the current frame image to obtain a smoothed image of the current frame image.
In one embodiment, the smoothed image is obtained by convolving the current frame image with a filter function G(x, y) of variance 1.4; the filter function G(x, y) can be the Gaussian:

G(x, y) = (1 / (2πσ²))·exp(−(x² + y²) / (2σ²))
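A hedged sketch of step S401: sample the Gaussian above into a small kernel and convolve it with the image. The 5x5 kernel size and the clamped-border handling are assumptions not specified in the text, and sigma = 1.4 follows the stated variance parameter (the common Canny convention):

```python
import math

def gaussian_kernel(size=5, sigma=1.4):
    """Sampled 2-D Gaussian G(x, y) = exp(-(x^2+y^2)/(2*sigma^2)) / (2*pi*sigma^2),
    normalized so the weights sum to 1."""
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma)) / (2 * math.pi * sigma * sigma)
          for x in range(-half, half + 1)] for y in range(-half, half + 1)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def smooth(image, kernel):
    """Convolve a 2-D gray image with the kernel; border pixels are clamped."""
    h, w = len(image), len(image[0])
    half = len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += image[yy][xx] * kernel[dy + half][dx + half]
            out[y][x] = acc
    return out
```

Because the kernel is normalized, a constant image passes through unchanged, which is a quick sanity check on the smoothing step.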
S402: determine the optimal gray-level segmentation threshold based on the smoothed image.
Here the optimal gray-level segmentation threshold satisfies the condition that, when the pixels of the current frame image are classified using it, the variance between the two resulting classification sets is maximal.
S403: determine the first threshold T_h and the second threshold T_l according to the optimal gray-level segmentation threshold, where T_h is greater than the optimal gray-level segmentation threshold and T_l is less than it.
S404: obtain the first edge-point set and the second edge-point set detected from the current frame image using T_h and T_l, respectively.
In one embodiment, edge pixels may be determined as follows: determine the gradient direction and gradient magnitude of each pixel in the current frame image, and decide whether a pixel is an edge pixel or a non-edge pixel according to its gradient direction and gradient magnitude. For example, compute the gradient magnitude A(i, j) and direction a(i, j) at pixel (i, j) of the image; if the A(i, j) value of the pixel is less than the A values of its two neighboring pixels along the gradient direction, the pixel is considered a non-edge point; otherwise it is an edge point. It will be appreciated that other edge-detection methods may also be used to determine the edge points in the image.
S405: connect the edge points of the first edge-point set; upon reaching an endpoint, look for edge points from the second edge-point set to continue the connection; when the connection is complete, the image of the tracking target in the current frame image is obtained.
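Steps S404 and S405 together amount to double-threshold edge linking (hysteresis, as in the Canny detector): strong points above T_h seed the edges, and weak points above T_l are kept only when connected to a strong point. A sketch, assuming the input is a 2-D gradient-magnitude map and that 8-neighbor connectivity is used (the text does not specify the connectivity):

```python
def hysteresis(magnitude, t_high, t_low):
    """Keep strong points (>= t_high) plus any weak points (>= t_low)
    8-connected to them, via a flood fill from the strong points.
    Returns a set of (row, col) edge coordinates."""
    h, w = len(magnitude), len(magnitude[0])
    strong = [(y, x) for y in range(h) for x in range(w)
              if magnitude[y][x] >= t_high]
    edges = set(strong)
    stack = list(strong)
    while stack:
        y, x = stack.pop()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in edges \
                        and magnitude[ny][nx] >= t_low:
                    edges.add((ny, nx))
                    stack.append((ny, nx))
    return edges
```

Weak points that are not connected to any strong point are discarded, which is what suppresses isolated noise responses while still closing gaps in real contours.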
Referring to Fig. 5, in one embodiment, the process of determining the optimal gray-level segmentation threshold for a frame image includes:
S501: determine the gray levels of the pixels in the current frame image.
Assuming the maximum gray level of the pixels in the current frame image is L, the gray-level range of the pixels in the current frame image is 0 to L.
S502: choose a gray value within the range 0 to L as the segmentation threshold.
S503: divide the pixels of the current frame image into two classification sets according to the gray-level segmentation threshold, corresponding respectively to the tracking-target classification set and the background classification set, and compute the variance between the tracking-target classification set and the background classification set.
S504: adjust the gray-level segmentation threshold within the range 0 to L and return to step S503 until the variances corresponding to the L gray-level segmentation thresholds have been obtained.
S505: determine the maximum of the variances.
S506: take the gray-level segmentation threshold corresponding to the maximum variance as the optimal gray-level segmentation threshold.
As a concrete example: suppose a frame image has N pixels and L gray levels (0, 1, ..., L−1), and the number of pixels with gray level i is n_i, so that N = Σ_{i=0}^{L−1} n_i. Normalizing the histogram of the frame image gives the probability distribution p_i = n_i / N, with Σ_{i=0}^{L−1} p_i = 1. Suppose the gray-level segmentation threshold t divides the frame image into two classification sets C_o and C_b, where C_o and C_b contain the pixels with gray levels {0, 1, ..., t} and {t+1, t+2, ..., L−1}, respectively; C_o and C_b correspond respectively to the tracking-target classification set and the background classification set. The occurrence probabilities of the classification sets C_o and C_b are, respectively:

w_o = Σ_{i=0}^{t} p_i,  w_b = Σ_{i=t+1}^{L−1} p_i = 1 − w_o

The means of the classification sets C_o and C_b are, respectively:

μ_o = (1/w_o)·Σ_{i=0}^{t} i·p_i,  μ_b = (1/w_b)·Σ_{i=t+1}^{L−1} i·p_i

The grand mean of the frame image is:

μ = w_b·μ_b + w_o·μ_o

The variance between the classification sets C_o and C_b (that is, the variance between the tracking-target classification set and the background classification set) is:

σ² = w_o·(μ_o − μ)² + w_b·(μ_b − μ)²

Varying t from 0 to L−1, the optimal gray-level segmentation threshold T is the value of t that maximizes this variance:

T = argmax_{0 ≤ t ≤ L−1} σ²(t)
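The exhaustive search of steps S502 through S506 is the classical Otsu method. A direct sketch over a gray-level histogram, where the loop mirrors varying t from 0 to L−1 (the function name and histogram input format are choices made for the example):

```python
def otsu_threshold(hist):
    """Return the threshold t maximizing the between-class variance
    w_o*(mu_o - mu)^2 + w_b*(mu_b - mu)^2 over the histogram.

    hist[i] = number of pixels with gray level i.
    """
    n = sum(hist)
    p = [h / n for h in hist]                        # normalized histogram
    mu_total = sum(i * p[i] for i in range(len(p)))  # grand mean
    best_t, best_var = 0, -1.0
    for t in range(len(p) - 1):
        w_o = sum(p[: t + 1])
        w_b = 1.0 - w_o
        if w_o == 0.0 or w_b == 0.0:
            continue  # one class empty: variance undefined, skip
        mu_o = sum(i * p[i] for i in range(t + 1)) / w_o
        mu_b = sum(i * p[i] for i in range(t + 1, len(p))) / w_b
        var = w_o * (mu_o - mu_total) ** 2 + w_b * (mu_b - mu_total) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

On a cleanly bimodal histogram the maximizing threshold falls in the valley between the two modes, which is exactly the behavior the patent relies on to separate target from background.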
Further, in one embodiment, comparing the first target detection result and the second target detection result obtained by any of the above embodiments to get the detection result of the tracking target may specifically include: determining the intersection of the first target detection result and the second target detection result, and obtaining the detection result of the tracking target from that intersection.
In this way, the tracking-target detection results obtained by the two approaches correct and compensate for each other, improving the accuracy of tracking-target recognition.
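Assuming both detection results are represented as boolean foreground masks of equal size (a representation the text does not mandate), the intersection step can be sketched as a pixelwise AND:

```python
def fuse_detections(mask_a, mask_b):
    """Fuse the two detection results by their intersection: a pixel is kept
    as tracked target only if both detectors marked it as foreground."""
    return [[a and b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mask_a, mask_b)]
```

Taking the intersection trades some recall for precision: a pixel that only one detector flags (for example, background motion or a spurious edge) is rejected.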
It should be understood that, for the method embodiments above, although the steps in the flowcharts are displayed sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated. Unless explicitly stated herein, there is no strict ordering constraint on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts of the method embodiments may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is not necessarily sequential either, and they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
Based on the same idea as the method for tracking target detection in the above embodiments, a device for tracking target detection is also provided herein.
In one embodiment, as shown in FIG. 6, the device for tracking target detection of this embodiment includes an image acquisition module 601, a first detection module 602, a second detection module 603 and a contrast detection module 604. Each module is described in detail as follows:
The image acquisition module 601 is configured to obtain the current frame image of the tracking target;
The first detection module 602 is configured to determine a background model of the tracking target in the current frame image, and to extract the tracking target from the current frame image according to the background model, as a first object detection result;
The second detection module 603 is configured to obtain the edge pixel points in the current frame image, and to determine the tracking target in the current frame image according to the edge pixel points, as a second object detection result; and
The contrast detection module 604 is configured to obtain the detection result of the tracking target according to the comparison result of the first object detection result and the second object detection result.
In one embodiment, first detection module 602 includes:
A pixel detection unit, configured to determine the pixels of the current frame image that have changed compared with the previous frame image;
A background update unit, configured to update the background model corresponding to the previous frame image according to the changed pixels, the updated background model serving as the background model of the tracking target in the current frame image.
In one embodiment, the above background update unit updates the background model corresponding to the previous frame image by the following formula:
B_k(x, y) = (1 - α)·B_(k-1)(x, y) + α·I_k(x, y)
where B_k(x, y) is a pixel in the updated background model, B_(k-1)(x, y) is a pixel in the background model corresponding to the previous frame image, I_k(x, y) is a pixel in the current frame image that has changed relative to the previous frame image, and the coefficient α, with 0 < α < 1, denotes the preset update rate of the background model.
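For illustration only, this running-average update can be sketched in NumPy as follows, applied only at the changed pixels indicated by a boolean mask; the function name and the mask representation are assumptions, not part of the patent text:

```python
import numpy as np

def update_background(prev_bg, cur_frame, changed_mask, alpha=0.05):
    """B_k = (1 - alpha) * B_{k-1} + alpha * I_k at changed pixels;
    unchanged pixels keep the previous background value."""
    bg = prev_bg.astype(float).copy()
    bg[changed_mask] = ((1 - alpha) * bg[changed_mask]
                        + alpha * cur_frame[changed_mask])
    return bg
```

A small α makes the background adapt slowly, so transient foreground objects are not absorbed into the model.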
In one embodiment, the above device for tracking target detection further includes:
An initial background module, configured to obtain a history video of the tracking target and to obtain consecutive historical frame images from the history video; to cluster the pixels in each historical frame image; and to determine an initial background model according to the clustering results of the multiple historical frame images, as the background model corresponding to the first frame image.
In one embodiment, the manner of clustering the pixels in each historical frame image includes:
For each pixel in the historical frame images, calculating the minimum distance D_min between its pixel value and the existing cluster centers. If D_min is greater than a distance threshold T_a, a new cluster is created for the pixel, and the pixel value of the pixel is taken as the center of the new cluster; if D_min < T_a, the pixel is clustered into the corresponding existing cluster, the pixel count of the existing cluster is incremented by 1, and the center of the existing cluster is updated.
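A minimal sketch of this online clustering rule for scalar gray values follows; the list-of-dicts cluster representation and the incremental-mean center update are assumptions, since the patent does not fix how the cluster center is updated:

```python
def cluster_pixel(value, clusters, t_a):
    """Assign a pixel value to the nearest existing cluster if its
    distance D_min is below the threshold T_a; otherwise create a
    new cluster centered at the value.  Returns the cluster index."""
    if clusters:
        d = [abs(value - c['center']) for c in clusters]
        k = min(range(len(d)), key=d.__getitem__)
        if d[k] < t_a:                       # D_min < T_a: join existing cluster
            c = clusters[k]
            c['count'] += 1
            # incremental mean update of the cluster center
            c['center'] += (value - c['center']) / c['count']
            return k
    # D_min >= T_a (or no clusters yet): create a new cluster
    clusters.append({'center': float(value), 'count': 1})
    return len(clusters) - 1
```

Feeding all pixels of the historical frames through this routine yields the per-cluster counts used later to select the initial background model.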
In one embodiment, the manner of determining the initial background model according to the clustering results of the multiple historical frame images may be: from the clustering results of the multiple historical frame images, selecting the clusters whose pixel count is greater than or equal to a set threshold, and determining the initial background model according to the selected clusters.
In one embodiment, the above second detection module 603 includes:
An edge point acquisition unit, configured to determine the gradient direction and gradient magnitude of the pixels in the current frame image, and to detect, according to the gradient direction and gradient magnitude of each pixel, whether the pixel is an edge pixel point or a non-edge pixel point.
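For illustration, per-pixel gradient magnitude and direction can be computed with central differences as sketched below; the patent does not specify the gradient operator, so this simple operator is an assumption:

```python
import numpy as np

def gradient_magnitude_direction(img):
    """Central-difference gradients; returns per-pixel magnitude
    and direction (radians) for a grayscale image."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # horizontal gradient
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0   # vertical gradient
    mag = np.hypot(gx, gy)            # gradient magnitude
    direction = np.arctan2(gy, gx)    # gradient direction
    return mag, direction
```

Pixels whose magnitude is a local maximum along the gradient direction would then be classified as edge pixel points.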
In one embodiment, the above second detection module 603 further includes:
A filter unit, configured to filter the current frame image before the step of obtaining the edge pixel points in the current frame image, to obtain a smoothed image of the current frame image.
In one embodiment, the above second detection module 603 further includes:
An object detection unit, configured to obtain a first edge point set and a second edge point set detected from the current frame image by a first threshold T_h and a second threshold T_l, where the first threshold T_h is greater than the second threshold T_l; and
to connect the edge points corresponding to the first edge point set, and, when an endpoint is reached, to find an edge point from the second edge point set to continue the connection; when the connection is complete, the image of the tracking target in the current frame image is obtained.
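The double-threshold edge linking described above is essentially hysteresis thresholding; the sketch below operates on a gradient-magnitude array, keeping strong edges and any weak edges connected to them. The 8-connectivity and the breadth-first traversal are implementation assumptions:

```python
import numpy as np
from collections import deque

def hysteresis_link(mag, t_h, t_l):
    """Keep strong edges (mag >= T_h) plus weak edges (mag >= T_l)
    that are 8-connected to a strong edge."""
    strong = mag >= t_h
    weak = mag >= t_l
    out = np.zeros(mag.shape, dtype=bool)
    out[strong] = True
    q = deque(zip(*np.nonzero(strong)))   # seed BFS from strong edge points
    h, w = mag.shape
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak[ny, nx] and not out[ny, nx]:
                    out[ny, nx] = True    # weak edge continues a strong contour
                    q.append((ny, nx))
    return out
```

Weak edge points that are not connected to any strong edge point are discarded, which suppresses isolated noise responses.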
In one embodiment, the above device for tracking target detection further includes:
A threshold determination module, configured to determine the gray levels of the pixels in the current frame image; to divide the pixels in the current frame image into two classification sets according to a gray-level segmentation threshold, respectively corresponding to the tracking-target classification set and the background classification set; to calculate the variance between the tracking-target classification set and the background classification set; to adjust the gray-level segmentation threshold within the range 0 to L and calculate the corresponding variance, and to determine the maximum value of the variance, where L denotes the maximum gray level of the pixels in the current frame image; and to obtain the gray-level segmentation threshold corresponding to the maximum value of the variance, as the optimum gray-level segmentation threshold.
In one embodiment, the above contrast detection module 604 is configured to determine the intersection of the first object detection result and the second object detection result, and to obtain the detection result of the tracking target from that intersection.
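When both detection results are represented as binary foreground masks, the intersection-based fusion above reduces to a logical AND; the mask representation is an assumption, as the patent does not fix the data format of the two results:

```python
import numpy as np

def fuse_detections(mask_bg, mask_edge):
    """Keep only pixels detected by both the background-model path
    and the edge-detection path (intersection of the two results)."""
    return np.logical_and(mask_bg, mask_edge)
```

Pixels flagged by only one path are dropped, so false detections of either path are suppressed by the other.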
For the specific limitations of the device for tracking target detection, reference may be made to the limitations of the method for tracking target detection above, which are not repeated here. Each module in the above device for tracking target detection may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in hardware form in, or independent of, a processor in the computer equipment, or may be stored in software form in a memory of the computer equipment, so that the processor can call and execute the operations corresponding to each module.
With the device for tracking target detection provided by the embodiment of the present invention, when the tracking target needs to be recognized, the current frame image of the tracking target is obtained in real time. On the one hand, the background model of the tracking target in the current frame image is determined, and the tracking target is extracted from the current frame image according to the background model, as the first object detection result; on the other hand, the edge pixel points in the current frame image are obtained, and the tracking target in the current frame image is determined according to the edge pixel points, as the second object detection result. Based on these two aspects, the detection result of the tracking target is obtained from the comparison result of the first object detection result and the second object detection result, so the tracking target can be recognized accurately; meanwhile, the two detections can be performed simultaneously and both have low complexity, which helps overcome the problem of lag in target tracking detection.
In addition, the logical division of the program modules in the device for tracking target detection of the above example is only illustrative; in practical applications, the above functions may be allocated to different program modules as needed, for example for the convenience of the configuration requirements of the corresponding hardware or of the software implementation; that is, the internal structure of the device for tracking target detection may be divided into different program modules to complete all or part of the functions described above.
In one embodiment, a computer equipment is provided, which may be the console of a mobile device, and whose internal structure diagram may be as shown in FIG. 7. The computer equipment includes a processor, a memory, a display screen and an input device connected by a system bus. The processor is configured to provide computing and control capabilities. The memory includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The display screen may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, or may be a physical button, a trackball, a trackpad, or the like.
Those skilled in the art will understand that the structure shown in FIG. 7 is only a block diagram of part of the structure relevant to the solution of the present application, and does not constitute a limitation on the computer equipment to which the solution of the present application is applied; a specific computer equipment may include more or fewer components than shown in the figure, or combine certain components, or have a different component arrangement.
In one embodiment, a computer equipment is provided, including a memory and a processor, the memory storing a computer program; when the processor executes the computer program, the following steps are implemented:
Obtain the current frame image of tracking target;
Determining a background model of the tracking target in the current frame image, and extracting the tracking target from the current frame image according to the background model, as a first object detection result;
Obtaining the edge pixel points in the current frame image, and determining the tracking target in the current frame image according to the edge pixel points, as a second object detection result;
Obtaining the detection result of the tracking target according to the comparison result of the first object detection result and the second object detection result.
In one embodiment, following steps are also realized when processor executes computer program:
Determining the pixels of the current frame image that have changed compared with the previous frame image; updating the background model corresponding to the previous frame image according to the changed pixels, the updated background model serving as the background model of the tracking target in the current frame image.
In one embodiment, following steps are also realized when processor executes computer program:
Obtaining a history video of the tracking target, and obtaining consecutive historical frame images from the history video;
Clustering the pixels in each historical frame image, and determining an initial background model according to the clustering results of the multiple historical frame images, as the background model corresponding to the first frame image.
In one embodiment, following steps are also realized when processor executes computer program:
For each pixel in the historical frame images, calculating the minimum distance D_min between its pixel value and the existing cluster centers;
If D_min is greater than a distance threshold T_a, creating a new cluster for the pixel, and taking the pixel value of the pixel as the center of the new cluster;
If D_min < T_a, clustering the pixel into the corresponding existing cluster, incrementing the pixel count of the existing cluster by 1, and updating the center of the existing cluster.
In one embodiment, following steps are also realized when processor executes computer program:
From the clustering results of the multiple historical frame images, selecting the clusters whose pixel count is greater than or equal to a set threshold, and determining the initial background model according to the selected clusters.
In one embodiment, following steps are also realized when processor executes computer program:
Updating the background model corresponding to the previous frame image according to the following formula:
B_k(x, y) = (1 - α)·B_(k-1)(x, y) + α·I_k(x, y)
where B_k(x, y) is a pixel in the updated background model, B_(k-1)(x, y) is a pixel in the background model corresponding to the previous frame image, I_k(x, y) is a pixel in the current frame image that has changed relative to the previous frame image, and the coefficient α, with 0 < α < 1, denotes the preset update rate of the background model.
In one embodiment, following steps are also realized when processor executes computer program:
Determining the gradient direction and gradient magnitude of the pixels in the current frame image, and detecting, according to the gradient direction and gradient magnitude of each pixel, whether the pixel is an edge pixel point or a non-edge pixel point.
In one embodiment, following steps are also realized when processor executes computer program:
Before the step of obtaining the edge pixel points in the current frame image, filtering the current frame image to obtain a smoothed image of the current frame image.
In one embodiment, following steps are also realized when processor executes computer program:
Obtaining a first edge point set and a second edge point set detected from the current frame image by a first threshold T_h and a second threshold T_l, where the first threshold T_h is greater than the second threshold T_l;
Connecting the edge points corresponding to the first edge point set, and, when an endpoint is reached, finding an edge point from the second edge point set to continue the connection; when the connection is complete, the image of the tracking target in the current frame image is obtained.
In one embodiment, following steps are also realized when processor executes computer program:
Determining the first threshold T_h and the second threshold T_l according to a predetermined optimum gray-level segmentation threshold, the first threshold T_h being greater than the optimum gray-level segmentation threshold and the second threshold T_l being less than the optimum gray-level segmentation threshold;
The optimum gray-level segmentation threshold satisfies the condition that, when the pixels in the current frame image are classified using the optimum gray-level segmentation threshold, the variance between the two resulting classification sets is maximum.
In one embodiment, following steps are also realized when processor executes computer program:
Determine the gray level of pixel in current frame image;
Dividing the pixels in the current frame image into two classification sets according to a gray-level segmentation threshold, respectively corresponding to the tracking-target classification set and the background classification set;
Calculating the variance between the tracking-target classification set and the background classification set;
Adjusting the gray-level segmentation threshold within the range 0 to L and calculating the corresponding variance, and determining the maximum value of the variance, where L denotes the maximum gray level of the pixels in the current frame image;
Obtaining the gray-level segmentation threshold corresponding to the maximum value of the variance, as the optimum gray-level segmentation threshold.
In one embodiment, following steps are also realized when processor executes computer program:
Determining the intersection of the first object detection result and the second object detection result, and obtaining the detection result of the tracking target from that intersection.
With the above computer equipment, the tracking target can be recognized accurately; meanwhile, the detection complexity of the method is low, which helps overcome the problem of lag in target tracking detection.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the following steps are implemented: obtaining the current frame image of the tracking target;
Determining a background model of the tracking target in the current frame image, and extracting the tracking target from the current frame image according to the background model, as a first object detection result;
Obtaining the edge pixel points in the current frame image, and determining the tracking target in the current frame image according to the edge pixel points, as a second object detection result;
Obtaining the detection result of the tracking target according to the comparison result of the first object detection result and the second object detection result.
In one embodiment, following steps are also realized when computer program is executed by processor:
Determining the pixels of the current frame image that have changed compared with the previous frame image; updating the background model corresponding to the previous frame image according to the changed pixels, the updated background model serving as the background model of the tracking target in the current frame image.
In one embodiment, following steps are also realized when computer program is executed by processor:
Obtaining a history video of the tracking target, and obtaining consecutive historical frame images from the history video;
Clustering the pixels in each historical frame image, and determining an initial background model according to the clustering results of the multiple historical frame images, as the background model corresponding to the first frame image.
In one embodiment, following steps are also realized when computer program is executed by processor:
For each pixel in the historical frame images, calculating the minimum distance D_min between its pixel value and the existing cluster centers;
If D_min is greater than a distance threshold T_a, creating a new cluster for the pixel, and taking the pixel value of the pixel as the center of the new cluster;
If D_min < T_a, clustering the pixel into the corresponding existing cluster, incrementing the pixel count of the existing cluster by 1, and updating the center of the existing cluster.
In one embodiment, following steps are also realized when computer program is executed by processor:
From the clustering results of the multiple historical frame images, selecting the clusters whose pixel count is greater than or equal to a set threshold, and determining the initial background model according to the selected clusters.
In one embodiment, following steps are also realized when computer program is executed by processor:
Updating the background model corresponding to the previous frame image according to the following formula:
B_k(x, y) = (1 - α)·B_(k-1)(x, y) + α·I_k(x, y)
where B_k(x, y) is a pixel in the updated background model, B_(k-1)(x, y) is a pixel in the background model corresponding to the previous frame image, I_k(x, y) is a pixel in the current frame image that has changed relative to the previous frame image, and the coefficient α, with 0 < α < 1, denotes the preset update rate of the background model.
In one embodiment, following steps are also realized when computer program is executed by processor:
Determining the gradient direction and gradient magnitude of the pixels in the current frame image, and detecting, according to the gradient direction and gradient magnitude of each pixel, whether the pixel is an edge pixel point or a non-edge pixel point.
In one embodiment, when the computer program is executed by the processor, the following steps are also implemented:
Before the step of obtaining the edge pixel points in the current frame image, filtering the current frame image to obtain a smoothed image of the current frame image.
In one embodiment, following steps are also realized when computer program is executed by processor:
Obtaining a first edge point set and a second edge point set detected from the current frame image by a first threshold T_h and a second threshold T_l, where the first threshold T_h is greater than the second threshold T_l;
Connecting the edge points corresponding to the first edge point set, and, when an endpoint is reached, finding an edge point from the second edge point set to continue the connection; when the connection is complete, the image of the tracking target in the current frame image is obtained.
In one embodiment, following steps are also realized when computer program is executed by processor:
Determining the first threshold T_h and the second threshold T_l according to a predetermined optimum gray-level segmentation threshold, the first threshold T_h being greater than the optimum gray-level segmentation threshold and the second threshold T_l being less than the optimum gray-level segmentation threshold;
The optimum gray-level segmentation threshold satisfies the condition that, when the pixels in the current frame image are classified using the optimum gray-level segmentation threshold, the variance between the two resulting classification sets is maximum.
In one embodiment, following steps are also realized when computer program is executed by processor:
Determine the gray level of pixel in current frame image;
Dividing the pixels in the current frame image into two classification sets according to a gray-level segmentation threshold, respectively corresponding to the tracking-target classification set and the background classification set;
Calculating the variance between the tracking-target classification set and the background classification set;
Adjusting the gray-level segmentation threshold within the range 0 to L and calculating the corresponding variance, and determining the maximum value of the variance, where L denotes the maximum gray level of the pixels in the current frame image;
Obtaining the gray-level segmentation threshold corresponding to the maximum value of the variance, as the optimum gray-level segmentation threshold.
In one embodiment, following steps are also realized when computer program is executed by processor:
Determining the intersection of the first object detection result and the second object detection result, and obtaining the detection result of the tracking target from that intersection.
With the above computer storage medium, the tracking target can be recognized accurately; meanwhile, the detection complexity of the method is low, which helps overcome the problem of lag in target tracking detection.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments may be completed by instructing the relevant hardware through a computer program; the computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification. Each of the above embodiments has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the relevant descriptions of the other embodiments.
The terms "comprising" and "having" and any variations thereof in the embodiments are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device containing a series of steps or (module) units is not limited to the listed steps or units, but optionally further includes steps or units not listed, or optionally further includes other steps or units inherent to the process, method, product or device.
"Multiple" mentioned in the embodiments means two or more. "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the preceding and following objects.
"First" and "second" mentioned in the embodiments are only used to distinguish similar objects and do not represent a specific ordering of the objects; it is understood that, where permitted, "first" and "second" may be interchanged in a specific order or sequence. It should be understood that the objects distinguished by "first" and "second" are interchangeable under appropriate circumstances, so that the embodiments described herein can be implemented in sequences other than those illustrated or described here.
The above embodiments express only several implementations of the present invention, but they should not be understood as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this application patent shall be determined by the appended claims.

Claims (15)

1. A method for tracking target detection, characterized by comprising:
obtaining the current frame image of the tracking target;
determining a background model of the tracking target in the current frame image, and extracting the tracking target from the current frame image according to the background model, as a first object detection result;
obtaining the edge pixel points in the current frame image, and determining the tracking target in the current frame image according to the edge pixel points, as a second object detection result;
obtaining the detection result of the tracking target according to the comparison result of the first object detection result and the second object detection result.
2. The method according to claim 1, wherein determining the background model of the tracking target in the current frame image comprises:
determining the pixels of the current frame image that have changed compared with the previous frame image;
updating the background model corresponding to the previous frame image according to the changed pixels, the updated background model serving as the background model of the tracking target in the current frame image.
3. The method according to claim 2, wherein before updating the background model corresponding to the previous frame image according to the changed pixels, the method further comprises:
obtaining a history video of the tracking target, and obtaining consecutive historical frame images from the history video;
clustering the pixels in each historical frame image, and determining an initial background model according to the clustering results of the multiple historical frame images, as the background model corresponding to the first frame image.
4. The method according to claim 2, wherein clustering the pixels in each historical frame image comprises:
for each pixel in the historical frame images, calculating the minimum distance D_min between its pixel value and the existing cluster centers;
if D_min is greater than a distance threshold T_a, creating a new cluster for the pixel, and taking the pixel value of the pixel as the center of the new cluster;
if D_min < T_a, clustering the pixel into the corresponding existing cluster, incrementing the pixel count of the corresponding existing cluster by 1, and updating the center of the corresponding existing cluster.
5. The method according to claim 3, wherein determining the initial background model according to the clustering results of the multiple historical frame images comprises:
from the clustering results of the multiple historical frame images, selecting the clusters whose pixel count is greater than or equal to a set threshold, and determining the initial background model according to the selected clusters.
6. The method according to claim 2, wherein the background model corresponding to the previous frame image is updated using the following formula:
B_k(x, y) = (1 - α)·B_(k-1)(x, y) + α·I_k(x, y)
where B_k(x, y) is a pixel in the updated background model, B_(k-1)(x, y) is a pixel in the background model corresponding to the previous frame image, I_k(x, y) is a pixel in the current frame image that has changed relative to the previous frame image, and the coefficient α, with 0 < α < 1, denotes the preset update rate of the background model.
7. The method according to any one of claims 1 to 6, wherein before the step of obtaining the edge pixel points in the current frame image, the method further comprises:
determining the gradient direction and gradient magnitude of the pixels in the current frame image, and detecting, according to the gradient direction and gradient magnitude of each pixel, whether the pixel is an edge pixel point or a non-edge pixel point.
8. The method according to claim 7, wherein before the step of obtaining the edge pixel points in the current frame image, the method further comprises:
filtering the current frame image to obtain a smoothed image of the current frame image.
9. The method according to claim 7, wherein the step of determining the tracking target in the current frame image according to the edge pixel points comprises:
obtaining a first edge point set and a second edge point set detected from the current frame image by a first threshold and a second threshold, wherein the first threshold is greater than the second threshold;
connecting the edge points corresponding to the first edge point set, and, when an endpoint is reached, finding an edge point from the second edge point set to continue the connection; when the connection is complete, the image of the tracking target in the current frame image is obtained.
10. The method according to claim 9, wherein before obtaining the first edge point set and the second edge point set detected from the current frame image by the first threshold and the second threshold, the method further comprises:
determining the first threshold and the second threshold according to a predetermined optimum gray-level segmentation threshold, the first threshold being greater than the optimum gray-level segmentation threshold and the second threshold being less than the optimum gray-level segmentation threshold;
wherein the optimum gray-level segmentation threshold satisfies the condition that, when the pixels in the current frame image are classified using the optimum gray-level segmentation threshold, the variance between the two resulting classification sets is maximum.
11. The method according to claim 10, further comprising:
determining the gray levels of the pixels in the current frame image;
dividing the pixels in the current frame image into two classification sets according to a gray-level segmentation threshold, respectively corresponding to the tracking-target classification set and the background classification set;
calculating the variance between the tracking-target classification set and the background classification set;
adjusting the gray-level segmentation threshold within the range 0 to L and calculating the corresponding variance, and determining the maximum value of the variance, wherein L denotes the maximum gray level of the pixels in the current frame image;
obtaining the gray-level segmentation threshold corresponding to the maximum value of the variance, as the optimum gray-level segmentation threshold.
12. The method according to any one of claims 1 to 6, 8, 9, 10 and 11, wherein the step of comparing the first object detection result and the second object detection result to obtain the detection result of the tracking target comprises:
determining the intersection of the first object detection result and the second object detection result, and obtaining the detection result of the tracking target from the intersection.
13. A device for tracking target detection, comprising:
an image acquisition module, configured to acquire a current frame image of a tracking target;
a first detection module, configured to determine a background model of the tracking target in the current frame image and to extract the tracking target from the current frame image according to the background model, as a first target detection result;
a second detection module, configured to acquire edge pixels in the current frame image and to determine the tracking target in the current frame image according to the edge pixels, as a second target detection result; and
a comparison detection module, configured to obtain the detection result of the tracking target according to a comparison result of the first target detection result and the second target detection result.
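The four modules of claim 13 can be read as one pipeline: acquire a frame, run a background-model detector and an edge detector, then combine the two results. A hypothetical sketch, assuming simple background subtraction, a gradient-magnitude edge test, and intersection as the comparison step — none of these concrete operators are specified by the claim itself:

```python
import numpy as np

class TrackingDetector:
    """Illustrative pipeline mirroring the four modules of claim 13."""

    def __init__(self, background: np.ndarray,
                 diff_threshold: float = 25.0,
                 edge_threshold: float = 30.0):
        self.background = background.astype(np.float64)  # background model
        self.diff_threshold = diff_threshold
        self.edge_threshold = edge_threshold

    def first_detection(self, frame: np.ndarray) -> np.ndarray:
        # First detection module: extract the target by comparing the
        # current frame against the background model.
        diff = np.abs(frame.astype(np.float64) - self.background)
        return diff > self.diff_threshold

    def second_detection(self, frame: np.ndarray) -> np.ndarray:
        # Second detection module: mark edge pixels via gradient magnitude.
        gy, gx = np.gradient(frame.astype(np.float64))
        return np.hypot(gx, gy) > self.edge_threshold

    def detect(self, frame: np.ndarray) -> np.ndarray:
        # Comparison detection module: keep pixels found by both detectors.
        return self.first_detection(frame) & self.second_detection(frame)
```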
14. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the computer program, when executed by the processor, causes the processor to implement the steps of the method according to any one of claims 1 to 12.
15. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, causes the processor to implement the steps of the method according to any one of claims 1 to 12.
CN201810516188.3A 2018-05-25 2018-05-25 Method, device, computer equipment and storage medium for tracking target detection Active CN108898057B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810516188.3A CN108898057B (en) 2018-05-25 2018-05-25 Method, device, computer equipment and storage medium for tracking target detection


Publications (2)

Publication Number Publication Date
CN108898057A true CN108898057A (en) 2018-11-27
CN108898057B CN108898057B (en) 2021-08-10

Family

ID=64343054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810516188.3A Active CN108898057B (en) 2018-05-25 2018-05-25 Method, device, computer equipment and storage medium for tracking target detection

Country Status (1)

Country Link
CN (1) CN108898057B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111316127A (en) * 2018-12-29 2020-06-19 深圳市大疆创新科技有限公司 Target track determining method, target tracking system and vehicle
CN111539986A (en) * 2020-03-25 2020-08-14 西安天和防务技术股份有限公司 Target tracking method and device, computer equipment and storage medium
CN111899285A (en) * 2020-07-08 2020-11-06 浙江大华技术股份有限公司 Method and device for determining tracking track of target object and storage medium
CN112347899A (en) * 2020-11-03 2021-02-09 广州杰赛科技股份有限公司 Moving target image extraction method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102004920A (en) * 2010-11-12 2011-04-06 浙江工商大学 Method for splitting and indexing surveillance videos
CN103149939A (en) * 2013-02-26 2013-06-12 北京航空航天大学 Dynamic target tracking and positioning method of unmanned plane based on vision
CN103793923A (en) * 2014-01-24 2014-05-14 华为技术有限公司 Method and device for acquiring moving object in image
CN105184779A (en) * 2015-08-26 2015-12-23 电子科技大学 Multi-scale vehicle tracking method based on a rapid feature pyramid
US20160366308A1 (en) * 2015-06-12 2016-12-15 Fu Tai Hua Industry (Shenzhen) Co., Ltd. Electronic device and image tracking method thereof
CN107992790A (en) * 2017-10-13 2018-05-04 西安天和防务技术股份有限公司 Long-term target tracking method and system, storage medium and electronic terminal
CN108038866A (en) * 2017-12-22 2018-05-15 湖南源信光电科技股份有限公司 Moving target detection method based on ViBe and disparity-map background difference


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHANG ZHIQIANG: "Image Edge Detection Method Applying an Otsu-Improved Canny Operator", Computer & Digital Engineering *
LI WEI: "AdaBoost Pedestrian Detection Algorithm Based on Dual-Threshold Motion Region Segmentation", Computer Application Engineering *
YANG HUIFENG: "Background Modeling Method Based on an Improved K-Means Clustering Algorithm", Journal of Electronic Measurement and Instrumentation *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111316127A (en) * 2018-12-29 2020-06-19 深圳市大疆创新科技有限公司 Target track determining method, target tracking system and vehicle
CN111539986A (en) * 2020-03-25 2020-08-14 西安天和防务技术股份有限公司 Target tracking method and device, computer equipment and storage medium
CN111539986B (en) * 2020-03-25 2024-03-22 西安天和防务技术股份有限公司 Target tracking method, device, computer equipment and storage medium
CN111899285A (en) * 2020-07-08 2020-11-06 浙江大华技术股份有限公司 Method and device for determining tracking track of target object and storage medium
CN111899285B (en) * 2020-07-08 2023-03-14 浙江大华技术股份有限公司 Method and device for determining tracking track of target object and storage medium
CN112347899A (en) * 2020-11-03 2021-02-09 广州杰赛科技股份有限公司 Moving target image extraction method, device, equipment and storage medium
CN112347899B (en) * 2020-11-03 2023-09-19 广州杰赛科技股份有限公司 Moving object image extraction method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN108898057B (en) 2021-08-10

Similar Documents

Publication Publication Date Title
CN111460926B (en) Video pedestrian detection method fusing multi-target tracking clues
CN108898057A (en) Track method, apparatus, computer equipment and the storage medium of target detection
CN106408592B (en) A kind of method for tracking target updated based on target template
CN105405154B (en) Target object tracking based on color-structure feature
CN112836640B (en) Single-camera multi-target pedestrian tracking method
CN110717414A (en) Target detection tracking method, device and equipment
CN110245662A (en) Detection model training method, device, computer equipment and storage medium
CN108446622A (en) Detecting and tracking method and device, the terminal of target object
CN109871763A (en) A kind of specific objective tracking based on YOLO
US20120114176A1 (en) Image processing apparatus and image processing method
CN111860352B (en) Multi-lens vehicle track full tracking system and method
CN110009665A (en) A kind of target detection tracking method blocked under environment
CN112991391A (en) Vehicle detection and tracking method based on radar signal and vision fusion
CN112947419B (en) Obstacle avoidance method, device and equipment
CN105608417A (en) Traffic signal lamp detection method and device
JP2007078409A (en) Object positioning system
CN106709938A (en) Multi-target tracking method based on improved TLD (tracking-learning-detected)
CN113989656A (en) Event interpretation method and device for remote sensing video, computer equipment and storage medium
WO2023197232A9 (en) Target tracking method and apparatus, electronic device, and computer readable medium
CN103607558A (en) Video monitoring system, target matching method and apparatus thereof
CN110660084A (en) Multi-target tracking method and device
CN112347967B (en) Pedestrian detection method fusing motion information in complex scene
CN106558069A (en) A kind of method for tracking target and system based under video monitoring
JP2021128761A (en) Object tracking device of road monitoring video and method
CN117152206A (en) Multi-target long-term tracking method for unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant