CN108090922A - Intelligent Target pursuit path recording method - Google Patents

Intelligent Target pursuit path recording method

Info

Publication number
CN108090922A
Authority
CN
China
Prior art keywords
target
tracking
camera
image
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201611025664.9A
Other languages
Chinese (zh)
Inventor
夏筱筠
刘飞
张乘龙
杲颖
崔冬静
王宏娟
郭建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Institute of Computing Technology of CAS
Original Assignee
Shenyang Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Institute of Computing Technology of CAS filed Critical Shenyang Institute of Computing Technology of CAS
Priority to CN201611025664.9A priority Critical patent/CN108090922A/en
Publication of CN108090922A publication Critical patent/CN108090922A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present invention relates to an intelligent target tracking trajectory recording method. Target images are acquired with multiple cameras, and the three-dimensional trajectory of the moving target is obtained through multi-view ranging. The method comprises the following steps: when each camera detects that the target in its acquired image is the tracking target, the tracking target is tracked with a KCF tracking algorithm; the three-dimensional coordinates of the target are obtained from the position of the target in the image acquired by each camera; and the three-dimensional coordinates are recorded and displayed. The invention tracks the target with an improved M-KCF high-speed tracking algorithm; the measurement precision is high and the speed is fast, making the method suitable for situations where the target moves quickly or the real-time requirements on the system are demanding.

Description

Intelligent target tracking track recording method
Technical Field
The invention relates to the field of computer-vision tracking, and in particular to an intelligent target tracking trajectory recording method which tracks a target with a tracking algorithm, then calculates the three-dimensional coordinates of the target using the parallax principle, and finally forms a trajectory.
Background
Tracking of targets is an important research area of computer vision. With the development of science and technology, target tracking and target trajectory recording have practical value in fields such as traffic monitoring, pedestrian flow analysis, astronomical observation, automatic driving and aircraft research and development. A large number of scholars at home and abroad have worked on target tracking, and common target tracking algorithms can now almost achieve real-time tracking. However, in some fields, such as aircraft development, the target moves very fast or the real-time requirements are stringent, and traditional tracking methods cannot achieve real-time tracking.
Disclosure of Invention
To address the problems that traditional tracking systems cannot track a fast-moving target or that their real-time performance is insufficient, the invention provides an improved high-speed tracking algorithm to track the target and ultimately record the target trajectory.
The technical scheme adopted by the invention for realizing the purpose is as follows: the intelligent target tracking track recording method adopts a plurality of cameras to collect target images and obtains a three-dimensional track of target motion through multi-view ranging, and comprises the following steps:
when each camera detects that the target in the collected image is the tracking target, tracking the tracking target by adopting a KCF tracking algorithm;
obtaining a three-dimensional coordinate of a target according to the position of the target in the image acquired by each camera;
and recording the three-dimensional coordinates for display.
The KCF tracking algorithm comprises the following steps:
1) using tracking targets in currently acquired images of all cameras as positive samples, and obtaining negative samples after the positive samples are circularly displaced;
2) updating the KCF tracking algorithm using the positive and negative samples to obtain the parameter w and the constant w_0 of the SVM classifier in the KCF tracking algorithm;
3) each camera inputs the next acquired frame, and the feature x of that frame is substituted into the function f(x) = w^T x + w_0, where x is the image feature and w_0 is the constant of the classification plane;
judging whether the maximum value of f(x) for each camera is greater than 0;
if the maximum value of f(x) in the current camera is greater than 0, the position of the feature x corresponding to the maximum of f(x) is the target position in the next frame acquired by the current camera; otherwise, the target is lost and tracking is stopped;
and after all cameras have been processed, returning to step 1) until the target disappears or tracking is stopped manually.
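As an illustration of the per-frame decision described in these steps, the following is a minimal sketch of how each camera's frame could be scored with the linear classifier f(x) = w^T x + w_0 and tracking stopped when the maximum response falls to 0 or below. It is not the patented implementation; the function name track_step, the extract_features helper and the camera interface are assumptions made for the example.

import numpy as np

def track_step(cameras, w, w0, extract_features):
    # One tracking iteration over all cameras.
    # cameras          -- iterable of objects whose read() returns the next frame
    # w, w0            -- SVM classification-plane coefficients and constant
    # extract_features -- hypothetical helper returning candidate feature vectors and their positions
    positions = {}
    for cam_id, cam in enumerate(cameras):
        frame = cam.read()
        feats, locs = extract_features(frame)   # candidate features x and their image positions
        scores = feats @ w + w0                  # f(x) = w^T x + w0 for every candidate
        best = int(np.argmax(scores))
        if scores[best] <= 0:                    # target lost in this camera
            return None
        positions[cam_id] = locs[best]           # target position in this camera's frame
    return positions                             # later used for three-dimensional triangulation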
The specific steps of obtaining the negative sample after the positive sample is circularly displaced are as follows:
multiplying the positive samples by a cyclic matrix formed by the identity matrix to form negative samples;
C(x) is an n × n circulant matrix obtained by cyclically shifting a 1 × n vector x:
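The circulant matrix itself is not reproduced in this text; its standard form in the correlation-filter literature, where each row is the previous row cyclically shifted by one position, is:

C(x) =
\begin{pmatrix}
x_1 & x_2 & x_3 & \cdots & x_n \\
x_n & x_1 & x_2 & \cdots & x_{n-1} \\
x_{n-1} & x_n & x_1 & \cdots & x_{n-2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_2 & x_3 & x_4 & \cdots & x_1
\end{pmatrix}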
the w is obtained by the following steps:
bringing X into the formula
w=(XXT+λI)-1XTy
Wherein X is a matrix obtained after discrete Fourier transform is carried out on the cyclic matrix; y represents a sample label, a positive sample is 1, and a negative sample is-1; λ is a parameter; and I is an identity matrix.
The obtaining of the three-dimensional coordinates of the target according to the position of the target in the image acquired by each camera specifically comprises:
obtaining the distance between the target in each frame and the set origin according to the target position in each frame of image in each camera; and obtaining the three-dimensional coordinates of each frame of the target according to the distance of each frame and the target position in each frame of image of each camera.
The invention has the following beneficial effects and advantages:
1. the invention adopts the industrial PC as the upper computer, and the measuring system has simple structure, high reliability, low cost and high performance.
2. The algorithm supports two or more cameras: the more video acquisition units there are, the higher the precision; the fewer there are, the simpler the deployment.
3. The invention tracks the target by using the improved M-KCF high-speed tracking algorithm, has high measurement precision and high speed, and is suitable for the condition that the target has higher moving speed or the requirement on the real-time performance of the system is higher.
4. The invention realizes data processing calculation through a computer control program, has high calculation precision, and accurately and visually displays the test result through the digital display unit.
5. The invention adopts an industrial PC, so the system has a simple structure, high reliability and low cost; the recording quality of the target trajectory within one hundred meters is high, the speed is fast, and the effort required from observers is greatly reduced.
Drawings
FIG. 1 is a block diagram of the overall system architecture of the present invention;
FIG. 2 is a flowchart of the identification tracking system process of the present invention;
FIG. 3 is a flowchart of a coordinate calculation recording system process according to the present invention;
FIG. 4 is a block diagram of a data management module of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
The invention discloses an intelligent target trajectory tracking and recording system. The system comprises an upper computer, a dual (multi-camera) video acquisition unit and a display module. The upper computer contains an intelligent tracking and three-dimensional trajectory recording module. The upper computer receives the digital signal of the video acquisition unit and records the trajectory of the target through the intelligent tracking and three-dimensional trajectory recording module: the intelligent tracking system identifies the target under a complex background, tracks it with a high-speed target tracking algorithm, calculates and records its trajectory, and then outputs the recorded trajectory to the display module for display. The video acquisition unit consists of two (or more) cameras; it acquires videos of the target trajectory from different angles and feeds them to the upper computer. The display module receives the target trajectory from the upper computer and generates a three-dimensional display image of it. The system thus tracks the target, calculates and records the trajectory through the tracking program, and finally displays the three-dimensional trajectory of the target motion.
An intelligent target tracking trajectory recording system is provided; its hardware part includes an upper computer, a dual (multi-camera) video acquisition unit and a display module:
the upper computer receives the digital signal of the video acquisition unit, intelligently identifies, tracks, calculates and records from the received digital signal, and finally outputs the result to the display module for display;
the signal output end of the video acquisition unit is connected with an upper computer;
and the display module receives the coordinates of the target running track output by the upper computer and outputs the coordinates to the display part.
The upper computer stores a control program, and the control program comprises a target identification module, a tracking module, a target running track three-dimensional coordinate calculation module and a coordinate track storage module.
The system executes the following measuring steps through an upper computer control program:
Step 1) starting the control program initialization module and setting the basic parameters of the whole system; these parameter settings ultimately control the operation of the whole system.
Step 2) starting the target identification module and waiting for a target to enter the dual (multi-camera) video acquisition unit. After the target enters the acquisition range, the target identification module judges whether the object is the target to be tracked.
Step 3) starting the tracking module when the judgment in step 2) is true. Once started, the tracking module runs until the target disappears or it is stopped manually.
Step 4) starting the three-dimensional coordinate calculation and recording module for the target trajectory once step 3) has finished, calculating the three-dimensional coordinates of the target trajectory and recording them on the hard disk of the upper computer.
Step 5) starting the display module to receive the three-dimensional coordinates of the target trajectory output by the upper computer and finally generating a visual three-dimensional trajectory for display to the user.
Step 6) starting the data management module, which covers the test information, the information of the measured object, the operation position, the data table, storage, recall and printing.
The control program initialization module sets the basic parameters of the whole system, including the number of cameras and their parameters, the target features to be identified, the target size, the threshold for triggering tracking, the parameters of the tracking algorithm, the parameters of the target trajectory coordinate calculation, the parameters of the coordinate fitting algorithm in the display module, and so on.
The purpose of the object recognition module is to capture objects that enter the video capture unit. When the target enters the acquisition range of the video acquisition unit, an identification algorithm in the identification module is started to judge whether the entering object is the target to be tracked or not.
The target tracking module determines the position of the target within the video acquisition unit by means of an algorithm: it receives the video data fed into the upper computer by the video acquisition unit, determines the position of the target in each frame of the video from the current frame image and the target position in the previous frame, and records the position information of the current target.
The three-dimensional coordinate calculating and recording module is used for calculating the three-dimensional coordinates of the target according to the positions of the target in the videos shot at different angles and the world coordinates of each camera and other data tracked by the tracking algorithm. The system records the three-dimensional coordinates calculated for each frame and provides the data to the display module.
The display module comprises a target track fitting display area, a target displacement track numerical value display area, system parameter setting and a data table area.
Target trajectory fitting display: displays the measured three-dimensional coordinate points of the target trajectory and the fitted curve of the target trajectory.
System parameter setting: sets the measurement mode and parameters.
Target displacement trajectory numerical display: displays and stores the measured data.
The data management module manages the measurement information: the test information includes the tester and the test time; the measured target information includes the number and type of the measured object; the operation-position function selects the path and name of the data storage file; the data table presents the measurement data as a standard data report; the storage function stores the measurement data; the recall function queries the measurement data; and the printing function prints the data report, as shown in Fig. 4.
The hardware part of the target tracking trajectory recording system comprises an upper computer, a dual (multi-camera) video acquisition unit and a display module. The upper computer receives the digital signal of the video acquisition unit, intelligently identifies, tracks, calculates and records from the received digital signal, and finally outputs the result to the display module for display; the signal output end of the video acquisition unit is connected to the upper computer; and the display module receives the coordinates of the target trajectory output by the upper computer and outputs them to the display part.
The video acquisition unit uses MV-GE30GC industrial cameras. The effective pixels of the video acquisition unit are 752 (H) × 480 (V) (about 360,000 pixels), the preset resolution reaches 640 × 480 at 76 FPS with ROI, the dynamic range reaches 55 dB, and the exposure time ranges from 0.0285 to 54.72 milliseconds; the camera supports multiple operating systems and programming environments such as C/C++, VB6, BCB, VB.net, Delphi6, C# and LabVIEW. The cameras are mounted on pan-tilt heads. When the acquisition unit consists of two cameras, the two cameras are placed in parallel and aimed in the same direction; when three or more cameras are used, the cameras are positioned at angles as close to 90 degrees from one another as possible and aimed in the same direction.
The intelligent target tracking track recording system executes the following measuring steps through an upper computer control program:
Step 1) starting the control program initialization module and setting the basic parameters of the whole system; these parameter settings ultimately control the operation of the whole system.
Step 2) starting the target identification module and waiting for a target to enter the dual (multi-camera) video acquisition unit. After the target enters the acquisition range, the target identification module receives the video transmitted by the video acquisition unit and uses a target identification algorithm to judge whether the object is the target to be tracked. When it is a target that needs to be tracked, the target information is passed to the target tracking module and the target tracking module is started.
Step 3) starting the tracking module. When the judgment in step 2) is true, the tracking module is started; it receives the images input by the video acquisition unit, calculates the position of the target in the current frame from the position of the target in the previous frame and the target features, and temporarily records this information. Once started, the tracking module runs until the target disappears from the video acquisition unit or it is stopped manually.
Step 4) starting the three-dimensional coordinate calculation and recording module for the target trajectory once step 3) has finished. The three-dimensional coordinate calculation module calculates the three-dimensional coordinates of the target trajectory from the target position information recorded by the target tracking module, and the recording module writes these three-dimensional coordinates to the hard disk of the upper computer.
Step 5) starting the display module to receive the three-dimensional coordinates of the target trajectory output by the upper computer and finally generating a visual three-dimensional trajectory for display to the user.
Step 6) starting the data management module, which covers the test information, the information of the measured object, the operation position, the data table, storage, recall and printing.
The control program initialization module is used to set the basic parameters of the whole system, including the number of cameras and their parameters, the target features to be identified, the target size, the threshold for triggering tracking, the parameters of the tracking algorithm, the parameters of the target trajectory coordinate calculation, the parameters of the coordinate fitting algorithm in the display module, and so on.
The target identification module aims to accurately determine whether an object entering the acquisition range of the video acquisition unit is the target to be tracked. To this end, the target identification module uses a Gaussian mixture model to model the background captured by the video acquisition unit. When no target is present, the pictures taken by the video acquisition unit are used as background data. Once the background picture is obtained, each frame acquired by the video acquisition unit is subtracted from the background image, and the positions where the image pixels change significantly are the positions of objects entering the video acquisition unit. In this way, the target identification module obtains the position of any object entering the acquisition range of the video acquisition unit.
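A minimal sketch of this background-subtraction step, assuming the OpenCV library (cv2) is available and using its built-in Gaussian-mixture background subtractor rather than a custom model; the helper name detect_entering_objects and the min_area noise threshold are illustrative assumptions, not part of the patent:

import cv2

def detect_entering_objects(capture, min_area=200):
    # Model the background with a Gaussian mixture and report regions that change.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # foreground mask: pixels that differ from the background
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # Bounding boxes of candidate objects entering the field of view
        boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
        yield frame, boxes

Each yielded bounding box can then be passed to the SVM classifier described below to decide whether the object is the target to be tracked.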
A trained SVM classifier is then used to judge whether an object entering the acquisition range of the video acquisition unit is the target to be tracked. SVM classification finds a classification plane f(x) = w^T x + w_0 that maximizes the distance between the closest points of the two classes; w, w_0 and x denote the coefficients, the constant and the variable of the classification-plane equation, respectively.
Let r be the distance from the closest point of each class to the classification plane; the goal is to make this distance as large as possible. We then have:
x_p denotes the projection of the point x onto the classification plane and r denotes the distance of x from the classification plane. After substitution, this gives:
The distance between the closest points of the two classes is:
f(x_+) and f(x_-) denote the values obtained for the positive and negative samples after this operation; x_+ and x_- denote the feature vectors of the positive and negative samples; y_i denotes the label of sample i, 1 for a positive sample and -1 for a negative sample; and w is the classification-plane coefficient. The hyperplane that maximizes the margin is therefore:
which is equivalent to:
This optimization problem is a quadratic programming problem (the objective function is quadratic and the constraints are linear), a standard optimization problem that can be rewritten equivalently in dual form and solved with Lagrangian functions. The feature vector x of the object detected by the target detection module is thus used as the input of the SVM classifier, and when f(x) = w^T x + w_0 > 0 the object is the target to be tracked.
The improved tracking algorithm is an improved KCF tracking algorithm. The KCF algorithm utilizes the properties of a circulant matrix and an SVM classifier in the tracking process, and utilizes a basic sample of a target as a positive sample and a sample after the circular displacement of the basic sample as a negative sample when the classifier is trained. Therefore, only the basic sample needs to be calculated, and the speed is high.
C(x) is an n × n matrix obtained by cyclically shifting a 1 × n vector x; then:
all circulant matrices can be diagonalized, and can be represented by the discrete fourier transform of vector x:
F denotes a constant transformation matrix (the DFT matrix), and x̂ = ℱ(x) denotes the discrete Fourier transform of x:
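The diagonalization referred to above is not reproduced in this text; its standard form in the correlation-filter literature, in this notation, is:

X = C(x) = F\,\mathrm{diag}(\hat{x})\,F^{H}, \qquad \hat{x} = \mathcal{F}(x)

where F^{H} is the conjugate (Hermitian) transpose of F.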
the objective of the SVM classifier is to minimize:
the solution is as follows:
w=(XXT+λI)-1XTy
X is the matrix obtained after the discrete Fourier transform is applied to the circulant matrix; y denotes the sample labels, with 1 for positive samples and -1 for negative samples; λ is a parameter controlling the overfitting of w; and I is the identity matrix.
Substituting the discrete Fourier transform of the circulant matrix back in, the solution w of the SVM classifier can also be written as:
where λ is the parameter controlling overfitting, which may range from 0.0001 to 10000 and is taken as 1 in this example; x̂ and ŷ are the discrete Fourier transforms of x and y, and the products are taken element-wise (position by position).
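For reference, the minimization objective mentioned above and the Fourier-domain form of its solution, in the standard ridge-regression formulation used by KCF (reconstructed here because the original equations are not reproduced in this text), are:

\min_{w} \sum_i \bigl(w^{T} x_i - y_i\bigr)^2 + \lambda \|w\|^2

\hat{w} = \frac{\hat{x}^{*} \odot \hat{y}}{\hat{x}^{*} \odot \hat{x} + \lambda}

where \hat{x}^{*} is the complex conjugate of \hat{x} and \odot denotes element-wise multiplication.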
The SVM can map an input vector x into a feature space φ(x) by means of a kernel function. The solution w of the SVM classifier can then be written as a linear combination of the mapped samples, that is:
where k̂^{xx} denotes the discrete Fourier transform of the first row of the kernel matrix K_ij, i denotes the sample index, and α_i are the coefficients that express the solution w as a linear combination of the inputs.
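The kernelized solution referred to above is not reproduced in this text; in the standard KCF notation it reads:

w = \sum_i \alpha_i\,\varphi(x_i), \qquad \hat{\alpha} = \frac{\hat{y}}{\hat{k}^{xx} + \lambda}

where \hat{k}^{xx} is the discrete Fourier transform of the first row of the kernel matrix K_{ij} = \kappa(x_i, x_j).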
The target tracking module uses the KCF algorithm with the target position provided by the target recognition module as its parameter, and extracts the target features as the input of the SVM classifier. After the SVM classifier has been trained, the picture of the current frame transmitted by the video acquisition unit is read, the features of the current frame are extracted, and the feature vector is fed into the classifier to obtain the response, where z_i is a candidate sample obtained from the new frame, x is the target model learned in the previous frame, and the position where the response y is largest is the position of the target in the new frame. κ denotes a kernel function that preserves the circulant structure (e.g., a radial basis function kernel, or linear and polynomial kernels).
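The following is a minimal one-dimensional sketch of this train-then-detect cycle with a linear kernel, written in NumPy. It is an illustration only, not the patented M-KCF implementation: a practical tracker works on two-dimensional feature maps with 2-D FFTs, a cosine window and online model interpolation, and the function names here are assumptions.

import numpy as np

def train_kcf(x, y, lam=1e-4):
    # x: 1-D target template (feature vector); y: desired response (e.g., a Gaussian peak)
    xf = np.fft.fft(x)
    kf = xf * np.conj(xf) / x.size        # linear-kernel auto-correlation k^{xx} in the Fourier domain
    alphaf = np.fft.fft(y) / (kf + lam)   # dual coefficients: alpha^ = y^ / (k^{xx} + lambda)
    return xf, alphaf

def detect_kcf(xf, alphaf, z):
    # z: candidate patch from the new frame, same length as the template
    zf = np.fft.fft(z)
    kzf = zf * np.conj(xf) / z.size       # cross-correlation kernel k^{xz}
    response = np.real(np.fft.ifft(kzf * alphaf))
    return int(np.argmax(response)), float(response.max())  # shift of the peak and its score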
The improved KCF algorithm in the target tracking module uses the target features from all of the two (or more) cameras as positive samples and the cyclically shifted versions of all positive samples as negative samples, and trains a single shared SVM classifier. This makes the overall system simpler in structure and the tracking across multiple cameras more robust. After the tracking algorithm determines the position of the target in a new frame, the target tracking module records the target position data for each frame and passes it to the trajectory calculation and recording module.
The target track calculating and recording module receives data transmitted by the target tracking module, and calculates the three-dimensional coordinates of the target in the current frame by using the target position information transmitted by the tracking module and the angle difference shot by each camera.
Let P(X_w, Y_w, Z_w) be a point in space; its projections onto the image (pixel) coordinate systems of the two cameras are P_1(u_1, v_1) and P_2(u_2, v_2). Suppose cameras C_1 and C_2 have been calibrated and M_1 and M_2 are the projection matrices of the left and right cameras respectively, where M_1 is as follows:
The transformation from three-dimensional coordinates to imaging pixel coordinates can then be written in matrix form:
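The matrix form referred to above is not reproduced in this text; it is the standard pinhole projection, written here for camera 1:

s_1 \begin{pmatrix} u_1 \\ v_1 \\ 1 \end{pmatrix}
= M_1 \begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix},
\qquad
M_1 = \begin{pmatrix}
m_{11} & m_{12} & m_{13} & m_{14} \\
m_{21} & m_{22} & m_{23} & m_{24} \\
m_{31} & m_{32} & m_{33} & m_{34}
\end{pmatrix}

where s_1 is a scale factor and M_1 is the 3 × 4 projection matrix obtained from calibration; the same relation holds for camera 2 with M_2.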
Here the matrix M_1 consists of known quantities obtained from camera calibration. Given the corresponding points P_1(u_1, v_1) and P_2(u_2, v_2) observed simultaneously in the images taken by the different cameras, the relationship between a point P(X_w, Y_w, Z_w) in three-dimensional space and its projections P_1(u_1, v_1), P_2(u_2, v_2) in the two pictures is:
the method is simplified as follows:
MsX=N
Because the rays through the two corresponding image points and the respective camera optical centers intersect at the same point P in space, the world coordinates of P can be obtained by solving the over-determined system M_s X = N (for example, in a least-squares sense).
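A least-squares sketch of this triangulation step is given below. It is illustrative only: the function name triangulate is an assumption, M1 and M2 are the calibrated 3 × 4 projection matrices, and p1, p2 are the pixel coordinates of the target in the two cameras.

import numpy as np

def triangulate(M1, M2, p1, p2):
    u1, v1 = p1
    u2, v2 = p2
    # Each view contributes two linear equations in (Xw, Yw, Zw): Ms X = N
    A = np.array([
        u1 * M1[2, :3] - M1[0, :3],
        v1 * M1[2, :3] - M1[1, :3],
        u2 * M2[2, :3] - M2[0, :3],
        v2 * M2[2, :3] - M2[1, :3],
    ])
    b = np.array([
        M1[0, 3] - u1 * M1[2, 3],
        M1[1, 3] - v1 * M1[2, 3],
        M2[0, 3] - u2 * M2[2, 3],
        M2[1, 3] - v2 * M2[2, 3],
    ])
    X, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares solution of the over-determined system
    return X  # (Xw, Yw, Zw)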
And after the target track calculation module finishes calculating, the recording function records the calculated three-dimensional coordinates and the position of the target in the image into a hard disk of an upper computer. And when the target track calculation and recording module finishes running, starting the display module.
The display module comprises a target trajectory fitting display area, a target displacement trajectory numerical display area, a system parameter setting area and a data table area. When the display module starts, it first reads the three-dimensional coordinate data recorded by the target tracking module, the target trajectory calculation module and the recording module. It then directly displays the discrete points of the target trajectory in the target trajectory fitting display area and the target displacement numerical display area. Next, the display module calls its target trajectory fitting function and fits a smooth target trajectory to the read discrete trajectory points using a fuzzy-recognition nonlinear fitting method. The fuzzy recognition uses a T-S model with 8 membership functions.
Display module system parameter setting area: the interactive interface of the system, used for setting information such as the measurement mode and parameters.
The data management module manages the measurement information: the test information includes the tester and the test time; the measured target information includes the measured object number; the operation-position function selects the path and name of the data storage file; the data table presents the measurement data as a standard data report; the storage function stores the measurement data; the recall function queries the measurement data; and the printing function prints the data report.
As shown in Figs. 1-3, the hardware part of the intelligent target tracking trajectory recording system comprises an upper computer, a dual (multi-camera) video acquisition unit and a display module. The upper computer receives the digital signal of the video acquisition unit, performs intelligent identification, tracking, trajectory calculation and recording on the received digital signal, and finally outputs the result to the display module for display; the signal output end of the video acquisition unit is connected to the upper computer; and the display module receives the coordinates of the target trajectory output by the upper computer and outputs them to the display part.
The upper computer is an ordinary portable PC with a dual-core processor, a 2.8 GHz clock frequency, 8 GB of memory and a discrete graphics card with 512 MB of video memory. This facilitates the hardware deployment of the whole system, improves its flexibility, and provides the necessary conditions for the stable operation of the intelligent target trajectory recording system. The upper computer stores the control program.

Claims (5)

1. The intelligent target tracking track recording method is characterized in that a plurality of cameras are adopted to collect target images, and a three-dimensional track of target motion is obtained through multi-view ranging, and the method comprises the following steps:
when each camera detects that the target in the collected image is the tracking target, tracking the tracking target by adopting a KCF tracking algorithm;
obtaining a three-dimensional coordinate of a target according to the position of the target in the image acquired by each camera;
and recording the three-dimensional coordinates for display.
2. The intelligent target tracking trajectory recording method according to claim 1, characterized in that the KCF tracking algorithm comprises the steps of:
1) using tracking targets in currently acquired images of all cameras as positive samples, and obtaining negative samples after the positive samples are circularly displaced;
2) updating the KCF tracking algorithm using the positive and negative samples to obtain the parameter w and the constant w_0 of the SVM classifier in the KCF tracking algorithm;
3) each camera inputs the next acquired frame, and the feature x of that frame is substituted into the function f(x) = w^T x + w_0, where x is the image feature and w_0 is the constant of the classification plane;
judging whether the maximum value of f(x) for each camera is greater than 0;
if the maximum value of f(x) in the current camera is greater than 0, the position of the feature x corresponding to the maximum of f(x) is the target position in the next frame acquired by the current camera; otherwise, the target is lost and tracking is stopped;
and after all cameras have been processed, returning to step 1) until the target disappears or tracking is stopped manually.
3. The intelligent target tracking trajectory recording method according to claim 2, wherein the obtaining of the negative sample after the cyclic displacement of the positive sample specifically comprises:
multiplying the positive samples by a cyclic matrix formed by the identity matrix to form negative samples;
C(x) is an n × n circulant matrix obtained by cyclically shifting a 1 × n vector x:
4. the intelligent target tracking trajectory recording method of claim 2, wherein w is obtained by:
substituting X into the formula
w = (XX^T + λI)^{-1} X^T y
where X is the matrix obtained by applying the discrete Fourier transform to the circulant matrix; y denotes the sample labels, with 1 for positive samples and -1 for negative samples; λ is a regularization parameter; and I is the identity matrix.
5. The intelligent target tracking trajectory recording method according to claim 1, wherein the obtaining of the three-dimensional coordinates of the target according to the position of the target in the image acquired by each camera specifically comprises:
obtaining the distance between the target in each frame and the set origin according to the target position in each frame of image in each camera; and obtaining the three-dimensional coordinates of each frame of the target according to the distance of each frame and the target position in each frame of image of each camera.
CN201611025664.9A 2016-11-21 2016-11-21 Intelligent Target pursuit path recording method Withdrawn CN108090922A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611025664.9A CN108090922A (en) 2016-11-21 2016-11-21 Intelligent Target pursuit path recording method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611025664.9A CN108090922A (en) 2016-11-21 2016-11-21 Intelligent Target pursuit path recording method

Publications (1)

Publication Number Publication Date
CN108090922A true CN108090922A (en) 2018-05-29

Family

ID=62169541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611025664.9A Withdrawn CN108090922A (en) 2016-11-21 2016-11-21 Intelligent Target pursuit path recording method

Country Status (1)

Country Link
CN (1) CN108090922A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108876821A (en) * 2018-07-05 2018-11-23 北京云视万维科技有限公司 Across camera lens multi-object tracking method and system
CN109407117A (en) * 2018-09-07 2019-03-01 安徽大禹安全技术有限公司 Earthquake emergency communication management system based on big-dipper satellite
CN109558877A (en) * 2018-10-19 2019-04-02 复旦大学 Naval target track algorithm based on KCF
CN110706291A (en) * 2019-09-26 2020-01-17 哈尔滨工程大学 Visual measurement method suitable for three-dimensional trajectory of moving object in pool experiment
CN110827327A (en) * 2018-08-13 2020-02-21 中国科学院长春光学精密机械与物理研究所 Long-term target tracking method based on fusion
CN111696138A (en) * 2020-06-17 2020-09-22 北京大学深圳研究生院 System for automatically collecting, tracking and analyzing biological behaviors

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015142923A1 (en) * 2014-03-17 2015-09-24 Carnegie Mellon University Methods and systems for disease classification
CN105550670A (en) * 2016-01-27 2016-05-04 兰州理工大学 Target object dynamic tracking and measurement positioning method
CN106023248A (en) * 2016-05-13 2016-10-12 上海宝宏软件有限公司 Real-time video tracking method
CN106127137A (en) * 2016-06-21 2016-11-16 长安大学 A kind of target detection recognizer based on 3D trajectory analysis

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015142923A1 (en) * 2014-03-17 2015-09-24 Carnegie Mellon University Methods and systems for disease classification
CN105550670A (en) * 2016-01-27 2016-05-04 兰州理工大学 Target object dynamic tracking and measurement positioning method
CN106023248A (en) * 2016-05-13 2016-10-12 上海宝宏软件有限公司 Real-time video tracking method
CN106127137A (en) * 2016-06-21 2016-11-16 长安大学 A kind of target detection recognizer based on 3D trajectory analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JOÃO F. HENRIQUES et al.: "High-Speed Tracking with Kernelized Correlation Filters", IEEE Transactions on Pattern Analysis and Machine Intelligence *
YANG Dedong et al.: "Long-term target tracking using kernelized correlation filters", Optics and Precision Engineering *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108876821A (en) * 2018-07-05 2018-11-23 北京云视万维科技有限公司 Across camera lens multi-object tracking method and system
CN108876821B (en) * 2018-07-05 2019-06-07 北京云视万维科技有限公司 Across camera lens multi-object tracking method and system
CN110827327A (en) * 2018-08-13 2020-02-21 中国科学院长春光学精密机械与物理研究所 Long-term target tracking method based on fusion
CN110827327B (en) * 2018-08-13 2023-04-18 中国科学院长春光学精密机械与物理研究所 Fusion-based long-term target tracking method
CN109407117A (en) * 2018-09-07 2019-03-01 安徽大禹安全技术有限公司 Earthquake emergency communication management system based on big-dipper satellite
CN109558877A (en) * 2018-10-19 2019-04-02 复旦大学 Naval target track algorithm based on KCF
CN110706291A (en) * 2019-09-26 2020-01-17 哈尔滨工程大学 Visual measurement method suitable for three-dimensional trajectory of moving object in pool experiment
CN111696138A (en) * 2020-06-17 2020-09-22 北京大学深圳研究生院 System for automatically collecting, tracking and analyzing biological behaviors
CN111696138B (en) * 2020-06-17 2023-06-30 北京大学深圳研究生院 System for automatically collecting, tracking and analyzing biological behaviors

Similar Documents

Publication Publication Date Title
CN108090922A (en) Intelligent Target pursuit path recording method
Iacono et al. Towards event-driven object detection with off-the-shelf deep learning
CN109697726A (en) A kind of end-to-end target method for estimating based on event camera
CN107748860A (en) Method for tracking target, device, unmanned plane and the storage medium of unmanned plane
CN101826155B (en) Method for identifying act of shooting based on Haar characteristic and dynamic time sequence matching
CN109445453A (en) A kind of unmanned plane Real Time Compression tracking based on OpenCV
CN106575363A (en) Method for tracking keypoints in scene
CN101572770B (en) Method for testing motion available for real-time monitoring and device thereof
CN111399634B (en) Method and device for recognizing gesture-guided object
Raktrakulthum et al. Vehicle classification in congested traffic based on 3D point cloud using SVM and KNN
CN114792417B (en) Model training method, image recognition method, device, equipment and storage medium
CN113947770B (en) Method for identifying object placed in different areas of intelligent cabinet
CN108564043B (en) Human body behavior recognition method based on space-time distribution diagram
CN106934339B (en) Target tracking and tracking target identification feature extraction method and device
CN110930436B (en) Target tracking method and device
Paletta et al. FACTS-a computer vision system for 3D recovery and semantic mapping of human factors
CN116862832A (en) Three-dimensional live-action model-based operator positioning method
CN109492513B (en) Face space duplication eliminating method for light field monitoring
Huang et al. Motion characteristics estimation of animals in video surveillance
CN111126279B (en) Gesture interaction method and gesture interaction device
Wang et al. Research and Design of Human Behavior Recognition Method in Industrial Production Based on Depth Image
CN112700494A (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
Chen et al. Event data association via robust model fitting for event-based object tracking
Troutman et al. Towards fast and automatic map initialization for monocular SLAM systems
Xu et al. Research on monocular vehicle detection and 3D coordinate calculation based on YOLOv7

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20180529

WW01 Invention patent application withdrawn after publication